Purpose: This study primarily examined the reliability of a rubric used in a post-clinical-training objective structured clinical examination (Post-OSCE).
Methods: The analysis covered rubric evaluations of 25 students in a speech-language-hearing therapist training course who had completed clinical training and taken the Post-OSCE. To verify the inter-rater reliability of the rubric, the agreement rate and weighted kappa coefficient (κ coefficient) were calculated for each rater pair, where a rater pair is the combination of two raters evaluating the same student.
Results: Across rater pairs, the item averages for adult language disorders, childhood language disorders, and dysphagia showed agreement rates of 46.7–75.0% and weighted κ coefficients of 0.40–0.64. The overall item average was an agreement rate of 63.2% and a weighted κ coefficient of 0.53. Both the domain-level and overall item averages indicated moderate agreement.
Discussion: For evaluation items with low agreement rates or κ coefficients between rater pairs, confirming and calibrating the evaluation criteria in advance appears necessary to improve reliability.
This study aimed to examine the reliability of a rubric used in a post-clinical-training objective structured clinical examination (Post-OSCE) in a speech-language-hearing therapist (SLHT) training course. Inter-rater reliability was measured using the agreement rate and weighted kappa coefficient (κ coefficient) for each pair of raters in the clinical areas of adult language disorders, childhood language disorders, and dysphagia. Data from 25 students in the SLHT training course were analyzed. The agreement rates for the three clinical areas ranged from 46.7% to 75.0%, with weighted κ coefficients between 0.40 and 0.64. The overall average agreement rate was 63.2%, and the average weighted κ coefficient was 0.53. Both the agreement rates for each rater pair and the overall averages demonstrated moderate agreement. These findings indicate that it is critical for paired raters to confirm the evaluation criteria in advance for items that received low agreement rates and κ coefficients.
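The two reliability statistics reported above (percent agreement and the weighted κ coefficient) can be computed from paired ordinal ratings. The following is a minimal sketch, assuming a hypothetical 4-level rubric scale and linear disagreement weights; the study does not specify the weighting scheme or scale, so the data and parameters here are illustrative only.

```python
def weighted_kappa(r1, r2, categories, weights="linear"):
    """Weighted Cohen's kappa for two raters over ordinal categories.

    r1, r2: equal-length lists of ratings drawn from `categories`.
    weights: 'linear' (|i-j|) or 'quadratic' ((i-j)^2) disagreement weights.
    """
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Observed joint proportion matrix: obs[i][j] = share of cases where
    # rater 1 gave category i and rater 2 gave category j.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1.0 / n
    # Marginal proportions for each rater (row and column sums).
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    power = 1 if weights == "linear" else 2
    w = [[abs(i - j) ** power for j in range(k)] for i in range(k)]
    # Observed vs. chance-expected weighted disagreement.
    num = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    den = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1.0 - num / den

# Hypothetical rubric scores (scale 1-4) from one rater pair on 10 items.
a = [3, 2, 4, 3, 1, 2, 3, 4, 2, 3]
b = [3, 3, 4, 2, 1, 2, 3, 3, 2, 4]
agreement = sum(x == y for x, y in zip(a, b)) / len(a)  # exact-match rate
kappa = weighted_kappa(a, b, categories=[1, 2, 3, 4])
```

With these illustrative data the pair agrees exactly on 6 of 10 items (agreement rate 0.60), and the weighted κ falls in the moderate range, mirroring the kind of values the study reports.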

Copyright © 2025, Japanese Association of Speech-Language-Hearing Therapists. All rights reserved.