Abstract
Purpose: This study investigates the impact of incorporating observer-reported workload into workplace-based assessment (WBA) scores on (1) the psychometric characteristics of WBA scores and (2) the measurement of changes in performance over time, comparing workload-unadjusted with workload-adjusted scores.
Method: Structured clinical observations and multisource feedback instruments were used to collect WBA data from first-year pediatrics residents at 10 residency programs between July 2016 and June 2017. Observers completed items in 8 subcompetencies associated with the Pediatrics Milestones. Faculty and resident observers assessed workload using a sliding scale ranging from low to high; all item scores were rescaled to a 1-5 scale to facilitate analysis and interpretation. Workload-adjusted WBA scores were calculated at the item level using three different approaches and aggregated for analysis at the competency level. Mixed-effects regression models were used to estimate variance components. Longitudinal growth curve analyses examined patterns of developmental score change over time.
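To make the analytic pipeline concrete, the sketch below works through one plausible version of it in Python on simulated data: rescaling item scores to the 1-5 scale, applying a workload adjustment, and fitting a random-intercept model to estimate variance components. The abstract does not name the three adjustment approaches used in the study, so residualizing scores on workload is shown purely as an illustrative assumption; all variable names and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "resident": rng.integers(0, 50, n),   # hypothetical resident IDs
    "raw_item": rng.uniform(0, 100, n),   # raw item score on an arbitrary scale
    "workload": rng.uniform(0, 1, n),     # observer-reported workload, low to high
})

def rescale_1_to_5(x):
    """Linearly map a score vector onto the 1-5 scale used in the study."""
    return 1 + 4 * (x - x.min()) / (x.max() - x.min())

df["score"] = rescale_1_to_5(df["raw_item"])

# One illustrative workload adjustment: remove the linear effect of
# observer-reported workload from the rescaled score. (An assumption for
# illustration only; the abstract does not specify the three approaches.)
ols_fit = smf.ols("score ~ workload", data=df).fit()
df["score_adj"] = ols_fit.resid + df["score"].mean()

# Random-intercept mixed-effects model: the resident variance component
# relative to the residual variance underlies single-observation reliability.
mm = smf.mixedlm("score_adj ~ 1", df, groups=df["resident"]).fit()
var_resident = float(mm.cov_re.iloc[0, 0])
var_residual = mm.scale
print("single-observation reliability:",
      var_resident / (var_resident + var_residual))
```

In this framing, "better discrimination" corresponds to a larger resident variance component, and "reduced measurement error" to a smaller residual variance, after adjustment.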
Results: On average, participating residents (n = 252) were assessed 5.32 times (standard deviation = 3.79) by different raters during the data collection period. Adjusting for workload yielded better discrimination of learner performance and higher reliability, reducing measurement error by 28%. Reliability projections indicated that up to twice as many raters would be needed when workload-unadjusted scores were used. Longitudinal analysis showed an increase in scores over time, with a significant interaction between workload and time; workload also increased significantly over time.
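The rater projection can be illustrated with the Spearman-Brown prophecy formula, which gives the reliability of a mean of k raters as k·r / (1 + (k-1)·r) for single-rater reliability r. The sketch below inverts that formula to find the smallest k reaching a target reliability; the single-rater reliabilities and the 0.70 target are hypothetical values chosen for illustration, not figures reported in the paper.

```python
import math

def raters_needed(target, single_rater_rel):
    """Smallest number of raters whose averaged score reaches the target
    reliability, by inverting the Spearman-Brown prophecy formula."""
    k = target * (1 - single_rater_rel) / (single_rater_rel * (1 - target))
    return math.ceil(k)

# Hypothetical single-rater reliabilities for unadjusted vs. adjusted scores.
for label, r in [("unadjusted", 0.20), ("workload-adjusted", 0.30)]:
    print(label, raters_needed(0.70, r))
```

With these example values, unadjusted scores require roughly twice as many raters (10 vs. 6) to reach the same target, mirroring the pattern the abstract describes.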
Conclusions: Incorporating a measure of observer-reported workload could improve the measurement properties and interpretability of WBA scores.
| Original language | American English |
|---|---|
| Pages (from-to) | S21-S29 |
| Journal | Academic Medicine: Journal of the Association of American Medical Colleges |
| Volume | 93 |
| Issue number | 11S |
| DOIs | |
| State | Published - Nov 2018 |
| Event | 57th Annual Research in Medical Education, Austin, United States, Nov 2-6, 2018 (Conference number: 57) |
Keywords
- Clinical Competence*
- Educational Measurement
- Humans
- Internship and Residency*
- Pediatrics/education*
- Psychometrics
- Workload*
Disciplines
- Pediatrics
- Medical Education