Crit Care Med. 2016 May;44(5):948-53. doi: 10.1097/CCM.0000000000001604.

Validity and Feasibility Evidence of Objective Structured Clinical Examination to Assess Competencies of Pediatric Critical Care Trainees.

Briseida Mema, Yoon Soo Park, Afrothite Kotsakis

Affiliations

  1. Department of Critical Care Medicine, Hospital for Sick Children, Toronto, ON, Canada.
  2. Department of Paediatrics, Faculty of Medicine, University of Toronto, Toronto, ON, Canada.
  3. Department of Medical Education, University of Illinois at Chicago, Chicago, IL.

PMID: 26862709 DOI: 10.1097/CCM.0000000000001604

Abstract

OBJECTIVE: The purpose of this study was to provide validity and feasibility evidence for the use of an objective structured clinical examination in the assessment of pediatric critical care medicine trainees.

DESIGN: This was a validation study. Validity evidence was based on Messick's framework.

SETTING: A tertiary, university-affiliated academic center.

SUBJECTS: Seventeen pediatric critical care medicine fellows were recruited during the 2012 and 2013 academic years.

INTERVENTIONS: None. All subjects completed an objective structured clinical examination assessment.

MEASUREMENTS AND MAIN RESULTS: Seventeen trainees were assessed. Simulation scenarios were developed for content validity by pediatric critical care medicine and education experts using CanMEDS competencies. Scenarios were piloted before the study. Each scenario was evaluated by two interprofessional raters. Inter-rater agreement, measured using intraclass correlations, was 0.91 (SE = 0.09) across stations. Generalizability theory was used to evaluate internal structure and reliability. Reliability was moderate (G-coefficient = 0.67, Φ-coefficient = 0.52). The greatest source of variability was the participant-by-station interaction (40.6% of total variance). Pearson correlation coefficients were used to evaluate the relationship of objective structured clinical examination scores with each of the traditional assessment instruments: multisource feedback, in-training evaluation report, short-answer questions, and the Multidisciplinary Critical Care Knowledge Assessment Program. Performance on the objective structured clinical examination correlated with performance on the Multidisciplinary Critical Care Knowledge Assessment Program (r = 0.52; p = 0.032) and multisource feedback (r = 0.59; p = 0.017), but not with overall performance on the in-training evaluation report (r = 0.37; p = 0.143) or short-answer questions (r = 0.08; p = 0.767). Consequences were not assessed.
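For readers unfamiliar with generalizability theory, the reported coefficients follow the standard framework. As a generic sketch, in the simplest person-by-station (p × s) design the relative (G) and absolute (Φ) coefficients are formed from estimated variance components; the study's actual design additionally crossed stations with raters, so these formulas illustrate the logic rather than reproduce the authors' exact model:

    G = \frac{\sigma^2_p}{\sigma^2_p + \sigma^2_{ps,e}/n_s}, \qquad
    \Phi = \frac{\sigma^2_p}{\sigma^2_p + (\sigma^2_s + \sigma^2_{ps,e})/n_s}

where \sigma^2_p is the variance attributable to participants (true-score variance), \sigma^2_s to stations, \sigma^2_{ps,e} to the participant-by-station interaction confounded with residual error, and n_s is the number of stations. Because Φ also counts station variance as error, it is never larger than G, consistent with the reported values (0.52 vs. 0.67).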

CONCLUSION: The validity and feasibility evidence in this study indicates that objective structured clinical examination scores provide a valid and feasible means of assessing the CanMEDS competencies required for independent practice in pediatric critical care medicine.
