UNIT 5 – Different aspects of assessment in SBME

Simulation-based medical education (SBME) has burgeoned into a cornerstone for training healthcare professionals, from students to seasoned practitioners. Through high-fidelity manikins, computer-based simulations, and virtual reality, learners engage in a safe and controlled environment that fosters skill acquisition, decision-making, and reflective practice, without compromising patient care [1]. Yet, for this educational approach to be efficacious, its assessment methodologies must be both rigorous and robust.

In this text we provide the reader with a list of topics and suggested readings on the evaluative frameworks underpinning SBME, emphasizing the methodologies used to ascertain the validity, reliability, and educational impact of these simulation modalities.

 

Evaluation Methods in Simulation-Based Medical Education

Direct Observation – One of the most traditional methods, where an instructor observes a learner’s performance during a simulation and provides feedback [2].

Checklists – Standardized lists of actions or considerations specific to a scenario or skill, allowing for consistent and objective evaluation [3].

Global Rating Scales – General assessments of performance, often based on broader categories such as “communication” or “clinical reasoning” [4] (see the scoring sketch after this list).

Self-assessment – Encourages reflective practice and helps identify areas for improvement from the learner’s perspective [5].

Video-assisted Debriefing – Utilizes video recordings of the simulation to facilitate feedback and discussion.

360-degree Feedback – Collects evaluations from multiple sources, including peers, instructors, and sometimes even standardized patients.
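
To make the contrast between checklists and global rating scales concrete, here is a minimal, purely illustrative Python sketch; the checklist items, rating domains, and scales are hypothetical and not taken from any validated instrument.

```python
# Illustrative only: hypothetical checklist items and rating domains.
checklist = {                      # binary items: performed (True) or not
    "Checks scene safety": True,
    "Calls for help": True,
    "Starts chest compressions": False,
    "Attaches defibrillator": True,
}
checklist_score = sum(checklist.values()) / len(checklist)  # proportion of items done

global_ratings = {                 # broad domains rated on a 1-5 scale
    "Communication": 4,
    "Clinical reasoning": 3,
    "Teamwork": 4,
}
global_score = sum(global_ratings.values()) / len(global_ratings)

print(f"Checklist: {checklist_score:.0%} of items completed")
print(f"Global rating: {global_score:.1f} / 5")
```

The checklist rewards completion of specific actions, while the global rating captures an overall judgment across broader domains; in practice the two are often used together.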

Ensuring the quality, efficacy, and relevance of simulation experiences requires meticulous evaluation. The diverse frameworks and methodologies that make up this evaluative spectrum are detailed below.

 

Kirkpatrick’s Four-Level Model

The Kirkpatrick Model is a widely used framework for evaluating the effectiveness of training programs. It consists of four levels of evaluation: Reaction, Learning, Behavior, and Results. The model specifies that each level should be evaluated in order, with data from the previous levels informing the next level’s evaluation. Adapted for SBME, Kirkpatrick’s model [6] provides the following hierarchy for evaluating our training programs:

Level 1 – Reaction: Measures learners’ satisfaction and perceived relevance.
Level 2 – Learning: Assesses knowledge, skills, and attitude changes.
Level 3 – Behavior: Evaluates transfer of skills to the clinical setting.
Level 4 – Results: Measures patient outcomes and healthcare system impact.

While reaction (Level 1) is easy to measure, results (Level 4) is the most challenging level to evaluate, as it requires isolating the effects of training from other organizational factors.
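
As a trivial illustration of how evaluation data might be organized along this hierarchy, here is a short Python sketch; the measures and values are hypothetical, and the only point is that the higher levels are typically the last to be populated.

```python
# Illustrative only: hypothetical measures for one simulation course,
# organized by Kirkpatrick level (None = data not yet collected).
course_evaluation = {
    "Level 1 - Reaction": {"mean_satisfaction": 4.6},             # 1-5 Likert survey
    "Level 2 - Learning": {"pre_test": 62.0, "post_test": 81.5},  # percent correct
    "Level 3 - Behavior": {"skill_transfer_observed": None},
    "Level 4 - Results":  {"patient_outcome_change": None},
}

# Report the levels in order, flagging those still awaiting data.
for level, measures in course_evaluation.items():
    status = "pending" if any(v is None for v in measures.values()) else "collected"
    print(f"{level}: {status} -> {measures}")
```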

 

Formative vs. Summative Evaluation

Formative Evaluation – Ongoing feedback during training that helps learners identify areas for improvement. Examples include debriefing sessions and constructive feedback during or immediately after simulation scenarios [7].

Summative Evaluation – Used to assess a learner’s competency, typically at the end of a training program. Examples include objective structured clinical examinations (OSCEs) and high-stakes certification assessments [8].

 

Objective Structured Clinical Examinations (OSCEs)

OSCEs, traditionally used in clinical exams, have been adapted for SBME. They provide standardized scenarios where learners’ clinical skills are assessed using specific criteria, ensuring both reliability and objectivity [9].
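
As a deliberately simplified sketch of how OSCE station results might be aggregated, the example below applies a hypothetical per-station pass mark together with a conjunctive rule on the number of stations passed; real examinations derive pass marks through formal standard-setting procedures.

```python
# Illustrative only: hypothetical station names, scores, and pass rules.
stations = {                 # per-station percentage scores for one candidate
    "Airway management": 78,
    "Informed consent": 64,
    "Acute chest pain": 55,
    "Structured handover": 82,
}
station_pass_mark = 60       # hypothetical pass mark per station
min_stations_passed = 3      # hypothetical conjunctive requirement

passed_stations = [name for name, score in stations.items()
                   if score >= station_pass_mark]
overall_mean = sum(stations.values()) / len(stations)
overall_pass = len(passed_stations) >= min_stations_passed

print(f"Mean score: {overall_mean:.1f}% | "
      f"{len(passed_stations)}/{len(stations)} stations passed | "
      f"{'PASS' if overall_pass else 'FAIL'}")
```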


 

When we talk about assessment, we usually think of the evaluation of learners. However, assessment is an activity that should also be carried out on the educators’ side, evaluating what we do and how we do it.

 

Fidelity Assessment

Fidelity, the degree to which the simulation replicates reality, is pivotal. High-fidelity modalities, such as manikin-based simulation, are compared against low-fidelity tools, such as task trainers, to discern their impact on learning outcomes [10].

 

Feedback and Debriefing Evaluation

Post-simulation debriefing is vital for reflection and learning. Evaluating the quality of debriefing, through tools such as the Debriefing Assessment for Simulation in Healthcare (DASH), helps ensure effective feedback and learner insight [11].

 

Validity and Reliability in SBME

Ensuring validity and reliability is paramount. The Messick framework, a predominant approach in SBME, integrates multiple sources of validity evidence, including content, response process, internal structure, relations to other variables, and consequences [12].

Reliability, on the other hand, emphasizes consistency. Generalizability theory, which quantifies how different sources of measurement error (such as raters, scenarios, and occasions) affect the reliability of performance assessments, is instrumental in this domain [13].
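
To show how generalizability theory is applied in practice, the following Python sketch performs a minimal decision (D) study for a simple learners × raters design; the variance components below are hypothetical, and real analyses are normally run with dedicated statistical software.

```python
# Illustrative only: hypothetical variance components from a G study
# with a crossed learners x raters design.
def g_coefficients(var_learner, var_rater, var_interaction_error, n_raters):
    """Return generalizability coefficients for relative and absolute decisions."""
    rel_error = var_interaction_error / n_raters                 # affects rank-ordering
    abs_error = (var_rater + var_interaction_error) / n_raters   # affects pass/fail vs a standard
    e_rho2 = var_learner / (var_learner + rel_error)
    phi = var_learner / (var_learner + abs_error)
    return e_rho2, phi

# D study: how does adding raters per learner change reliability?
for n in (1, 2, 3):
    e_rho2, phi = g_coefficients(var_learner=0.50, var_rater=0.10,
                                 var_interaction_error=0.40, n_raters=n)
    print(f"{n} rater(s): Erho2 = {e_rho2:.2f}, Phi = {phi:.2f}")
```

Increasing the number of raters shrinks the error terms, which is exactly the kind of trade-off a decision study is meant to expose.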

 

 

Challenges and Future Directions

Despite its potential, medical simulation evaluation isn’t without its challenges. Some of these include:

The potential for observer bias in direct observation methods.
Difficulty in standardizing checklists and rating scales across different institutions.
Balancing the depth and breadth of feedback to maximize educational impact.

Though SBME has transformative potential, challenges like technological costs, faculty development, and scenario standardization persist. Furthermore, while evaluation methods are advancing, more research is needed to correlate simulation proficiency directly with improved patient outcomes.

SBME stands as a paragon of modern medical education, synthesizing experiential learning with patient safety. However, its true value is contingent upon rigorous evaluation methods, ensuring that healthcare professionals are not just trained, but are competent, reflective, and patient-centered.

 

References

[1] Issenberg, S. B., McGaghie, W. C., Petrusa, E. R., Lee Gordon, D., & Scalese, R. J. (2005). Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Medical Teacher, 27(1), 10-28.

[2] McGaghie, W. C., Issenberg, S. B., Cohen, E. R., Barsuk, J. H., & Wayne, D. B. (2011). Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Academic Medicine, 86(6), 706-711.

[3] Dieckmann, P., Gaba, D., & Rall, M. (2007). Deepening the theoretical foundations of patient simulation as social practice. Simulation in Healthcare, 2(3), 183-193.

[4] Van der Vleuten, C. P. M., Schuwirth, L. W. T., Driessen, E. W., Govaerts, M. J. B., & Heeneman, S. (2010). 12 Tips for programmatic assessment. Medical Teacher, 32(6), 482-485.

[5] Rudolph, J. W., Simon, R., Dufresne, R. L., & Raemer, D. B. (2007). There’s no such thing as “nonjudgmental” debriefing: A theory and method for debriefing with good judgment.

[6] Yardley, S., & Teunissen, P. W. (2017). Kirkpatrick’s levels and education ‘evidence’. Medical Education, 51(5), 498-502.

[7] Motola, I., Devine, L. A., Chung, H. S., Sullivan, J. E., & Issenberg, S. B. (2013). Simulation in healthcare education: A best evidence practical guide. AMEE Guide No. 82. Medical Teacher, 35(10), e1511-e1530.

[8] Ziv, A., Wolpe, P. R., Small, S. D., & Glick, S. (2003). Simulation-based medical education: An ethical imperative. Simulation in Healthcare, 1(4), 252-256.

[9] Harden, R. M., Stevenson, M., Downie, W. W., & Wilson, G. M. (1975). Assessment of clinical competence using objective structured examination. BMJ, 1(5955), 447-451.

[10] Maran, N. J., & Glavin, R. J. (2003). Low- to high-fidelity simulation – a continuum of medical education? Medical Education, 37, 22-28.

[11] Brett-Fleegler, M., Rudolph, J., Eppich, W., Monuteaux, M., Fleegler, E., Cheng, A., & Simon, R. (2012). Debriefing Assessment for Simulation in Healthcare: development and psychometric properties. Simulation in Healthcare, 7(5), 288-294.

[12] Cook, D. A., Brydges, R., Ginsburg, S., & Hatala, R. (2015). A contemporary approach to validity arguments: a practical guide to Kane’s framework. Medical Education, 49(6), 560-575. https://doi.org/10.1111/medu.12678

[13] Brennan, R. L. (2001). Generalizability theory. Springer-Verlag.