How Simulation is Born at Jump

If you actively follow our blog, you've probably read about dozens of types of simulations that take place here at Jump. What you might not know is how they are created.

For example, the faculty of a residency program at UICOMP may wish to use simulation after determining that their residents need more training in a particular procedure. Nursing educators, physicians, anesthesiologists, and other clinicians can also propose simulation as a strategy to address different programmatic goals.

Jump has received more than 100 curriculum submissions for simulation-based education since its inception.

What Makes Good Simulation?

An increasing number of OSF and UICOMP clinical educators are seeing the value of simulation. The result is steady growth in the number of clinicians and educators proposing simulation ideas to integrate into their current educational programs.

Jump receives up to eight proposals for new simulation-based programming every month, so it's important to have a process in place that helps us determine whether simulation is the best educational strategy for a given learning opportunity. We've developed a way to evaluate the quality of curricular development and to push educators to define how their programs will have a positive impact on health care.

Those who wish to run simulation-based programs at Jump must submit their ideas to our Curriculum Committee, outlining why the simulation is needed and how the curriculum aligns with the institutional goals of OSF HealthCare and UICOMP. Proposals should also include the goals and objectives of the curriculum and how their achievement will be measured. This process also allows the Jump operations team to determine whether we have the resources to move forward with the program.

The Curriculum Committee Chair then screens the submitted materials to determine whether they need revision before forwarding them to the full Curriculum Committee.

Rating Matrix Quality Factors

[Image: curriculum assessment]

Proposals that advance to the committee are rated using a tool developed at Jump called the Curriculum Rating Matrix. Modeled after the Six Sigma Project Selection Matrix, the tool helps our Curriculum Committee rate submissions on 12 factors. The first five assess quality:

  • Educational rationale
  • Institutional alignment
  • Goals, objectives and assessment methods
  • Resource alignment
  • Measured educational outcomes

Rating Matrix Impact Factors

The simulation curricula are also rated on their potential to impact seven outcome areas relevant to our system:

  • Standardizes training
  • Promotes interprofessional or multidisciplinary learning
  • Improves educational outcomes
  • Advances education and scholarship
  • Improves patient and/or caregiver safety
  • Decreases health care costs
  • Improves clinical outcomes

Each of the 12 factors is assigned a level from 1 to 4, with 4 being the most rigorous level for quality simulation curriculum.
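
To make the structure concrete, here is a minimal sketch of how the matrix could be represented in code. The factor names come from the two lists above; everything else, including the idea of summing the 12 levels into a single score, is an assumption for illustration and not Jump's actual tooling.

```python
# Illustrative model of the Curriculum Rating Matrix.
# Factor names are from the blog post; the scoring logic is hypothetical.

QUALITY_FACTORS = [
    "Educational rationale",
    "Institutional alignment",
    "Goals, objectives and assessment methods",
    "Resource alignment",
    "Measured educational outcomes",
]

IMPACT_FACTORS = [
    "Standardizes training",
    "Promotes interprofessional or multidisciplinary learning",
    "Improves educational outcomes",
    "Advances education and scholarship",
    "Improves patient and/or caregiver safety",
    "Decreases health care costs",
    "Improves clinical outcomes",
]

ALL_FACTORS = QUALITY_FACTORS + IMPACT_FACTORS  # 12 factors in total


def score_proposal(levels: dict[str, int]) -> int:
    """Sum the 1-4 levels across all 12 factors (assumed aggregation)."""
    for factor in ALL_FACTORS:
        level = levels.get(factor)
        if level is None or not 1 <= level <= 4:
            raise ValueError(f"Each factor needs a level from 1 to 4: {factor!r}")
    return sum(levels[factor] for factor in ALL_FACTORS)
```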

Let's use the Rating Matrix on the educational rationale (what is the need for this curriculum?) of a proposed procedural simulation as an example. If the rationale is simply that the training is required by accrediting bodies, that's Level 1.

Level 2 is based on informal observations: a team might say they are uncomfortable with a certain procedure, or a faculty member might notice residents struggling with that procedure in their clinical work.

Level 3 is based on survey data: we've sent surveys to all the residents asking what they feel deficient in, which provides more quantitative (though still subjective) evidence for why the program is needed.

To achieve Level 4, curriculum authors must actually measure their learners' baseline performance. For example, they might find that only 50% of incoming residents can successfully demonstrate a certain procedure, so better training is needed.

Level 4 is the highest level because the need for the curriculum is demonstrated by objectively measured data rather than impressions or self-report. That also means we can better measure the impact of the simulation program by re-measuring the learners' performance after they complete it.
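
Here is the same ladder expressed as a small lookup, purely as a sketch. The evidence labels are illustrative names invented for this example, not Jump's terminology.

```python
# Hypothetical encoding of the Level 1-4 ladder for the
# "Educational rationale" factor described above.
RATIONALE_LEVELS = {
    "accreditation_requirement": 1,      # required by accrediting bodies
    "informal_observation": 2,           # team discomfort, faculty impressions
    "learner_survey": 3,                 # self-reported deficits, still subjective
    "measured_baseline_performance": 4,  # objective data, e.g. a 50% pass rate
}


def rationale_level(evidence: str) -> int:
    """Return the rigor level implied by a given kind of supporting evidence."""
    if evidence not in RATIONALE_LEVELS:
        raise ValueError(f"Unknown evidence type: {evidence!r}")
    return RATIONALE_LEVELS[evidence]
```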

Curricular rating is done by a rotating cohort of three members of the Jump Curriculum Committee. They use the Rating Matrix to characterize the quality of proposals more objectively and reliably and to provide formative feedback to curriculum authors. At a system level, it also allows the committee to identify gaps in faculty development when there are consistent deficits in the quality of curricular components.
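
The post doesn't describe how the three ratings are reconciled, so the sketch below is only one plausible approach: take the median for each factor and flag wide disagreement for discussion. The function and parameter names are hypothetical.

```python
from statistics import median


def consensus_level(ratings: list[int], max_spread: int = 1) -> tuple[int, bool]:
    """Combine three raters' 1-4 levels for a single factor.

    Returns the median level plus a flag marking spreads wider than
    `max_spread`, which might prompt the raters to discuss the proposal.
    """
    if len(ratings) != 3:
        raise ValueError("the rotating cohort has three raters")
    flagged = max(ratings) - min(ratings) > max_spread
    return median(ratings), flagged


# Example: two raters assign Level 3, one assigns Level 1.
level, needs_discussion = consensus_level([3, 3, 1])  # -> (3, True)
```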

The Need For The Rating Matrix

Tying simulation-based activities to real outcomes is the "holy grail" of simulation.

The challenge is to create a method that objectively characterizes the quality of simulation programs before implementation. Jump hopes to achieve this by using the Curriculum Rating Matrix.

We are applying the rating tool to every program we run. This includes those approved prior to the implementation of this method. The overall goal is to prospectively evaluate programs and to promote the incorporation of measurements and tools that can demonstrate the clinical and educational impacts we're striving for.

Using the Rating Matrix helps Jump align every educator around the same priorities, use resources wisely, and take the next step toward being a world-class education and simulation center.

Featured Author

Lisa Barker, MD

Dr. Lisa Barker is chief medical director at the Jump Trading Simulation & Education Center and clinical associate professor of Emergency Medicine at the University of Illinois College of Medicine Peoria.

Dr. Barker chairs the Jump Value Analysis Model (VAM) Committee, a multi-disciplinary group of health care system professionals with representation from Finance, Healthcare Analytics, Performance Improvement and Education in addition to Jump.
