Whole-body medical manikins that respond to interventions through physiological modeling are often considered to
be the best available alternative to live tissue for simulation-based medical training. However, a physical manikin
that provides essential perceptual cues and supports all of the key procedures and clinical decisions for a complex
patient case is not always available or cost-effective. An integrated, blended reality medical simulation system that
adds virtual simulations to physical simulations can offer a significant advantage over manikin-only simulation by
providing full-body visual and aural cues for the patient's appearance and behaviors, while a manikin or part-task
trainer provides the haptic cues needed to train psychomotor skills for targeted procedures. This persistent virtual
simulation can maintain and present a coherent representation of the patient while selected procedures are performed
on physical manikin modules.
To demonstrate the feasibility and effectiveness of this multi-modal approach to medical simulation, a
software and data communications infrastructure was created in which various aspects of a simulation can be
developed as a federation of interoperable, multi-modal modules. The efficacy of this architecture was then
demonstrated in multiple module configurations, including the Center for Research in Education and
Simulation Technologies (CREST) team’s Advanced Modular Manikin (AMM) Phase I prototype, in which 15
modules of various modalities were successfully integrated. This paper reviews the architectural concepts used
and the results achieved, and describes how this approach to interoperability can be leveraged to close gaps in
current medical training.