Abstract
Critical Care Air Transport Team (CCATT) training prepares medics to care for critically wounded soldiers in high-stress transport settings. The training ensures that trainees are skilled in using life-saving equipment, including mechanical ventilators, IV pumps, and suction machines, while safeguarding the safety and comfort of patients en route to medical facilities. Each CCATT team, consisting of a physician, a nurse, and a respiratory therapist, manages two to three patients. Situational awareness, i.e., monitoring patient status and responding to alarms, is vital to keeping these heavily sedated patients alive, yet in the cramped, noisy transport environment, noticing alarms and responding quickly and effectively is challenging. Simulation-based training allows CCATT personnel to practice realistic scenarios and improve response times in a safe, manikin-based environment. However, analyzing trainee actions and their interactions with equipment in such simulations is difficult due to multi-camera views, poor lighting, occlusions, and noisy multimodal data, so traditional methods rely on instructors manually tracking and assessing trainees. This paper proposes a vision-based assessment approach that uses a two-stage deep learning model to detect trainee-equipment interactions in CCATT training, where an interaction is defined by physical engagement between the trainee and the equipment. We detect these interactions by analyzing the trainee's location, hand positions, and body posture together with equipment locations and spatial proximity. The first stage extracts frame-level features, while the second aggregates them temporally to predict interaction start and end times. This approach enables automated monitoring of trainees' response times through interaction analysis, enhancing both training and evaluation.
Keywords
MIXED REALITY; TEAM TRAINING
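The second stage described in the abstract, aggregating per-frame evidence into interaction start and end times, can be illustrated with a minimal sketch. This is not the paper's actual model: the deep temporal network is replaced here by a simple moving-average smoother over hypothetical per-frame interaction scores (as a stand-in for the frame-level stage's output), followed by thresholding into intervals.

```python
def smooth(scores, window=3):
    """Moving-average smoothing over per-frame interaction scores
    (stand-in for the learned temporal aggregation stage)."""
    n = len(scores)
    out = []
    for i in range(n):
        lo = max(0, i - window // 2)
        hi = min(n, i + window // 2 + 1)
        out.append(sum(scores[lo:hi]) / (hi - lo))
    return out


def extract_intervals(scores, threshold=0.5):
    """Convert smoothed scores into (start_frame, end_frame) spans,
    i.e., predicted interaction start and end times."""
    intervals, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i
        elif s < threshold and start is not None:
            intervals.append((start, i - 1))
            start = None
    if start is not None:
        intervals.append((start, len(scores) - 1))
    return intervals


# Hypothetical per-frame scores from a frame-level feature extractor
frame_scores = [0.1, 0.2, 0.8, 0.9, 0.85, 0.2, 0.1, 0.7, 0.9, 0.05]
spans = extract_intervals(smooth(frame_scores), threshold=0.5)
print(spans)
```

In the real system the per-frame scores would come from visual features (trainee location, hand positions, posture, equipment proximity), and the temporal stage would be learned rather than a fixed smoother; the sketch only shows how frame-level evidence becomes timed interaction events.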