NATO has highlighted the potential of AI for military applications: “The main ingredient in any ML-application is data…military organizations may have to adapt their data collection processes to take full advantage of modern AI-techniques” (NATO, 2018). To do so, we need to change how we currently assess learning, moving from multiple-choice questions to quality objective data (Schatz & Walcutt, 2022). Before 2019, VR in education relied on pre- and post-tests, questionnaires, interviews, scales, focus groups, and observations (Çanakaya, 2019). These methods are labor intensive, are primarily qualitative, and do not yield the rich data needed to build AI into the training pipeline. In the past five years, researchers and engineers have found ways to incorporate objective data into VR systems. Human state examples include eye tracking (Ahuja et al., 2018), heart rate and respiratory rate (Floris et al., 2020), EEG data (Tremmel et al., 2019), and facial expression tracking (Houshmand, 2020). Commercial off-the-shelf headsets have begun integrating human state data, along with cameras and sensors that provide opportunities to understand motion, where trainees spend time, their interactions, and every nuance of how they complete their training tasks. Tracking data within a VR training exercise provides insight into who needs additional training, when AI virtual instructors could assist, the level of readiness, and the likelihood of trainee success, and it can trace points of failure in the field back to gaps in training. This paper addresses the ambiguity of what kinds of data to use for different training applications, how to pull data from VR systems, how the training pipeline could use these data, and examples of how data were built into VR training tasks developed by an Air Force Research Laboratory, with lessons learned and guidance.
Keywords
AUGMENTED AND VIRTUAL REALITY (AR/VR), BEST PRACTICES, DATA, TRAINING
Additional Keywords