Feedback and guidance are essential components of any learning environment, enabling errors in performance to be detected and corrected. A crucial part of this process is the ability to identify gaps in skills or knowledge and to assess their underlying causes. Currently, however, there is little consistency in how a student is evaluated across the different training environments encountered over the course of a career, whether in the school, in the field, at home, or in simulated training exercises.
The Joint ADL Co-Laboratory, in cooperation with the US Navy, the US Army Research, Development and Engineering Command's Simulation and Training Technology Center, and the US Army's Program Executive Office for Simulation, Training and Instrumentation, is funding the Learner Assessment Data Model and Authoring Tools (LADMAT) project. The project is developing an assessment data model and associated authoring tools capable of capturing complex assessment data across multiple learning and training systems. Specifically, the project will integrate assessment capabilities into a live simulation through the One Tactical Engagement Simulation System (OneTESS) and into multiple virtual simulations built on the Gamebryo and Delta3D game engines, verifying in each environment that the learner is meeting training objectives.
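To make the idea of a cross-environment data model concrete, the following C++ sketch shows the kind of record such a model might capture. It is a minimal illustration only: the type and field names (AssessmentRecord, PerformanceMeasure, and so on) are hypothetical and do not reflect the actual LADMAT schema.

    // Hypothetical illustration; these types do not reflect the actual
    // LADMAT schema.
    #include <string>
    #include <vector>

    // One measurable criterion tied to a training objective,
    // e.g. "engaged target within five seconds".
    struct PerformanceMeasure {
        std::string objectiveId;    // training objective this measure supports
        std::string description;    // human-readable criterion
        double      observedValue;  // value captured from the exercise
        double      threshold;      // minimum value required to pass
        bool        satisfied;      // whether observedValue met the threshold
    };

    // A single assessment event for a learner, recorded the same way
    // whether it originated in a live exercise (e.g. OneTESS) or a
    // virtual one (e.g. a Gamebryo- or Delta3D-based simulation).
    struct AssessmentRecord {
        std::string learnerId;      // persistent learner identifier
        std::string environment;    // "live", "virtual", etc.
        std::string sourceSystem;   // system that produced the record
        long long   timestampMs;    // observation time, ms since epoch
        std::vector<PerformanceMeasure> measures;
    };

The value of a structure along these lines is that downstream assessment and authoring tools can evaluate training objectives uniformly, regardless of which live or virtual system produced the record.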
A key component of this research effort is testing and evaluating this technology as it is integrated into these dynamic learning environments. This paper will discuss in detail the complexities of assessing performance, the underlying technologies this project uses to simplify the assessment process, and how those technologies can help standardize the manner in which a student is assessed throughout a career. It will also identify and discuss supporting technologies, standards, and specifications, along with foreseeable challenges, best practices, and lessons learned during the development and implementation of the project.