An effective student performance review strategy is to provide positive feedback before offering critical guidance, and then to intersperse positive feedback throughout the review. The amount of positive feedback must be balanced against the need to continuously impart current, relevant information. An early emphasis on positive feedback helps engage the student, and variably reinforced positive feedback maintains that engagement, keeping the student open to critical learning content. This demands a high degree of interactivity throughout the review process, a strategy applicable to both human instructors and automated intelligent tutoring systems.
This paper describes a strategy for integrating automated, interactive After Action Reviews (AARs) with simulations to provide student-tailored feedback based on positive, session-specific information. The underlying methods rely on the meta-relations among hierarchies, including learning objectives, demonstrated student achievements and weaknesses, simulation events, and the scenario-to-learning-objective mapping. The generated AAR output allows the student to drill down to specific details of the AAR, explore how student decisions affect results, and obtain recommendations for learning-objective-specific remediation. The approach presumes both that the simulation assesses multiple learning objectives within a single scenario and that learning objectives are cross-linked across multiple lessons, systems, and disciplines. These intelligent tutoring strategies were derived using a Force XXI Battle Command Brigade and Below (FBCB2) simulation, which provided training on the installation, operation, and maintenance of the FBCB2 system of systems, including not only the AN/UYK-128 computer, but also the associated Single-Channel Ground and Airborne Radio System and Enhanced Position Location Reporting System radios and the Precision Lightweight GPS Receiver global positioning system.
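The cross-linked hierarchies described above can be sketched as a minimal data model. The class and field names below are illustrative assumptions, not the paper's implementation: they show one way to map session-specific simulation events back to learning objectives and derive drill-down AAR entries with remediation recommendations.

```python
from dataclasses import dataclass

# Hypothetical, minimal data model for the cross-linked hierarchies:
# learning objectives (linked across lessons), simulation events
# (each assessing one or more objectives), and an AAR generator
# that groups session outcomes by objective.

@dataclass
class LearningObjective:
    objective_id: str
    description: str
    lessons: list  # lessons this objective cuts across

@dataclass
class SimulationEvent:
    event_id: str
    objective_ids: list  # objectives assessed by this event
    passed: bool         # session-specific outcome

def generate_aar(objectives, events):
    """Group session events by learning objective; flag strengths,
    weaknesses, and lessons recommended for remediation."""
    report = {}
    for obj in objectives:
        related = [e for e in events if obj.objective_id in e.objective_ids]
        failures = [e for e in related if not e.passed]
        report[obj.objective_id] = {
            "description": obj.description,
            "events": [e.event_id for e in related],
            "strengths": len(related) - len(failures),
            "weaknesses": [e.event_id for e in failures],
            # recommend the cross-linked lessons only when a weakness appears
            "remediate": sorted(obj.lessons) if failures else [],
        }
    return report

# Example session: one objective assessed by two scenario events.
objectives = [LearningObjective("LO1", "Install the FBCB2 computer", ["Lesson 3"])]
events = [SimulationEvent("E1", ["LO1"], True),
          SimulationEvent("E2", ["LO1"], False)]
aar = generate_aar(objectives, events)
```

Because each event can reference several objectives, a single scenario yields assessments for multiple objectives at once, which is the presumption the approach rests on.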