The effectiveness of an after action review (AAR) relies heavily on expert trainers who visually observe, recognize, and record examples of trainees' positive and negative behaviors during the course of the simulation (Salter et al., 2005). This method is not cost-effective for large training events, since such events require a large staff of experts whose recall is often limited by the fact that several simulation-based training exercises are conducted before debriefing begins (Freeman et al., 2004).
This paper presents a framework for increasing the efficiency of instructor-led AARs by helping trainees diagnose, recall, understand, and generalize their own performance. This framework was applied in the development of the After Action Review Console (AARC), which provides trainees with simulation playback, a graphical representation of the decision-making processes of automated intelligent entities, and a visual representation of those entities' perceived environments. The visual representation of behavior adds context that helps the trainer assess the trainee's performance. The representation also enables data harvested during a training exercise to be organized and filtered, allowing the instructor to focus attention on key decisions and actions performed by the trainee. We also present a mechanism by which the best practices of experts, captured through a formalized knowledge capture methodology, can be used to develop automated entities for simulation that execute well-defined tactics, maneuvers, and reactions. The knowledge capture approach enables the transparent knowledge representations that are essential to the AAR approach. This approach has the potential to improve the efficiency of training by reducing instructor workload and improving feedback to trainees.
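To illustrate the kind of organize-and-filter step described above, the sketch below shows one plausible shape for harvesting a simulation event log and surfacing only the trainee's key decision points for instructor review. All names (`SimEvent`, `key_decisions`, the event kinds) are illustrative assumptions, not the AARC's actual data model or API.

```python
# Hypothetical sketch of event-log filtering for AAR review.
# Assumed structures: SimEvent records harvested during an exercise,
# filtered down to "decision" events for a chosen actor.
from dataclasses import dataclass

@dataclass
class SimEvent:
    timestamp: float   # seconds into the exercise
    actor: str         # trainee or automated entity identifier
    kind: str          # e.g. "move", "fire", "decision"
    detail: str        # human-readable description for playback

def key_decisions(events, actor=None):
    """Filter a harvested event log down to decision points,
    optionally restricted to a single actor."""
    return [e for e in events
            if e.kind == "decision" and (actor is None or e.actor == actor)]

# Example log (fabricated for illustration only)
log = [
    SimEvent(12.0, "trainee-1", "move", "advance to waypoint A"),
    SimEvent(30.5, "trainee-1", "decision", "chose flanking route"),
    SimEvent(41.2, "entity-7", "fire", "engaged target"),
    SimEvent(55.0, "trainee-1", "decision", "called for fire support"),
]

for e in key_decisions(log, actor="trainee-1"):
    print(f"{e.timestamp:>6.1f}s  {e.actor}: {e.detail}")
```

In a real console, a filter of this kind would feed the timeline view, letting the instructor jump directly to decision points during playback rather than replaying the full exercise.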