One of the challenges in developing intelligent, automated after action review (AAR) capabilities for simulation-based training systems is the identification of causal explanations for significant events or performance errors detected during an exercise. Automated evaluation methods that use only the raw data from observable simulation events yield limited training benefit compared to intelligent evaluations that go one step further by identifying causal linkages with the preceding actions of the training participants: where, when, how, and by whom decisions were made and executed. This concept is being applied in the development of automated AAR tools for the Marine Corps' Combined Arms Command and Control Trainer Upgrade System (CACCTUS) program, which provides a distributed training environment for USMC command, control, and coordination in combined arms operations. This paper describes a developmental approach and set of methodologies not only for detecting errors or training points in exercises conducted in this architecture, but also for providing causal explanations of them. The methodology requires the definition of a catalog of the training points to be detected and explained; this catalog establishes the indexing structure for all rules that can be triggered to determine root causes. The structure is applied in the implementation of a ruleset that captures the logic for linking significant training points to data from the command and execution chain. Since radio and voice communications are an integral part of this chain, a natural language processing capability is required to detect and parse key spoken transmissions and to apply reasoning that establishes causal relationships with detected simulation events. Traceable data from distributed C4I tools and direct simulation commands serve as additional inputs to the explanation rules. Incorporating causal explanations into an automatically generated AAR yields meaningful feedback that the training audience can directly apply.
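
As a purely illustrative sketch (not drawn from the CACCTUS implementation), the structures below suggest one way a training-point catalog could index causal-explanation rules over events from the simulation, C4I feeds, and parsed voice transmissions. All names (Event, CausalRule, TrainingPoint, explain) are hypothetical.

```python
# Illustrative sketch only: hypothetical data structures for a training-point
# catalog and causal-explanation rules. Not the CACCTUS implementation.
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class Event:
    """A timestamped record from the simulation, a C4I tool, or parsed radio traffic."""
    source: str          # e.g. "simulation", "c4i", "voice"
    kind: str            # e.g. "fires_mission", "call_for_fire", "unit_moved"
    time: float
    actor: str
    data: dict = field(default_factory=dict)

@dataclass
class CausalRule:
    """Links a detected training point to preceding command/execution-chain events."""
    description: str
    # Predicate that inspects earlier events and returns the causal antecedents, if any.
    matches: Callable[[Event, List[Event]], Optional[List[Event]]]

@dataclass
class TrainingPoint:
    """Catalog entry: one class of significant event or error, indexing its explanation rules."""
    name: str
    detect: Callable[[Event], bool]
    rules: List[CausalRule]

def explain(point: TrainingPoint, trigger: Event, history: List[Event]) -> List[str]:
    """Apply a training point's rules to the event history to produce causal explanations."""
    explanations: List[str] = []
    for rule in point.rules:
        antecedents = rule.matches(trigger, history)
        if antecedents:
            explanations.append(
                f"{point.name}: {rule.description} "
                f"(linked to {[e.kind for e in antecedents]})"
            )
    return explanations
```

In this sketch the catalog entry, rather than the rules themselves, serves as the index: when a detector flags a training point, only the rules registered under that entry are evaluated against the accumulated command and execution history.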