After-Action Review (AAR) is an effective tool for evaluating and improving trainee performance in tactical training exercises. However, as exercises grow in size and may span several locations, providing feedback to the majority of the participants becomes complicated: it requires extensive time and resources, and the review may be limited to only a few of the most important tactical decisions. This paper presents a model for automating the After-Action Review and making it easily accessible to all participants, in order to increase the efficiency and improve the performance of After-Action Reviews. A system built on expert models, against which the actions of trainees can be compared, can provide additional support for the trainees. However, such a system needs to automatically detect and classify discrepancies, which can emerge between a trainee and an expert-modeled agent in many situations. By limiting the discrepancies shown in the AAR to those believed significant enough to decrease the trainee's performance, the AAR becomes more effective, reaching the majority of the exercise participants and giving them individual performance feedback. Preliminary results of our experiments are promising and indicate that the model presented in this paper can address the issues discussed above.
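As a minimal sketch, the comparison of trainee actions against an expert model with significance filtering might look like the following. All names, the severity scoring, and the threshold are illustrative assumptions, not the paper's implementation.

```python
from dataclasses import dataclass

@dataclass
class Discrepancy:
    step: int
    trainee_action: str
    expert_action: str
    severity: float  # hypothetical significance score in [0, 1]

def detect_discrepancies(trainee_actions, expert_actions, severity_fn, threshold=0.5):
    """Compare a trainee's action sequence step by step against an expert
    model's sequence, and keep only discrepancies whose significance score
    exceeds the threshold, i.e. those worth surfacing in the AAR."""
    found = []
    for step, (t_act, e_act) in enumerate(zip(trainee_actions, expert_actions)):
        if t_act != e_act:
            sev = severity_fn(step, t_act, e_act)
            if sev >= threshold:
                found.append(Discrepancy(step, t_act, e_act, sev))
    return found

# Toy severity function (assumed for illustration): deviations from an
# "engage" decision are treated as significant, others as minor.
def toy_severity(step, t_act, e_act):
    return 1.0 if e_act.startswith("engage") else 0.3

trainee = ["advance", "hold", "retreat"]
expert = ["advance", "engage_left", "retreat"]
print(detect_discrepancies(trainee, expert, toy_severity))
```

The filtering step is what keeps an automated AAR focused: only the step-1 deviation is reported, while insignificant differences are suppressed so participants receive concise, individual feedback.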