As the military has moved increasingly towards distributed, networked environments for Command and Control, Intelligence, Surveillance, and Reconnaissance (C2ISR) missions, teams often operate remotely and decision-making is distributed. Traditionally, team training has relied on human observers for performance assessment, diagnosis, after-action review, and other training interventions. With much of the communication and coordination now happening electronically, however, key aspects of the interactions between team members are no longer accessible to these trainers. Analyzing these communications requires poring over high volumes of raw electronic data, which is infeasible at all but the smallest scales of operation. Intelligent automated performance assessment tools can serve as valuable cognitive aids to trainers, warehousing and analyzing team interaction data and presenting it in a user-friendly manner for real-time coaching and after-action review.

To build such a system, it is important first to define a concrete model of team behavior for the domain and to define rules for assessing team performance dimensions from observations of team behavior in training exercises. The research literature is rich with models of team performance; however, these models are defined at a very abstract level and are not directly usable at the level of specificity needed by a rule-based artificially intelligent assessment tool. Translating abstract knowledge into operational rules has always been a central challenge of artificial intelligence.

In this paper, we present a case study of the process of translating an abstract team performance model into a concrete model, together with the resulting performance assessment rules that can be used by an automated tool. The model is being developed to serve as the basis for an automated after-action review tool supporting large team training exercises within the Marines in the area of combined arms. The paper also discusses the lessons learned along the way.