Assessment of complex performance in simulation environments is required for tailoring instruction, assigning competency levels or certifications, and evaluating training system effectiveness. Many assessment systems, however, require extensive research to create a scoring method for even a single scenario. Practical Assessment in Complex Environments (PACE) is a method that uses the collective wisdom of experts, gathered during reviews of trainees' performance, to develop an objective scoring system that (a) correlates well with experts' holistic job assessments, (b) identifies performance weaknesses that can guide remediation, and (c) is easily administered, given a record of a trainee's performance in the simulation environment.
The scoring system requires that data be collected from trainees' performance within a simulation. Experts then review the samples, rank-order the performances, and assign scores reflecting the quality of each sample. To capture the policy each expert used, a panel of experts discusses the factors that led to each sample's score; typically, these factors are violations of good practice. A training psychologist links the experts' critiques to elements of a prior cognitive task analysis, assigning point deductions to specific features in a way consistent with the views the experts expressed.
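As a concrete illustration of this deduction-based scoring (a minimal sketch, not the published PACE implementation), the rubric could map each violation of good practice to a cognitive-task-analysis element and an agreed point penalty; the violation names, penalty values, and function below are hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical rubric: each observable violation of good practice is linked to a
# cognitive-task-analysis element and a point deduction agreed with the expert panel.
DEDUCTIONS = {
    "skipped_power_check": ("verify power supply before component tests", 10),
    "replaced_part_without_test": ("confirm fault isolation before replacement", 15),
    "ignored_error_log": ("consult built-in diagnostics first", 5),
}

@dataclass
class PerformanceRecord:
    """One trainee's logged actions from a troubleshooting scenario."""
    trainee_id: str
    observed_violations: list = field(default_factory=list)

def pace_score(record: PerformanceRecord, max_score: int = 100) -> int:
    """Start from a perfect score and subtract the deduction for each violation."""
    total_deduction = sum(
        DEDUCTIONS[v][1] for v in record.observed_violations if v in DEDUCTIONS
    )
    return max(max_score - total_deduction, 0)

if __name__ == "__main__":
    sample = PerformanceRecord("T-017", ["skipped_power_check", "ignored_error_log"])
    print(pace_score(sample))  # 100 - 10 - 5 = 85
```

Once such a rubric exists for a scenario, scoring a new performance sample reduces to matching logged actions against the violation list, which is what makes the method easy to administer.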
When assessing the diagnostic skill of maintenance technicians troubleshooting faults deep within a complex equipment simulation, PACE scores were valid: across various tests, they correlated in the .70s with time on the job. PACE scores were also reliable: once a scoring system for a scenario had been created and applied to new performance samples, the PACE scores correlated in the .80s with experts' holistic scores. PACE integrates the collective wisdom of experts within specific simulation contexts using a domain framework derived from a cognitive task analysis.
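The reliability check described above amounts to correlating rubric-derived scores for new samples with experts' holistic scores. The sketch below uses Pearson's product-moment correlation on illustrative data only, not values from the study.

```python
from math import sqrt
from statistics import mean

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length score lists."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative data only: rubric-derived PACE scores and experts' holistic ratings
# for the same set of new performance samples.
pace_scores     = [85, 70, 95, 60, 80, 75]
holistic_scores = [8, 6, 9, 5, 8, 7]
print(round(pearson_r(pace_scores, holistic_scores), 2))
```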