Measuring performance, whether to assess the results of training or to select personnel for promotion, has never been more important. Organizations have generally been successful at defining criteria for satisfactory performance; developing measures of performance that are valid, reliable, and economical to administer has proved more problematic. Human evaluators are often costly and show low inter- and intra-rater reliability. Automated assessment tools, while cost-effective and consistent across ratings, tend to have a limited understanding of the domain they assess, so the quality of their assessments has been questioned.
Research Development Corporation (RDC) has developed PC-based automated performance assessment technology. The testbed is in-patient care provided by medical technicians. The technician being assessed performs required tasks in a simulated environment and is assessed against criteria established in the Air Force Career Field Education and Training Plan (CFETP). The system uses both the technician's behaviors during the simulation and responses to questions presented by the tool to assess performance. The tool outputs a score based on the CFETP scoring system, along with an explanation of the score to support training.
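The scoring pipeline described above can be sketched roughly as follows. This is a minimal illustration only: the actual tool's interfaces, weighting, and CFETP score scale are not described in detail here, so every name and number in this sketch is a hypothetical stand-in.

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentResult:
    score: float                                   # hypothetical 0-100 scale
    explanation: list = field(default_factory=list)  # feedback to support training

def score_performance(observed_steps, required_steps, answers, answer_key,
                      behavior_weight=0.7):
    """Combine simulated-task behaviors and question responses into one score.

    behavior_weight (assumed value) splits credit between the behavioral
    and knowledge components.
    """
    result = AssessmentResult(score=0.0)

    # Behavioral component: fraction of required task steps performed.
    performed = [s for s in required_steps if s in observed_steps]
    behavior_score = len(performed) / len(required_steps)
    for step in required_steps:
        if step not in observed_steps:
            result.explanation.append(f"Missed required step: {step}")

    # Knowledge component: fraction of follow-up questions answered correctly.
    correct = sum(1 for q, a in answers.items() if answer_key.get(q) == a)
    knowledge_score = correct / len(answer_key) if answer_key else 0.0

    result.score = 100 * (behavior_weight * behavior_score
                          + (1 - behavior_weight) * knowledge_score)
    return result
```

The explanation list mirrors the tool's stated goal of producing not just a score but feedback usable in training.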
The technology seeks to overcome the limitations of other scoring systems by incorporating an expert model of the task being performed. The system represents this knowledge using RDC's integrated knowledge structure (INKS) framework, which contains knowledge of causal principles, goal and planning knowledge, procedures, and factual knowledge (corresponding to the knowledge types outlined in the CFETP). The tool runs the technician's behaviors through its expert model to determine whether the technician's solution meets the task requirements. It then follows up with questions based on the INKS knowledge types to ensure that the technician not only can perform the task but also has a deeper understanding of its underlying concepts and principles.
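One way to picture the INKS-style expert model is a store of knowledge items tagged by type, against which the technician's action sequence is checked and from which follow-up probes are drawn. The structure below is a sketch under that assumption; the type labels, class names, and matching logic are illustrative, not RDC's actual implementation.

```python
from dataclasses import dataclass

# The four knowledge types the INKS framework is described as containing
# (labels here are assumed shorthand).
KNOWLEDGE_TYPES = ("causal", "goal_planning", "procedural", "factual")

@dataclass
class KnowledgeItem:
    ktype: str      # one of KNOWLEDGE_TYPES
    statement: str  # expert-model assertion about the task
    probe: str      # follow-up question testing this item

def check_solution(expert_procedure, technician_actions):
    """Compare the technician's actions against the expert model's procedure.

    Returns the required steps that were missing or performed out of order,
    using a simple in-order subsequence match (an assumed strategy).
    """
    issues = []
    pos = 0
    for step in expert_procedure:
        if step in technician_actions[pos:]:
            pos = technician_actions.index(step, pos) + 1
        else:
            issues.append(step)
    return issues

def follow_up_questions(items, asked_types):
    """Pick one probe per untested knowledge type, so the assessment covers
    causal, planning, procedural, and factual understanding of the task."""
    questions = []
    for ktype in KNOWLEDGE_TYPES:
        if ktype in asked_types:
            continue
        for item in items:
            if item.ktype == ktype:
                questions.append(item.probe)
                break
    return questions
```

Tagging each probe with a knowledge type is what lets the tool claim the technician understands the task's underlying principles rather than merely executing its steps.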