Virtual Environment (VE) training systems are fast becoming the tool of choice for supporting a wide array of military training needs. As the technology advances, this trend will only strengthen, with these systems ultimately competing for precedence in assorted curricula against more traditionally accepted training methods. One area in which this faith in VE systems is warranted is performance measurement. In particular, the data collection capabilities afforded by VEs readily lend themselves to the implementation of new, advanced measures of performance that could not be developed in the operational environment. The critical challenge developers now face is identifying and validating these Virtual Metrics and determining how best to include them in the ever-expanding array of evaluation tools available to the operational training community.
The current paper explores this process, using the research, development, and evaluation phases of the Conning Officer Virtual Environment (COVE) shiphandling training system as a case study. The process progressed in three stages, each of which resulted in the testing and evaluation of a novel piece of technology and ultimately led to two Virtual Metrics, Alongside Box and Command Efficiency, for evaluating shiphandling performance. In the early stages, the data collection capabilities of the COVE testbed enabled an objective quantification, based on "track histories," of basic underway replenishment (UNREP) performance, which is traditionally evaluated subjectively. In the final stages, the evaluation team used these basic measures to develop the more sophisticated Alongside Box and Command Efficiency measures, which serve as collective variables for quantifying shiphandling performance. Finally, these Virtual Metrics were used to quantify the transfer of skills learned in the COVE system to the real-world UNREP task.
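To make the idea of a box-style station-keeping measure concrete, the following is a minimal sketch of how such a metric could be computed from a track history. The function name, the offset representation, and the box limits are all illustrative assumptions for exposition; they are not COVE's actual Alongside Box definition or thresholds.

```python
# Hypothetical sketch: scoring UNREP station-keeping from a track history.
# All names and limits here are illustrative assumptions, not the COVE metric.

def alongside_box_score(track, lateral_limit=10.0, longitudinal_limit=15.0):
    """Return the fraction of track samples that fall inside a nominal
    "alongside box" around the ordered station.

    track: sequence of (lateral_offset, longitudinal_offset) deviations
    from the ordered alongside station, in arbitrary distance units.
    """
    if not track:
        return 0.0
    inside = sum(
        1
        for lateral, longitudinal in track
        if abs(lateral) <= lateral_limit and abs(longitudinal) <= longitudinal_limit
    )
    return inside / len(track)

# Example: three of the four sampled positions lie inside the box.
track = [(2.0, 5.0), (8.0, -12.0), (14.0, 0.0), (-3.0, 3.0)]
print(alongside_box_score(track))  # 0.75
```

A higher score indicates that the conning officer held the ship within the acceptable envelope for more of the evolution; a real implementation would draw the offsets from the VE's logged ship positions rather than a hand-built list.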