There are a number of assessment data models that support authoring of objective (question-based) tests. Many of these include an intelligent tutor that can present "hints," remedial material, or advanced placement, providing tailored feedback in a timely and cost-effective manner. Developing effective assessment and decision support models in the "free play" environment of a simulation-based exercise is more difficult, and disparate assessment models can result in inconsistent training. Current research in Naturalistic Decision Making (NDM) investigates the strategies people use to perform complex, ill-structured, and high-stakes tasks under time pressure and uncertainty, and within organizational constraints. In many dynamic, uncertain, and fast-paced environments, there is no single right way to make decisions. Thus, the NDM approach typically studies experts to define quality decision making and to describe good decision-making processes. This paper outlines an objective system to teach and assess these NDM skills, both for the individual and for the team.
The goal of our system is to train an individual from novice to expert, starting with little or no exposure to the target domain. We define our system in three phases—knowledge, skills, and abilities—corresponding to Bloom's taxonomy of learning domains. In the knowledge phase, the novice gains understanding of the domain. In the skills phase, the trainee translates that knowledge into behavioral demonstrations of the material. In the abilities phase, the trainee applies those skills to make decisions in a real-world team environment marked by uncertainty, time pressure, and organizational constraints. The system assesses all three phases from a single knowledge base, providing consistent training across multiple presentation methods and levels of expertise, and helping students build a pattern base that allows them to make better recognitional decisions when real-world challenges arise.
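The single-knowledge-base design described above can be illustrated with a minimal sketch. This is not the paper's implementation; all class names, field names, and the example "triage" concept are hypothetical. The idea it shows is that each domain concept is stored once, with separate facets consumed by the knowledge, skills, and abilities assessors, so all three phases score against the same underlying content.

```python
# Illustrative sketch only: one shared knowledge base drives assessment in
# all three phases. Every name here is hypothetical, not from the paper.
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A domain concept stored once, with one facet per training phase."""
    name: str
    facts: set             # knowledge phase: facts the novice should recall
    expected_actions: set  # skills phase: behaviors demonstrating the concept
    decision_cues: set     # abilities phase: cues that should trigger recognition

@dataclass
class KnowledgeBase:
    concepts: dict = field(default_factory=dict)

    def add(self, concept: Concept) -> None:
        self.concepts[concept.name] = concept

def assess_knowledge(kb: KnowledgeBase, name: str, recalled) -> float:
    """Phase 1: fraction of the concept's facts the trainee recalled."""
    c = kb.concepts[name]
    return len(c.facts & set(recalled)) / len(c.facts)

def assess_skills(kb: KnowledgeBase, name: str, observed) -> float:
    """Phase 2: fraction of expected behaviors the trainee demonstrated."""
    c = kb.concepts[name]
    return len(c.expected_actions & set(observed)) / len(c.expected_actions)

def assess_abilities(kb: KnowledgeBase, name: str, cues_acted_on) -> float:
    """Phase 3: fraction of decision cues recognized during free play."""
    c = kb.concepts[name]
    return len(c.decision_cues & set(cues_acted_on)) / len(c.decision_cues)

# Hypothetical example concept for a medical-triage domain.
kb = KnowledgeBase()
kb.add(Concept(
    name="triage",
    facts={"airway-first", "color-codes"},
    expected_actions={"check-airway", "tag-patient"},
    decision_cues={"mass-casualty", "limited-resources"},
))

print(assess_knowledge(kb, "triage", ["airway-first"]))              # 0.5
print(assess_skills(kb, "triage", ["check-airway", "tag-patient"]))  # 1.0
```

Because every phase reads from the same `Concept` record, updating the domain content in one place keeps question-based tests, behavioral checklists, and free-play scenario scoring consistent—the property the text attributes to the single knowledge base.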