Simulation-based training requires realistic simulated friendly and opposing forces. Realistic graphics and physics alone are not enough; the tactics exhibited must also be realistic for most learning to take place. Current approaches for driving the behavior of simulated forces include live human role-players, Semi-Automated Forces (SAFs), and intelligent/cognitive automated forces. Each of these approaches represents a different trade-off between realism and resource cost. Human role-players can provide maximal realism, but trained experts are a limited and potentially costly resource. The automated approaches provide varying degrees of realism in exchange for the cost of programming the desired behavior.
In this paper, we describe another approach to simulated forces that aims to achieve increased realism at lower programming cost. Trainable Automated Forces (TAF) are computer-generated agents that mimic tactics demonstrated by human experts. First, a subject matter expert demonstrates the desired behavior (e.g., piloting an aircraft) in a simulator. Next, machine-learning algorithms are used to model the observed behavior. Finally, TAF controls a simulation entity, using the model to predict what the human expert would do in the same situation. When TAF behaves incorrectly, the expert can step in to demonstrate the correct actions for that situation. This process can be repeated at any time with minimal help from technical experts, allowing TAF to generalize to a wider variety of situations over time.
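The demonstrate/model/predict/correct cycle described above can be illustrated with a minimal sketch. The class and method names below are hypothetical, and the "model" here is a simple nearest-neighbor lookup over recorded (state, action) pairs standing in for whatever machine-learning algorithm is actually used; corrections are treated as additional demonstrations, which is the essence of the retraining loop.

```python
import math


class TrainableAgent:
    """Hypothetical sketch of a trainable automated force.

    A 1-nearest-neighbor lookup over demonstrated (state, action)
    pairs plays the role of the learned behavior model; any
    supervised learner could be substituted.
    """

    def __init__(self):
        # Recorded demonstrations: list of (state_vector, action) pairs.
        self.demonstrations = []

    def record(self, state, action):
        """Step 1: a subject matter expert demonstrates behavior."""
        self.demonstrations.append((tuple(state), action))

    def act(self, state):
        """Step 3: predict what the expert would do in this situation
        by returning the action from the nearest demonstrated state."""
        if not self.demonstrations:
            raise RuntimeError("no demonstrations recorded yet")
        nearest = min(self.demonstrations,
                      key=lambda demo: math.dist(demo[0], state))
        return nearest[1]

    def correct(self, state, right_action):
        """When the agent behaves incorrectly, the expert steps in;
        a correction is simply another demonstration."""
        self.record(state, right_action)


# Toy usage: states are (own_altitude, threat_bearing) vectors.
agent = TrainableAgent()
agent.record((1.0, 0.0), "climb")
agent.record((0.0, 1.0), "evade")

print(agent.act((0.9, 0.1)))    # near the first demonstration -> "climb"

agent.correct((0.5, 0.5), "engage")
print(agent.act((0.5, 0.6)))    # after correction -> "engage"
```

Because a correction is stored exactly like an initial demonstration, the expert can refine the agent's behavior at any time without technical assistance, which mirrors the incremental training process the paragraph describes.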
We report here on the design and implementation of a prototype TAF capability, including both user interface design and experience with machine learning. In addition, we discuss the potential capabilities and limitations of TAF, surveying the inherent strengths and weaknesses of the general approach relative to other implementation techniques for automated forces in simulation-based training.