Over the past decade, the DoD has invested significant resources in developing virtual practice environments for training joint terminal attack controllers (JTACs), forward observers (FOs), and joint fires observers (JFOs). These practice environments feature realistic visuals and functional, true-to-life equipment (such as binoculars, mapping tools, and radios). However, executing realistic training scenarios with this technology requires numerous simulation operators and instructor personnel to serve as role players and provide training support for each student. By exploiting recent advances in artificial intelligence, these environments can be enhanced to act as a force multiplier for those personnel, yielding a more realistic, individualized experience for the warfighter. In this paper, we describe specific components and functions of virtual JFO/JTAC training that can be enhanced by artificial intelligence technology, as well as the specific algorithms and components that can be brought to bear. Specifically, we describe methodologies for controlling air and ground support assets, such as close air support (CAS) aircraft and Fire Direction Centers (FDCs), and address the use of natural language processing and generation technologies to connect those assets verbally to the student. Finally, we describe an existing effort sponsored by STTC in which these technologies are being integrated into and demonstrated within an existing virtual JFO training system.
Autonomy Requirements for Virtual JFO Training