Simulation systems that pervade military training, mission rehearsal, and tactical decision making have successfully leveraged advances in computer hardware and M&S software to capture key properties of the represented world. These sophisticated systems present lifelike behavior to the user, but they are often difficult to use and time-consuming to learn, so human interaction with simulation-based training applications remains awkward. This is especially true when the training system has been designed from the perspective of the backend software application rather than the human user or the cognitive task in which the user will be engaged. The result is that students are unable to lose themselves in the simulated scenario because the training system itself demands their conscious attention. For a subset of simulation-based applications, the solution will involve mixed-initiative natural language dialog that lets the human mentally 'penetrate' the user interface to communicate directly with synthetic agents. Spoken natural language dialog, in particular, lets users control the simulation while keeping their eyes, hands, and focus of attention on the exercise and its representation in the simulation. This paper describes a software architecture for integrating mixed-initiative spoken dialog interaction into simulation systems and illustrates one use of that architecture to integrate a dialog-enabled intelligent tutoring system (ITS) with the multiplayer online game NeverWinter Nights™.
An Architecture for Incorporating Spoken Dialog Interaction with Complex Simulations