Virtual Environment (VE) technologies have often been hailed as the ultimate solution for providing comprehensive, affordable, and flexible training. Yet, despite the enormous amounts of time and money invested in the development of these devices, the results have, by and large, failed to live up to expectations. This is likely due to significant mismatches between the virtually presented environment and the anticipated real-world one, resulting in a modern-day version of Osgood's (1949) Similarity Paradox. Recent work indicates that there are at least two factors that limit the 'spectral range' of current VE training experiences. First, VE systems supply information primarily through visual and non-spatialized audio channels, limiting the quantity and quality of information conveyed to the trainee. Second, current technologies create an environment in which much of the experience is highly scripted, failing to deliver a user-specific training experience. This suggests that there is much to be gained by recasting Osgood's challenge as an information-processing problem.
This paper will focus on a "sensory-multiplexing" approach being developed to create adaptive VE training systems that optimize user cognitive and emotional engagement and that naturally direct the user towards appropriate learning strategies. Two lines of investigation are currently being pursued. The first focuses on developing a VE-based training system, using human-centric design principles, to provide Marines with training in Close Quarters Battle (CQB) at both the individual and team level. The second focuses on demonstrating, in the laboratory as well as operationally, that objective measures of attention, arousal, and cognitive workload can be gleaned from the output of non-invasive physiological sensors. When the results from the two efforts are integrated into a single system, the resultant information could be used to adaptively titrate the user's level of arousal and to direct or re-direct his/her attention as needed.
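To make the closed-loop idea concrete, the sketch below illustrates one way such adaptive titration might work: a controller keeps an arousal estimate (however it is derived from the physiological sensors) inside a target band by nudging scenario intensity up or down. This is a minimal illustration only; the class, the band thresholds, and the step size are all hypothetical and are not part of the systems described in this paper.

```python
from dataclasses import dataclass

@dataclass
class ArousalController:
    """Hypothetical closed-loop controller: holds an arousal estimate
    within a target band by adjusting scenario intensity."""
    target_low: float = 0.4   # assumed lower bound of the desired band
    target_high: float = 0.7  # assumed upper bound of the desired band
    intensity: float = 0.5    # current scenario intensity, 0..1
    step: float = 0.05        # adjustment size per update

    def update(self, arousal: float) -> float:
        if arousal < self.target_low:
            # Under-aroused: raise intensity to increase engagement.
            self.intensity = min(1.0, self.intensity + self.step)
        elif arousal > self.target_high:
            # Over-aroused: lower intensity to avoid overload.
            self.intensity = max(0.0, self.intensity - self.step)
        # Inside the band: hold steady.
        return self.intensity

controller = ArousalController()
# Simulated stream of arousal estimates from a physiological sensor.
for reading in [0.2, 0.3, 0.55, 0.9, 0.85]:
    level = controller.update(reading)
```

In a deployed system, the intensity value would drive scenario parameters (pacing, number of threats, ambient stressors) rather than a bare number, and the arousal estimate would come from the sensor-fusion work described above.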