Research Network, Inc., in conjunction with Northrop Grumman and the U.S. Army Simulation and Training Technology Center (STTC), has been performing research into the extraction of simulated sensor data for use in prototyping, training, and evaluating unattended ground systems for the U.S. military. As part of this research, 3D spatial audio concepts have been developed and implemented on an existing program. The 3D spatial audio simulation has tremendous utility in the dismounted and mounted immersive environments used for training at STTC. Although this technical challenge has been researched for many years and many approaches have been designed, developed, and studied, a viable system that exploits the availability of high-fidelity, low-cost gaming engines is still lacking. The basis of these studies is that a Soldier immersed in a virtual training environment must be able to sense the direction and distance of sound sources as he moves through the virtual world. The concept developed here is based on true 3D geometry computations and virtual mixers that preserve the sound source implementations in the virtual environment. Representing 3D spatial audio requires head instrumentation (or an instrumented facility) that can effectively sample the 3D spatial field; while such facilities exist, they have not typically been used for this application. This paper describes the implementation of 3D spatial audio as extracted from a high-fidelity, low-cost gaming engine and presents the challenges overcome through use of the 3D metadata, sound source representation, acoustic attenuation, the Doppler effect, and other acoustic clutter gleaned from the virtual environment. Designs for real-time 3D spatial representation for immersed humans are also presented, along with a novel headset design for evaluating approaches.
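
Because the approach hinges on true 3D geometry computations (source direction and distance relative to the listener), distance attenuation, and the Doppler effect, a minimal sketch of those computations is given below. This is an illustration only, under assumed conventions: the coordinate frame, function name, and parameters are not drawn from the system described in this paper.

    # Minimal sketch: direction, distance attenuation, and Doppler shift for one
    # source relative to one listener. Vectors are (x, y, z) tuples; names and
    # parameter choices are illustrative assumptions, not the paper's design.
    import math

    SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 C

    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def dot(a, b): return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]
    def norm(a):   return math.sqrt(dot(a, a))

    def spatialize(src_pos, src_vel, lis_pos, lis_vel, lis_forward, lis_right,
                   base_freq, ref_dist=1.0):
        rel = sub(src_pos, lis_pos)            # vector from listener to source
        dist = max(norm(rel), 1e-6)            # avoid divide-by-zero at the listener
        u = tuple(c / dist for c in rel)       # unit direction listener -> source

        # Azimuth in the listener's horizontal frame (0 = straight ahead).
        azimuth = math.atan2(dot(rel, lis_right), dot(rel, lis_forward))

        # Inverse-distance attenuation, clamped so gain <= 1 inside ref_dist.
        gain = ref_dist / max(dist, ref_dist)

        # Doppler: v_l = listener speed toward the source (raises pitch),
        # v_s = source speed away from the listener (lowers pitch).
        v_l = dot(lis_vel, u)
        v_s = dot(src_vel, u)
        freq = base_freq * (SPEED_OF_SOUND + v_l) / (SPEED_OF_SOUND + v_s)

        return azimuth, dist, gain, freq

In a full implementation, the azimuth (and an analogous elevation term) would drive the virtual mixer's per-channel gains, while the gain and frequency outputs would scale the source's amplitude and playback rate per audio frame.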