The Dismounted Soldier Training System (DSTS) is a program of record, with systems fielded by PEO STRI throughout the U.S. Army. The system provides a hardware platform that instruments each Soldier trainee with eight worn Inertial Measurement Unit (IMU)-based motion tracking sensors and a motion-tracked, instrumented weapon. The U.S. Army Research Laboratory, Human Research and Engineering Directorate, Simulation and Training Technology Center (ARL-HRED-STTC) is performing research and development to leverage the motion tracking capabilities of DSTS, as well as emerging motion tracking technologies, to achieve a more seamless and natural fusion of a Soldier's physical movements with the avatar's body movement within the virtual environment and its interactions with objects in that environment. Achieving this objective requires injecting real-time data from the motion tracking system into the animation system of the underlying game engine to control the virtual avatar. Game engine frameworks provide mechanisms that support such injection, including forward and inverse kinematic solvers and animation blending. Individually, these features are adequate for simple representations of a Soldier's actions, but more complex actions require a fusion of techniques. This paper describes our approach to the challenges of fusing multiple animation techniques, with the goal of sustaining the suspension of disbelief that the virtual avatar's motion is entirely the motion of a single Soldier.
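
For illustration only (this is not the DSTS code base), the C++ sketch below shows one such fusion in miniature: per-bone orientations estimated from worn IMUs are blended with a baseline animation pose via quaternion slerp, and a forward kinematic pass then propagates the blended local rotations down the skeleton hierarchy. All names here (Quat, Bone, solvePose, sensorWeight) are hypothetical.

```cpp
// Minimal sketch (hypothetical, not the DSTS implementation): blend each
// bone's animation pose with its IMU pose, then run forward kinematics.
#include <cmath>
#include <cstdio>
#include <vector>

struct Quat { float w, x, y, z; };

// Spherical linear interpolation between two unit quaternions.
Quat slerp(const Quat& a, Quat b, float t) {
    float dot = a.w*b.w + a.x*b.x + a.y*b.y + a.z*b.z;
    if (dot < 0.0f) { b = {-b.w, -b.x, -b.y, -b.z}; dot = -dot; }
    if (dot > 0.9995f) {  // nearly parallel: fall back to normalized lerp
        Quat r{a.w + t*(b.w-a.w), a.x + t*(b.x-a.x),
               a.y + t*(b.y-a.y), a.z + t*(b.z-a.z)};
        float n = std::sqrt(r.w*r.w + r.x*r.x + r.y*r.y + r.z*r.z);
        return {r.w/n, r.x/n, r.y/n, r.z/n};
    }
    float th = std::acos(dot);
    float sa = std::sin((1.0f-t)*th) / std::sin(th);
    float sb = std::sin(t*th) / std::sin(th);
    return {sa*a.w + sb*b.w, sa*a.x + sb*b.x,
            sa*a.y + sb*b.y, sa*a.z + sb*b.z};
}

Quat mul(const Quat& a, const Quat& b) {   // Hamilton product
    return {a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
            a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
            a.w*b.y - a.x*b.z + a.y*b.w + a.z*b.x,
            a.w*b.z + a.x*b.y - a.y*b.x + a.z*b.w};
}

struct Bone {
    int parent;         // index of parent bone, -1 for the root
    Quat animLocal;     // local rotation from the playing animation clip
    Quat sensorLocal;   // local rotation estimated from the worn IMU
    float sensorWeight; // 0 = pure animation, 1 = pure sensor data
};

// Forward kinematics over a skeleton ordered parent-before-child:
// world rotation = parent world rotation * blended local rotation.
std::vector<Quat> solvePose(const std::vector<Bone>& skel) {
    std::vector<Quat> world(skel.size());
    for (size_t i = 0; i < skel.size(); ++i) {
        Quat local = slerp(skel[i].animLocal, skel[i].sensorLocal,
                           skel[i].sensorWeight);
        world[i] = (skel[i].parent < 0)
                   ? local
                   : mul(world[skel[i].parent], local);
    }
    return world;
}

int main() {
    // Toy two-bone chain: torso (root) driven mostly by its IMU,
    // upper arm driven entirely by the animation clip.
    const Quat ident{1, 0, 0, 0};
    const Quat yaw90{0.7071f, 0, 0.7071f, 0};  // 90 degrees about +Y
    std::vector<Bone> skel = {
        {-1, ident, yaw90, 0.8f},
        { 0, yaw90, ident, 0.0f},
    };
    for (const Quat& q : solvePose(skel))
        std::printf("w=%.3f x=%.3f y=%.3f z=%.3f\n", q.w, q.x, q.y, q.z);
}
```

The per-bone sensorWeight is where the fusion decision lives in this sketch: a weight of 1 lets the IMU fully drive a bone, 0 leaves the animation clip in control, and intermediate values mix the two, which is the kind of per-limb compromise that simple blending alone cannot resolve for more complex actions.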