As low-cost commercial video game sensors become widely available, the realistic full-body interactions they enable in the household can also support low-cost dismounted Soldier training applications. These sensors, such as the Microsoft Kinect, are designed for users directly facing them. An environment designed to train a team, however, requires a larger space and the freedom to maneuver and turn in any direction, interactions that the standard household video game configuration does not reliably support. In this paper, the use of multiple Kinects arranged around a large area is examined, giving multiple Soldiers freedom of mobility and 360-degree turning while wearing a Head Mounted Display. The skeletal recognition output of the Microsoft Kinect Software Development Kit is shown to be mergeable, using commercially available tools and advanced fusion algorithms, into higher-quality representations of real-world users within a virtual environment. While a single Kinect often loses tracking of parts of a user, this paper shows that several Kinects coupled with inference algorithms produce a much better tracked representation as users move around. Furthermore, the use of depth images alongside the skeletal representations is examined to refine the fusion algorithms when bandwidth is available. Finally, it is shown how these techniques can take several skeletal representations in the virtual scene and merge them into a single virtual representation of one user. This system expands the viability of low-cost commercial solutions for Soldier training in complex virtual environments.
Multi-Kinect Tracking for Dismounted Soldier Training
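The fusion step described in the abstract, merging several per-sensor skeletons into one representation of a user, can be sketched as a confidence-weighted average of joint positions. This is a minimal illustration, not the paper's actual algorithm: the `JointObs` type, the numeric confidence weights (standing in for the Kinect SDK's Tracked/Inferred/NotTracked joint states), and the assumption that all skeletons are already transformed into a common world frame are all simplifications introduced here.

```python
from dataclasses import dataclass

# Hypothetical joint observation: a 3D position plus a tracking confidence
# in [0, 1]. The Kinect SDK reports discrete joint tracking states; mapping
# them to numeric weights is an assumption of this sketch.
@dataclass
class JointObs:
    x: float
    y: float
    z: float
    confidence: float

def fuse_joint(observations):
    """Confidence-weighted average of one joint as seen by several sensors.

    A sensor that has lost tracking of the joint (confidence 0.0) contributes
    nothing, so any sensor with a clear view dominates the fused estimate.
    """
    total = sum(o.confidence for o in observations)
    if total == 0.0:
        return None  # joint not tracked by any sensor this frame
    x = sum(o.x * o.confidence for o in observations) / total
    y = sum(o.y * o.confidence for o in observations) / total
    z = sum(o.z * o.confidence for o in observations) / total
    return (x, y, z)

def fuse_skeleton(skeletons):
    """Merge per-sensor skeletons (dicts of joint name -> JointObs), assuming
    each has already been transformed into a shared world coordinate frame."""
    joints = set().union(*(s.keys() for s in skeletons))
    return {j: fuse_joint([s[j] for s in skeletons if j in s]) for j in joints}

# Example: a front-facing sensor sees both joints; a side sensor has lost
# the left hand (confidence 0.0), so the fused hand comes from the front view.
front = {"head": JointObs(0.0, 1.7, 2.0, 1.0),
         "left_hand": JointObs(-0.3, 1.0, 2.0, 0.5)}
side = {"head": JointObs(0.1, 1.7, 2.0, 0.5),
        "left_hand": JointObs(-0.3, 1.0, 2.1, 0.0)}
fused = fuse_skeleton([front, side])
```

In a full system, the weighting could also incorporate each sensor's viewing angle and the depth-image agreement mentioned in the abstract, rather than per-joint confidence alone.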