The U.S. Army Research Laboratory Human Research and Engineering Directorate, Simulation and Training Technology Center (ARL-HRED STTC) performs research and development on creating realistic, individualized virtual avatars from live subjects that retain the subject's physical characteristics and appearance, including height, weight, skeletal dimensions, body morphology, and facial/body appearance. While photogrammetric extraction technologies are maturing, a number of additional steps must still be performed to "virtualize" live humans into game-ready avatars. Game-ready in this context means that the mesh stretches properly with motion, that sufficient level-of-detail options exist, and that the polygon count is optimized for real-time rendering on commercial graphics adapters and central processing units. Photogrammetric algorithms that extract mesh information from 3D subjects also do not inherently produce the underlying bone structure (rigging) required for avatars to move in virtual environments. A novel integrated system approach was developed that leverages low-cost data capture systems and targets automation of all the steps necessary to go from a live human to a high-fidelity, game-ready avatar. This paper discusses the trade spaces associated with various photogrammetric techniques/algorithms, commercial software packages, data capture approaches, subject lighting, frame occupancy, and the impact of subject motion during data collection, as well as converting the initially very dense mesh, through "retopologization", into optimized levels of detail that are properly weighted to a virtual bone system. Each step in the process is discussed along with approaches for automation and the trade spaces that affect the quality of the outcome.
Virtualizing Humans for Game Ready Avatars
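The weighting of a mesh to a virtual bone system that the abstract describes is commonly realized with linear blend skinning, in which each vertex position is a weighted blend of the positions produced by the bones that influence it. The sketch below illustrates that standard technique for a single vertex; it is an assumption for illustration, not necessarily the exact method used in the paper's pipeline.

```python
import numpy as np

def linear_blend_skinning(vertex, bone_matrices, weights):
    """Deform one vertex by blending per-bone transforms (LBS).

    vertex: (3,) rest position; bone_matrices: iterable of 4x4 transforms;
    weights: per-bone influence weights, expected to sum to 1.
    """
    v = np.append(vertex, 1.0)  # homogeneous coordinate
    blended = sum(w * (M @ v) for w, M in zip(weights, bone_matrices))
    return blended[:3]

# Two-bone example: bone 0 stays at rest, bone 1 translates +1 along x.
rest = np.array([0.0, 0.0, 0.0])
m0 = np.eye(4)
m1 = np.eye(4)
m1[0, 3] = 1.0
# Equal weights pull the vertex halfway between the two bone results:
# 0.5 * (0, 0, 0) + 0.5 * (1, 0, 0) = (0.5, 0, 0)
deformed = linear_blend_skinning(rest, [m0, m1], [0.5, 0.5])
```

In a full pipeline, the per-vertex weights would be assigned automatically during rigging of the retopologized mesh, and the same blend is evaluated for every vertex at every animation frame, typically on the GPU.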