Virtual Environment (VE) applications must provide users with mechanisms for perceiving and manipulating virtual objects in order to be effective. Since VEs can present only a subset of the cues experienced in the real world, it is important to understand the impact of the cues a VE does support. This study explores direct object interaction in personal space in order to quantify the accuracy and performance time achieved in VEs, and to provide insight into which factors contribute to these measures.
The study used a stereoscopic Head Mounted Projection Display (HMPD), with inherently high accuracy in presenting the visual cues most important in personal space, and addressed the full continuum of VE types (Immersive Virtual Environment, Mixed Reality, and Reality) and sensory modalities (visual, audio, and touch) for a comprehensive evaluation. Two full-factorial between-subjects experiments were conducted, and the results provide key insights into the effect of each type of environment and modality on accurate and timely interaction with virtual objects.
The mean depth perception error in personal space was less than four millimeters whether the stimuli presented were real, virtual, or mixed; the mean error for the simple task, button tapping, was less than four millimeters whether the buttons were real or virtual; and the mean task completion time was less than one second. The high accuracy and rapid task performance observed were attributed to the accurate presentation of the visual cues, including occlusion, stereoscopy, accommodation, and convergence. With performance already near optimal with visual cues alone, adding proprioceptive, audio, and haptic cues did not significantly improve results.