Extending human sensory capabilities beyond their natural ranges (e.g., ultrasound, night vision) and even to entirely different modalities (e.g., particle radiation, magnetism) has vastly increased our ability to gather data and make better decisions. Such sensing technologies are critical in fields as diverse as medicine, geology, astronomy, ecology, and defense. However, for humans to interpret data from these sensors, the output must be translated into something they can perceive with their natural senses.

One such example is Synthetic Aperture Radar (SAR), a remote sensing technology used to create imagery of the Earth's surface. In SAR, signals from satellite-borne microwave emitters and sensors are typically converted to flattened grayscale images for human examination. The resulting imagery violates multiple fundamental properties and expectations of human visual experience: geometric distortions, multipath reflections, and other phenomena make SAR extremely difficult for humans to interpret.

To address this challenge, our interdisciplinary team developed a perceptual training platform that enables humans to interactively view SAR images taken from multiple sensor angles. During natural development, humans learn the correspondence between 2D images on the retina and 3D physical shapes by moving around objects in the real world. Our interactive virtual SAR environment enables a trainee to engage in a similar perceptual learning process with SAR imagery in a variety of modalities, including augmented reality (AR). We will report results of a prototype system study, including user interaction data and effects on trainees' speed and accuracy in identifying objects in SAR images. The results reveal how, with interactive perceptual training, human visual perception can adapt to the peculiarities of highly nonliteral sensor outputs such as SAR.
In addition to presenting findings, we will demonstrate the SAR-AR training platform and discuss how similar systems can be used for training other forms of nonliteral image analysis.
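The conversion of SAR sensor signals to flattened grayscale images mentioned above is commonly done by log-scaling the backscatter amplitude and clipping its dynamic range. A minimal sketch of this standard practice follows; the function name and the `db_floor`/`db_ceil` clipping parameters are illustrative assumptions, not details of the platform described here.

```python
import numpy as np

def sar_to_grayscale(complex_backscatter, db_floor=-30.0, db_ceil=0.0):
    """Convert complex SAR backscatter to an 8-bit grayscale image.

    A hedged sketch of common practice: db_floor/db_ceil are assumed
    clipping bounds for the displayed dynamic range.
    """
    amplitude = np.abs(complex_backscatter)
    # Log-scale to decibels; small epsilon avoids log(0).
    db = 20.0 * np.log10(amplitude + 1e-12)
    # Clip to the chosen dynamic range, then map it to [0, 255].
    db = np.clip(db, db_floor, db_ceil)
    gray = (db - db_floor) / (db_ceil - db_floor) * 255.0
    return gray.astype(np.uint8)
```

Because the log scaling discards phase and compresses intensity nonlinearly, the resulting image is already a nonliteral rendering of the scene, before any geometric distortion is considered.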
Keywords
3D; Augmented and Virtual Reality (AR/VR); Human Performance; Modeling; Simulations; Training; Visualization