Trust in automation has long been a key human factor affecting individual and team performance. With the advent of synthetic teammates, where humans and machines must collaborate to ensure mission success, trust takes on an even more prominent role in mediating Human-Machine Team (HMT) interactions. Achieving calibrated trust, whereby humans appropriately trust an automated system or a synthetic teammate, is paramount to team cohesion and performance. To be effective, however, trust must be measured objectively and in real time, so that the system can be informed and appropriate trust calibration methods automatically engaged to regain operator confidence. The current study describes the development of a preliminary model of trust based on a variety of biobehavioral markers collected from operators engaged in a Search and Rescue (SAR) mission, in which human operators supervised intelligent Unmanned Aerial Vehicle (UAV) assets in a constructive synthetic environment. Biometric data from a wrist-worn sensor and behavioral markers from an eye tracker were used to develop a preliminary Machine Learning (ML) model of trust using Variational Autoencoders (VAEs) and clustering analysis. Trust labels came from the Continuous Online Numerical Score (CONS) measure and from Subject Matter Expert (SME) observations. Twenty-eight participants, including experienced UAV operators and novices, took part in the experiment. Operator trust was manipulated by varying the quality of the flight-path recommendations the UAVs provided while searching for survivors. Results report the reliability of our preliminary trust model, as well as correlations among biobehavioral markers, trust-related behaviors, and performance. The experiment is unique in providing insights for developing effective ML-driven, real-time, objective measures of trust. We also discuss the additional steps required to validate and generalize our real-time model of trust.
Keywords
HUMAN FACTORS; MACHINE LEARNING
Additional Keywords
Trust, Human-Machine Teaming, Team Performance
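The modeling pipeline described in the abstract, embedding biobehavioral feature windows into a latent space and clustering them into trust-related states, can be sketched as below. This is a minimal illustration under stated assumptions, not the authors' implementation: the feature values are synthetic, the learned VAE encoder is replaced by a fixed linear projection placeholder, and the k-means routine is a bare-bones stand-in for the clustering analysis. Cluster labels would then be aligned with CONS scores and SME observations to interpret them as trust states.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical biobehavioral feature windows (e.g., heart rate, skin
# conductance, fixation statistics): 200 windows x 6 features. In the study
# these would come from the wrist-worn sensor and eye tracker; here they are
# synthetic, with a crude offset standing in for two underlying trust states.
X = rng.normal(size=(200, 6))
X[:100] += 2.0

# Placeholder for the trained VAE encoder: a fixed linear projection into a
# 2-D latent space. A real VAE would learn this mapping; the clustering step
# that follows operates on the latent embeddings either way.
W = rng.normal(size=(6, 2))
Z = X @ W

def kmeans(Z, k=2, iters=50):
    """Minimal k-means over latent embeddings (illustrative stand-in)."""
    centers = Z[rng.choice(len(Z), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each embedding to its nearest center.
        labels = np.argmin(((Z[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Recompute each center as the mean of its assigned embeddings.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(axis=0)
    return labels, centers

labels, centers = kmeans(Z, k=2)
print(np.bincount(labels, minlength=2))  # windows assigned to each cluster
```

In practice the number of clusters, the latent dimensionality, and the mapping from clusters to trust labels are all modeling choices that would be driven by the CONS and SME ground truth rather than fixed in advance.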