Looking to the future, the battlefield promises to be exceptionally complex. To maintain overmatch across competition and conflict, warfighters will likely rely increasingly on artificial intelligence (AI) teammates. A key issue that emerges is the extent to which AI teammates are trusted and, ultimately, whether AI “trust” humans. Simply put, gains in overmatch cannot be achieved in the absence of a foundation of trust in other intelligences, a foundation that enables collective speed, awareness, and adaptation. The advantage of interdependence has been emphasized historically in the context of Mission Command, and it will remain so with AI partners. The challenge is that trust can seem elusive and, as such, difficult to assess and train. However, we illustrate how trust can be understood by appealing to behaviorally observable indicators of trustworthiness. A programmatic strategy for human-AI trustworthiness should (a) identify actions of other intelligences that, in context, are indicative of trustworthiness; (b) describe how such actions, and the context for them, can be observed and measured; and (c) develop training that equips the observation, orientation, decision, and action (OODA) loops of humans and AI to use these observables. To ground these claims, we review evidence that seemingly elusive human traits (e.g., character as well as competence) are observable and trainable when the micro-experiences inherent in everyday military settings are leveraged. Likewise, we illustrate how AI already exhibit behavior that is similarly observable and consequential. We emphasize trajectories of co-learning among entities that adapt their OODA loops based on their shared experience in context. These trajectories are continually shaped, with good or bad outcomes, whether intended or not and whether attended to or not. Yet, by leveraging observables, these considerations are neither elusive nor abstract; they are concrete and right before our eyes.
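As an illustration only (the article proposes no implementation), the idea of behaviorally observable trust indicators feeding an OODA-style update can be sketched in code. Every name, indicator, threshold, and weight below is hypothetical, chosen merely to show how observed actions in context might be recorded and folded into a running trustworthiness estimate.

```python
from dataclasses import dataclass, field


@dataclass
class ObservedAction:
    """A single behaviorally observable action by a teammate (human or AI).

    All fields are hypothetical examples of the kinds of observables the
    strategy calls for: what was done, in what context, and how it turned out.
    """
    actor: str            # e.g., "AI-wingman" or "squad-leader"
    context: str          # e.g., "degraded-comms patrol"
    action: str           # e.g., "flagged low confidence before recommending a route"
    outcome_score: float  # -1.0 (undermined the team) .. +1.0 (supported the team)


@dataclass
class TrustEstimate:
    """Running, evidence-based estimate of a teammate's trustworthiness."""
    value: float = 0.5                      # 0.0 = no trust, 1.0 = full trust (assumed scale)
    history: list = field(default_factory=list)

    def update(self, obs: ObservedAction, learning_rate: float = 0.2) -> None:
        """Orient step: nudge the estimate toward the newly observed evidence."""
        evidence = (obs.outcome_score + 1.0) / 2.0   # map [-1, 1] onto [0, 1]
        self.value += learning_rate * (evidence - self.value)
        self.history.append(obs)


def ooda_step(trust: TrustEstimate, obs: ObservedAction) -> str:
    """One simplified pass through observe -> orient -> decide -> act."""
    trust.update(obs)                                # observe + orient
    if trust.value >= 0.7:                           # decide (hypothetical threshold)
        return f"Delegate task to {obs.actor}"       # act
    return f"Verify {obs.actor}'s recommendation before acting"


if __name__ == "__main__":
    trust = TrustEstimate()
    obs = ObservedAction(
        actor="AI-wingman",
        context="degraded-comms patrol",
        action="flagged low confidence before recommending a route",
        outcome_score=0.8,
    )
    print(ooda_step(trust, obs))
```

The point of the sketch is not the arithmetic but the structure: trustworthiness is treated as something inferred from concrete, logged observables rather than as an abstract trait.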
Keywords
AI, MACHINE LEARNING, MEASURES, MILITARY LEARNING
Additional Keywords
co-learning, trust