Abstract
Decision-making in high-stakes environments depends on effectively managing user cognitive load and minimizing uncertainty. Artificial Intelligence (AI)-enabled decision-support tools are increasingly used in military and civilian settings to assist with complex cognitive tasks and reduce the burden on human decision-makers. Understanding how trust is formed and how alignment between AI and users influences this process is critical to ensuring AI technologies are effectively integrated into the decision-making cycle and are trusted and relied upon appropriately.
This study examines how explanations of AI decision-making influence user trust in AI within a medical triage context. We are interested in whether explanations of the AI’s recommendations enhance trust, particularly when users perceive themselves as aligned with the AI (i.e., they would make similar decisions). We employed an experimental design using survey data from civilians with no prior medical or triage experience (n = 96). Participants evaluated triage scenarios and made treatment decisions under conditions that varied in the presence or absence of AI-generated explanations. A random-effects multilevel ordinal logistic regression model revealed significant associations between alignment scores and user trust in the AI. Respondents who aligned with the AI had 1.87 times the odds of reporting a higher trust score; in the absence of explanations, respondents had 0.64 times the odds of reporting a higher trust score. These findings suggest that, alongside explanation, alignment is a key factor in trust calibration.
We integrated these findings with recent work on decision-making in high-stakes environments to present a framework of human-AI trust under conditions of high-uncertainty, high-stakes decisions. This framework aims to inform the development of AI systems that better align with human decision-makers, enhance trust calibration, and optimize AI integration into critical decision workflows. We discuss factors that enhance alignment between medical triage decision-makers and AI in conflict zones and mass casualty events, and we draw implications for AI deployment, design, and user training.