In this paper, we use natural language dialogue processing as a means to understand and assess team effectiveness. In particular, we explore which dialogue-related aspects contribute to a team's success. We use transcriptions from two military training exercises, TADMUS (U.S. Navy) and Squad Overmatch (U.S. Army), both designed to improve team decision-making under stress. These exercises were scored by subject matter experts on a variety of indicators of team effectiveness, e.g., team development (TD), advanced situational awareness (ASA), situation updates, stating priorities, error correction, brevity, and clarity. We annotate portions of the TADMUS and Squad Overmatch datasets with information about dialogue participation (addressees), content and meaning (dialogue acts), and dialogue structure (transactions). We also annotate Squad Overmatch with dialogue actions relevant to TD (e.g., providing information up and down the chain of command) and ASA (e.g., identifying and describing threats). We build machine learning models for automatic dialogue act labeling, and use both manually annotated and automatically extracted dialogue-related features to compute correlations between indicators of team effectiveness and dialogue-related features. Our annotations show that requesting and providing information are strongly correlated with how teams were rated on TD and ASA, and that identifying and describing threats correlates with ratings on TD (but not ASA, likely due to data sparsity). Additionally, for each indicator of team effectiveness, certain dialogue acts exhibit strong correlations with that indicator. We conclude with a discussion of how our work can be extended and applied to automatically analyzing team communication and assessing team effectiveness.
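To make the correlation analysis concrete, the following is a minimal sketch of relating a per-team dialogue-act frequency to an expert rating of an effectiveness indicator. The data values, the specific dialogue act, and the choice of Spearman's rank correlation are illustrative assumptions, not the paper's exact setup.

```python
# Hypothetical sketch: correlating per-team dialogue-act frequencies
# with subject-matter-expert effectiveness ratings.
from scipy.stats import spearmanr

# One entry per team: relative frequency of a dialogue act
# (e.g., information requests) and the SME rating for an
# effectiveness indicator (e.g., TD, on an assumed 1-5 scale).
info_request_freq = [0.12, 0.08, 0.21, 0.15, 0.05, 0.18]
td_rating = [3, 2, 5, 4, 1, 4]

# Spearman's rho is one reasonable choice for ordinal ratings.
rho, p_value = spearmanr(info_request_freq, td_rating)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```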
Keywords
MACHINE LEARNING; TEAM TRAINING
Additional Keywords
team effectiveness, dialogue analysis, military training