Artificial intelligence (AI) is playing an increasingly important role in the Combatant Command Cyber Protection Team’s (CCMD CPT) planning process. With petabytes of past cyber incident data available, AI can be a useful tool for understanding the complex relationships among system components, vulnerabilities, and threats, and their implications for future missions. Because such AI often works alongside human cyber operators to support mission commanders’ decision-making, understanding the AI’s decisions and the rationale behind them can be key to the success of the human-AI team. An analyst or operator often needs to explain the AI’s analysis to a commander when recommending courses of action, so it is critical that the core decision factors, assumptions, uncertainties, and variables that drove the analysis be accessible to the human.
In this project, we designed a simulated cyber analyst that advises mission planners on target network systems: what has happened (incidents, vulnerabilities, threat presence), likely follow-on adversary activities, and where to monitor, harden, or counteract them. We synthesized a dataset of incident reports on past attacks on military networks. AI techniques with varying degrees of explainability were applied to this dataset to determine the vulnerabilities of a set of simulated target networks. We then developed explanation algorithms for each AI technique that convey the learned policies and how those policies were learned. These explanations were fed into the simulated cyber analyst to justify its vulnerability analysis and its recommended courses of action. The simulated cyber analyst was placed in an experimental testbed with simulated target networks to study how such explanations affect human-automation team performance. In this paper, we discuss our research into how transparency communication provided by the AI in a simulated cyber analyst can impact human-AI teaming in cyber operations.
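To make the pairing of an AI technique with an explanation algorithm concrete, the following is a minimal sketch, not the project's actual models: a transparent naive-Bayes-style scorer over synthetic incident features (all feature names here are hypothetical), whose per-feature log-odds weights double as a human-readable explanation of why a network state looks risky.

```python
import math

# Toy incident records (hypothetical features): binary indicators of what was
# observed on a network, labeled 1 if a follow-on adversary action occurred.
incidents = [
    ({"phishing": 1, "unpatched_vpn": 1, "lateral_scan": 0}, 1),
    ({"phishing": 1, "unpatched_vpn": 0, "lateral_scan": 1}, 1),
    ({"phishing": 0, "unpatched_vpn": 1, "lateral_scan": 1}, 1),
    ({"phishing": 0, "unpatched_vpn": 0, "lateral_scan": 0}, 0),
    ({"phishing": 1, "unpatched_vpn": 0, "lateral_scan": 0}, 0),
    ({"phishing": 0, "unpatched_vpn": 0, "lateral_scan": 1}, 0),
]

def train(records, alpha=1.0):
    """Laplace-smoothed log-odds weight per feature: how much more often the
    feature appears in incidents with follow-on activity than without."""
    n_pos = sum(y for _, y in records)
    n_neg = len(records) - n_pos
    weights = {}
    for feat in records[0][0]:
        p1 = (sum(x[feat] for x, y in records if y == 1) + alpha) / (n_pos + 2 * alpha)
        p0 = (sum(x[feat] for x, y in records if y == 0) + alpha) / (n_neg + 2 * alpha)
        weights[feat] = math.log(p1 / p0)
    return weights

def explain(weights, observation):
    """Score an observed network state and rank the features that drove it,
    so the analyst can surface the decision factors behind the score."""
    contrib = {f: w * observation.get(f, 0) for f, w in weights.items()}
    score = sum(contrib.values())
    ranked = sorted(contrib.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

weights = train(incidents)
score, ranked = explain(weights, {"phishing": 1, "unpatched_vpn": 1, "lateral_scan": 0})
```

Because every weight is a simple count-based log-odds, the ranked contributions can be read directly to a commander ("this network scored high mainly because of the unpatched VPN"), in contrast to opaque models whose explanations must be generated post hoc.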