Artificial intelligence (AI) has become an essential element of the modeling and simulation industry, and particularly of military training and education. AI is a human-designed and human-produced ability (as opposed to a naturally occurring one) to learn, sense (take in and judge information), think abstractly, and apply knowledge and skill to favorably manipulate an environment in pursuit of goals (van Lent, 2019). AI approaches take many forms (e.g., machine learning, intelligent agents, computer vision, and natural language understanding and generation).
AI is used to create computer-based augmentations for training and education: intelligent forces, virtual characters, instructional guides and coaches, and methods of performance assessment. AI is also used to model trainees and to decide what recommendations, feedback, and support a computer-based instructor should provide, or how scenario difficulty should be adjusted based on learner performance and predicted success.
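To make the learner-modeling idea concrete, the following is a minimal, hypothetical sketch: a learner model that tracks an estimate of success probability, and a policy that nudges scenario difficulty to keep the trainee in a target challenge band. All class names, thresholds, and learning-rate values here are illustrative assumptions, not values from the literature cited above.

```python
class LearnerModel:
    """Tracks a moving estimate of a trainee's probability of success.

    A hypothetical sketch; real learner models (e.g., in intelligent
    tutoring systems) are typically far richer.
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha        # weight given to the most recent outcome
        self.p_success = 0.5      # prior estimate before any observations

    def update(self, succeeded):
        """Blend the latest trial outcome into the running estimate."""
        self.p_success = (1 - self.alpha) * self.p_success + self.alpha * float(succeeded)


def adjust_difficulty(level, p_success, low=0.4, high=0.8):
    """Raise difficulty when success is too easy, lower it when the
    trainee is struggling; otherwise hold the current level."""
    if p_success > high:
        return level + 1
    if p_success < low and level > 1:
        return level - 1
    return level
```

Used in a training loop, the model is updated after each scenario and the policy is consulted before the next one, e.g. `level = adjust_difficulty(level, model.p_success)`.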
Military acquisition agencies regularly solicit and receive AI solutions as part of deliverable training simulations, and validating the effect of these delivered solutions on military learning, performance, and readiness is often tedious (Toubman, 2019; Ovalle, 2019; Campbell & Bolton, 2005; Wallace & Laird, 2003). Evaluations often take the form of formal learning-effectiveness studies involving human participants, extensive institutional reviews, data collection, and finally the application of analysis methods. This paper examines intelligent agent-based methods to accelerate the evaluation of the effectiveness of AI implementations in training simulations. The goal of this research is to discover innovative methods that accurately measure effectiveness without the need for extensive research studies or training effectiveness evaluations. The paper seeks to answer the question: what methods, services, and processes are needed in an AI testbed to rapidly model and evaluate the effectiveness of various types of AI in training?
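One way to picture the agent-based alternative to human-participant studies is to run cohorts of simulated learners through a training condition with and without a given AI feature and compare the resulting learning gains. The sketch below is purely illustrative: the learning-rate numbers, the assumed benefit of AI coaching, and the function names are assumptions introduced here, not results or methods from the cited work.

```python
import random


def simulated_learner_trial(uses_ai_support, n_trials=40, rng=None):
    """One synthetic learner practicing a task; returns final skill in [0, 1].

    The per-trial gains are illustrative assumptions: AI coaching is
    modeled as a larger increment per practice trial.
    """
    rng = rng or random.Random()
    skill = 0.2                                  # assumed starting proficiency
    gain = 0.035 if uses_ai_support else 0.02    # assumed benefit of AI support
    for _ in range(n_trials):
        if rng.random() < skill:                 # success consolidates skill
            skill = min(1.0, skill + gain)
        else:                                    # failure yields a smaller gain
            skill = min(1.0, skill + gain / 2)
    return skill


def estimate_effect(n_learners=500, seed=0):
    """Mean final-skill difference (AI condition minus baseline) across
    two simulated cohorts; stands in for a human effectiveness study."""
    rng = random.Random(seed)
    with_ai = [simulated_learner_trial(True, rng=random.Random(rng.random()))
               for _ in range(n_learners)]
    without = [simulated_learner_trial(False, rng=random.Random(rng.random()))
               for _ in range(n_learners)]
    return sum(with_ai) / n_learners - sum(without) / n_learners
```

The point of such a harness is not the particular numbers but the workflow: the comparison that would normally require recruiting participants and institutional review can be rehearsed in minutes against simulated trainees, with human studies reserved for validating the most promising configurations.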