Acquiring real-life training data for battlefield object identification is costly and time-consuming, requiring human intervention at every step. We lay the hypothetical foundation for rapidly developing Artificial Intelligence (AI) object-recognition models based solely on available 3-Dimensional (3D) models and synthetic images to provide accurate battlefield object detection.
Several studies have shown the efficacy of leveraging 3D models for object identification in AI model training when used in conjunction with real-life imagery. We expand upon these studies and explore the possibility of training AI models for battlefield object recognition solely from 3D models and synthetic images. Successful object detection and classification with AI algorithms depends heavily on the availability of training data (e.g., labelled images). Although large repositories of labelled images exist and continue to be generated for research purposes, most labelling is generalized to broad object types rather than the level of specificity required to produce useful object-detection algorithms for battlefield applications. Real-world training imagery is scarce, and labelling is a time-intensive, human-in-the-loop process. In practice, thousands of images of two similar but distinct items of interest (e.g., an M1A1 Abrams Tank vs. a Panther Tank) are required to train an AI model to a high level of confidence. We explore the practicality of using existing high-fidelity gaming objects and future digital twins for rapid, automated generation of high-volume AI model training data, enabling expedited deployment of AI-powered applications that improve battlefield situational awareness.
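The key advantage of synthetic data described above is that labels come for free: because the renderer knows the object's class and 3D pose, bounding-box annotations can be computed rather than hand-drawn. The following minimal sketch illustrates this idea. It is a hypothetical, render-free stand-in (not the pipeline used in this work): it projects the corners of a 3D bounding box through a simple pinhole camera at randomly sampled yaw angles and emits YOLO-format label lines. A real pipeline would replace the unit cube with mesh extents from a game asset or digital twin and pair each label with a rendered image.

```python
import math
import random

IMG_SIZE = 640  # assumed square output image, in pixels

def rotate_y(p, theta):
    """Rotate a 3D point about the vertical (y) axis by theta radians."""
    x, y, z = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def project_point(p, cam_dist=10.0, focal=1.0):
    """Pinhole projection onto a square image plane.
    The camera sits at (0, 0, -cam_dist) looking down the +z axis."""
    x, y, z = p
    z_cam = z + cam_dist  # depth in camera coordinates
    u = focal * x / z_cam
    v = focal * y / z_cam
    # Map normalized image-plane coordinates to pixel coordinates.
    return (u + 0.5) * IMG_SIZE, (v + 0.5) * IMG_SIZE

def yolo_label(class_id, corners3d, theta):
    """Project the model's 3D bounding-box corners at a sampled yaw angle
    and emit one YOLO-format label line:
    'class cx cy w h' with coordinates normalized to [0, 1]."""
    pts = [project_point(rotate_y(c, theta)) for c in corners3d]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    x0, x1 = max(min(xs), 0.0), min(max(xs), IMG_SIZE)
    y0, y1 = max(min(ys), 0.0), min(max(ys), IMG_SIZE)
    cx, cy = (x0 + x1) / 2 / IMG_SIZE, (y0 + y1) / 2 / IMG_SIZE
    w, h = (x1 - x0) / IMG_SIZE, (y1 - y0) / IMG_SIZE
    return f"{class_id} {cx:.4f} {cy:.4f} {w:.4f} {h:.4f}"

# Hypothetical unit-cube "model"; a real pipeline would use mesh extents.
CUBE = [(sx, sy, sz)
        for sx in (-0.5, 0.5)
        for sy in (-0.5, 0.5)
        for sz in (-0.5, 0.5)]

random.seed(0)  # reproducible pose sampling for this sketch
labels = [yolo_label(0, CUBE, random.uniform(0.0, 2.0 * math.pi))
          for _ in range(3)]
for line in labels:
    print(line)
```

Because pose, lighting, and background can all be sampled programmatically, a loop like this can emit thousands of image-label pairs per model per hour with no human labelling, which is the core of the argument for synthetic training data.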