Given the billions of dollars the Department of Defense has invested in AI solutions for the warfighter, it is imperative that industry and academia find ways for AI systems to perform within parameters, work in expected ways, and strive for improvement. Current AI constructs lack methodologies for deep neural network-based systems to introspect, reviewing past performance in a self-critical way to achieve next-level actions. An approach is needed to create environments in which AI systems can self-audit. Neural Fields (NF) are a class of deep learning techniques that create compact, coordinate-based representations of complex signals gathered by conventional sensors; as an example, Neural Radiance Fields (NeRF) apply this framework to 3D scene reconstruction by learning volumetric representations from RGB images. NF thereby offer a capacity to replicate worlds within which AI systems can review, rehash, and grade past actions to drive performance improvements.
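To make the coordinate-based framing concrete, the sketch below shows a toy neural field: a small MLP with sinusoidal positional encoding that maps a 3D query coordinate to an RGB-plus-density output, in the style of NeRF. All names (`TinyNeuralField`, `positional_encoding`) are illustrative, and the weights are random for demonstration; a real NF would be trained on sensor data.

```python
import numpy as np

def positional_encoding(x, num_freqs=4):
    # Map each coordinate to sin/cos features at increasing frequencies,
    # as used by coordinate-based networks such as NeRF.
    freqs = 2.0 ** np.arange(num_freqs)           # (F,)
    angles = x[..., None] * freqs * np.pi         # (..., D, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)         # (..., D * 2F)

class TinyNeuralField:
    # A two-layer MLP f(x, y, z) -> (r, g, b, density); weights would
    # normally be optimized against sensor observations, but are random here.
    def __init__(self, in_dim, hidden=32, out_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(0.0, 0.1, (hidden, out_dim))
        self.b2 = np.zeros(out_dim)

    def __call__(self, coords):
        # ReLU MLP over the encoded coordinates.
        h = np.maximum(positional_encoding(coords) @ self.w1 + self.b1, 0.0)
        return h @ self.w2 + self.b2

coords = np.array([[0.1, 0.2, 0.3]])              # one 3D query point
field = TinyNeuralField(in_dim=3 * 2 * 4)         # 3 dims, sin+cos, 4 freqs
out = field(coords)
print(out.shape)                                  # (1, 4)
```

The key property this illustrates is that the scene lives entirely in network weights: any continuous coordinate can be queried, which is what makes NF usable as a compact "replica world."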
Under the DoD Collaborative Combat Aircraft (CCA) effort, defense services are using Reinforcement Learning (RL) agents to develop autonomous capabilities for air-to-air engagements. Under the DARPA Air Combat Evolution (ACE) program and its follow-ons, air systems leverage AI to create air-intercept logic. These programs integrate unmanned aircraft with manned aircraft in Crewed-Uncrewed Teaming (CU-T) scenarios, requiring AI solutions that work within performance parameters, function in ways pilots expect, and self-improve. This paper examines the application of NF to mastering air combat. We propose that AI systems can master air combat through NF-based introspection, using neural representations as a safe zone for exploratory behavior and self-evaluation of RL agents. The paper explores interfaces between RL agents and NF environments, identifies ways for AI to evaluate its actions, and describes methods to backpropagate reward functions without altering existing capabilities. This work defines how to better cultivate AI solutions that advance CCA concepts.
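The interface between an RL agent and an NF environment can be sketched as a minimal, Gym-style loop in which logged actions are replayed inside the neural replica and graded step by step. Everything below is a hypothetical illustration under stated assumptions: `NeuralFieldEnv`, `replay_and_grade`, the 2D position state, and the distance-based reward are all stand-ins, and `render()` returns raw state where a real system would render an observation from a trained NF.

```python
import numpy as np

class NeuralFieldEnv:
    """Hypothetical RL interface to a neural-field 'replica world'.

    A real implementation would render observations from a trained NF;
    here render() is a stub that returns the agent position directly."""
    def __init__(self, goal):
        self.goal = np.asarray(goal, dtype=float)
        self.pos = np.zeros(2)

    def reset(self):
        self.pos = np.zeros(2)
        return self.render()

    def render(self):
        return self.pos.copy()   # stand-in for an NF-rendered observation

    def step(self, action):
        self.pos = self.pos + action
        # Reward shaped by distance to a goal; in this paper's framing,
        # grades from replayed actions would feed back into this function
        # without altering the deployed agent.
        reward = -np.linalg.norm(self.goal - self.pos)
        done = reward > -0.1
        return self.render(), reward, done

def replay_and_grade(env, trajectory):
    # Introspection pass: re-run logged actions in the NF replica and
    # grade each step by its reward, offline and without live-system risk.
    env.reset()
    return [env.step(a)[1] for a in trajectory]

env = NeuralFieldEnv(goal=[1.0, 1.0])
trajectory = [np.array([0.5, 0.5]), np.array([0.5, 0.5])]
grades = replay_and_grade(env, trajectory)   # per-step grades, improving
```

The design point is separation of concerns: the replica environment owns rendering and grading, so self-evaluation can run repeatedly on logged trajectories while the fielded agent's policy remains untouched.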