The Night Vision and Electronic Sensors Directorate (NVESD) develops sophisticated, trailer-based, deployable surveillance sensors and operator stations used to monitor areas near sensitive and hostile work locations. Effective training for system operators is limited by the expense of providing training targets in the live environment. In collaboration with NVESD, Lockheed Martin has developed a prototype embedded Augmented Reality training capability that injects realistic virtual training targets into the sensor displays. The system intercepts the sensor data before it reaches the sensor display computer, inserts virtual entities and effects, and forwards the result to the sensor display. The pros and cons of various insertion points are examined, along with the challenges encountered in synchronizing video images with the sensor metadata. The system integrates the Night Vision Image Generator (NVIG) technology developed at NVESD into the rendering system to ensure that virtual renderings match multiple sensor types, and it uses sensor data together with instructor controls to blend entities into varying environmental conditions. Finally, an innovative approach was developed to detect and mask video overlays, allowing virtual insertions to appear behind the overlays. The resulting prototype provides a robust training capability with dynamic virtual insertions at a fraction of the cost of live training scenarios.
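One of the synchronization challenges mentioned above is pairing each video frame with the sensor metadata record (e.g., pointing angles, field of view) that was valid when the frame was captured. A common approach, sketched below, is nearest-timestamp matching against a time-sorted metadata stream; the function name and data layout are illustrative assumptions, not the paper's actual interface.

```python
from bisect import bisect_left

def nearest_metadata(frame_ts, meta):
    """Pair a video frame timestamp with the closest metadata record.

    meta: list of (timestamp, record) tuples sorted by timestamp.
    Returns the record whose timestamp is nearest to frame_ts.
    (Illustrative sketch; the actual system's metadata format differs.)
    """
    times = [t for t, _ in meta]
    i = bisect_left(times, frame_ts)
    if i == 0:                      # frame precedes all metadata
        return meta[0][1]
    if i == len(meta):              # frame follows all metadata
        return meta[-1][1]
    before, after = meta[i - 1], meta[i]
    # Pick whichever neighbor is closer in time to the frame.
    if after[0] - frame_ts < frame_ts - before[0]:
        return after[1]
    return before[1]
```

In practice, metadata often arrives at a lower rate than video, so each metadata record is reused across several frames; interpolating between the two neighboring records is a refinement when the sensor is slewing quickly.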
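The overlay masking idea can be illustrated with a simple temporal-invariance heuristic: pixels that remain nearly constant across frames are assumed to be burned-in symbology (crosshairs, status text), and virtual entities are composited only outside that mask so they appear to sit behind the overlay. This is a minimal sketch under assumed grayscale frames and a variance threshold; the paper's actual detection method is not specified here.

```python
import numpy as np

def overlay_mask(frames, var_thresh=1.0):
    """Estimate a static-overlay mask from a short frame history.

    frames: (N, H, W) float array of grayscale frames.
    Pixels whose temporal variance is below var_thresh are flagged as
    suspected overlay; live scene pixels vary with motion and content.
    Returns an (H, W) boolean mask, True where an overlay is suspected.
    """
    return frames.var(axis=0) < var_thresh

def composite(frame, virtual, virtual_alpha, mask):
    """Blend a rendered virtual layer underneath the detected overlay.

    The virtual layer is alpha-blended into the scene everywhere EXCEPT
    where the overlay mask is set, so insertions appear behind the
    symbology rather than drawn over it.
    """
    blended = frame * (1.0 - virtual_alpha) + virtual * virtual_alpha
    return np.where(mask, frame, blended)
```

A key design choice in any such scheme is the frame history length: too short and slowly moving scene content is misclassified as overlay; too long and overlay elements that update (e.g., a clock readout) are missed.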