Machine learning (ML) models are vulnerable to a range of attacks beyond conventional cyber and social-engineering exploits, including data poisoning and AI Trojans inserted during the training phase. AI/ML systems are also brittle and easy to confuse during inference: in a military context, a parked aircraft with a particular sticker applied to its fuselage might be miscategorized by an aided target recognition system as not an aircraft, or a tank camouflaged with enough foliage might be classified as a moving tree. This study assesses the current state of counter-AI and counter-counter-AI programs and research across DOD, the Intelligence Community, industry, and academia. It provides recommendations for how the Army can improve the testing and evaluation, validation, and protection of existing and future AI/ML models and the data supply chain, and improve the detection of attacks on AI/ML-enabled systems, the reaction to them, and the restoration of service afterward, a set of practices collectively termed AI assurance.
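
The inference-time brittleness described above is commonly demonstrated in the research literature with adversarial-example techniques such as the Fast Gradient Sign Method (FGSM). The sketch below is illustrative only and is not drawn from this study: it assumes a generic PyTorch image classifier, and the names `model`, `image`, and `label` are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    """Illustrative FGSM attack (Goodfellow et al., 2015): craft a small,
    often imperceptible perturbation that pushes a classifier toward
    misclassifying the input. Assumes `image` is a batch of pixel tensors
    scaled to [0, 1] and `label` holds the true class indices."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that maximally increases the classification
    # loss, bounded in magnitude by epsilon per pixel.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Physical-world variants of the same idea, such as printed adversarial patches, correspond to the sticker-on-fuselage scenario described above.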