The USMC is committed to investing time and capital in developing autonomous systems that will aid its Marines. However, autonomous systems are useful only when they are used, and a large determinant of use is trust. In many cases, systems go unused because of operators' skepticism regarding their trustworthiness. As machines transition from tele-operation toward partial or full autonomy, their capabilities, limitations, and reasoning behaviors will further mystify users and inhibit trust. Experience and continued use of automation can facilitate the development of trust, but the complexity, maintenance demands, and cost of future machines create an environment that is prohibitive to daily real-world training with autonomous systems. These two factors, (a) an inability to understand artificial intelligence (AI) and (b) an inability to train daily, contribute to an atmosphere of mistrust in valuable systems: systems designed to aid the warfighter in mission success. The current research explores how to develop trust in autonomous systems when regular training with them is not possible. The aim is to investigate how trust is developed in a virtual environment and transferred to live execution. Autonomy will consist of AI agents perceived to be created by either automatic or interactive machine learning (ML) techniques. It is predicted that a virtual gaming environment that enables interactive ML (iML) of the autonomous system will facilitate the development of trust in that system for live execution. This paper includes objective and subjective data from field experiments conducted with Infantry Marines at Camp Lejeune, NC, and focuses on gaming environments applicable to iML for facilitating the development of trust in autonomous systems. This research directly supports the Commandant's vision and the US Army's desire to increase the use of unmanned lethal and non-lethal systems in operations.