In gaming, training, and video directing, high costs and labor-intensive workflows limit the transformation of an initial abstract concept into a final product, whether a film or interactive software. This paper investigates the steps required to translate rough instructions into polished instructional materials and, where possible, applies machine learning to automate the transformation from script to movie. The DoD repositories of archived training manuals, reports, and lessons learned contain a vast and varied mixture of step-wise instructions and industrial-style line drawings. We model the AI input as a human-machine interchange, augmenting the role of the human auteur with an automated associate director. The example training builds on a long military tradition of fielding portable kits (e.g., the mess kit or first-aid bag), augmented in our case by modern AI tools that script a set of original instructions using natural language processing. We apply custom text-to-speech tools to dub audio tracks and narrate the required assembly steps. Using generative adversarial networks as the visually creative element, we render the required visual representations from the initial script as single images, then animate reels and entire vignettes for video storytelling. To evaluate output quality, we build a complex multi-stage set of instructions for the use case of fielding an optical microscope, with both virtual modeling and real test scenarios. The original training material walks a soldier or medic through upgrading from a Vietnam-era field microscope (Model 3050) to the Lego Microscope first developed at UC San Francisco and later enhanced by IBM and biophysicists at the University of Göttingen. We chose this case study because it pairs an AI-inspired modular approach to completing complex human tasks with real-world building blocks (Legos) and readily available designs.
Keywords
AUTHORING TOOLS, AUTOMATION, CONTENT GENERATION, EMERGING TECHNOLOGIES