This year, the US Navy budget for Training and Education increased by $73.9 million to accommodate additional flight training and simulators. These simulators are essential for preparing trainees for scenarios that are rare, dangerous, complex, or expensive to stage in reality. While training simulations have historically run on costly, immobile “big box” simulators, they can now be deployed on consumer-grade immersive virtual reality (VR) head-mounted displays (HMDs). For example, Navy maintenance Airmen use VR HMDs to train on the C-130, saving time and money over live training without loss of training effectiveness. However, one concern when using an HMD for training is communication between the trainer and trainee. Typically, trainers observe a trainee’s progress in a simulation from a monitor that provides a window into the virtual environment. This window lacks cues, such as stereo depth, whose absence can make contextualizing a trainee’s actions difficult. More recently, multi-HMD setups that place the trainer inside the virtual environment alongside the trainee have been introduced. Although this improves communication between trainer and trainee, interactions between avatars may be difficult to interpret, and the trainer’s awareness of trainee interaction with items outside the simulation is obscured. To address these issues, a scoping literature review was performed, exploring the domains of asymmetric VR, substitutional reality, and self-adaptive training systems as means of incorporating human trainers into the virtual scene as active participants and trainee guides. The authors evaluate current innovations in VR collaboration techniques for their impact on trainer-trainee communication in VR simulations, to guide industry and interservice training professionals. Results show that for each current VR collaboration technique, its benefits and deficits to trainer situational awareness must be aligned with the training task.