Many in the simulation industry believe the only way to ensure correlation across multiple simulation training domains (e.g., space to urban) is to impose heavy structure on the source data used to build those training scenes. This paper describes the use of new real-time concepts to achieve correlation across multiple simulation domains by directly using source data in its native format, found scattered on your disk or network, instead of the elaborate and often restrictive "common/master" formats being proposed worldwide.
In parallel, the paper presents new concepts for conceptualizing and implementing the multiple-domain correlation problem, employing real-time scene-construction techniques to reach the next level of multiple-application reuse, scalability, and correlation.
Central to the paper is the argument that source data alone, no matter how many attributes it carries or how elaborately it is stored, is not enough to ensure a multiple-domain, or even a multiple-system, correlated representation with efficient use of resources and proper presentation in each system. Many of the decisions that are really better determined in real time are statically compiled into the 3D terrains and cannot be recovered by selecting a different level of detail (LOD) from a large selection of pre-built static LODs. An example of an important dynamic variable is screen resolution, which factors into the calculations that determine the number of LODs and the ranges at which 3D objects, terrain, and imagery transition in and out of the 3D scene, so that they do not create negative training by distracting the student with popping. Discover how new concepts fueled by multiple CPUs/GPUs come together to create 3D scenes completely on the fly for multiple domains, and how building terrain completely on the fly opens the door to new opportunities in cost savings, capabilities, and scene fidelity across multiple domains at the same time.
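To make the role of screen resolution concrete, the sketch below shows one common way a runtime could derive an LOD transition range from display resolution and field of view using a screen-space geometric-error metric. This is an illustrative sketch only; the function name and the pixel-error threshold are hypothetical and not taken from the paper.

```python
import math

def lod_switch_distance(geometric_error_m, screen_height_px, fov_y_deg,
                        max_pixel_error_px=2.0):
    """Distance (meters) at which an LOD's geometric error (meters)
    projects to max_pixel_error_px pixels on screen.

    A hypothetical helper: solves pixel_error = error * k / distance
    for distance, where k converts world-space size at unit distance
    into pixels for a perspective projection.
    """
    # Pixels per meter at a distance of 1 m, for the given vertical FOV.
    k = screen_height_px / (2.0 * math.tan(math.radians(fov_y_deg) / 2.0))
    # Beyond this distance the coarser LOD's error is imperceptible
    # (under the chosen pixel-error budget), so it is safe to switch.
    return geometric_error_m * k / max_pixel_error_px

# A 1 m geometric error on a 1080-pixel-tall, 60-degree-FOV display:
d_1080 = lod_switch_distance(1.0, 1080, 60.0)
# The same content on a 4K (2160-pixel-tall) display must switch at
# twice the distance -- which is why baking ranges into static LODs
# for one target display breaks correlation on another.
d_2160 = lod_switch_distance(1.0, 2160, 60.0)
```

The point of the sketch is that the switch distance scales linearly with vertical resolution: ranges precomputed for one display are wrong for every other, which motivates computing them at run time.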