Creating a high-fidelity virtual environment for flight simulators can be both expensive and labor intensive. Aircraft simulators rely heavily on geo-specific imagery to provide the trainee with a high-fidelity virtual environment rich in two-dimensional (2D) visual cues. It is well known that adding three-dimensional (3D) models to this synthetic environment enhances the visual cues that enable the perception of depth and motion. Alignment of 3D features with the underlying imagery is crucial to avoiding visual distractions, especially at low altitudes. Constraints on hardware performance and budget limit the quantity and quality of 3D features that can be included in the virtual environment. This paper presents an innovative automated process that leverages commonly available geospatial data sources and graphics processing unit (GPU) technologies to enhance visual cues. The design goal is a framework that is largely automated and able to process a wide variety of geospatial datasets, including imagery, material classification, digital elevation models, and vector data. The result of our work is twofold. First, we developed a generic framework for extracting and processing height and feature information from commonly available geospatial data. Second, we developed a run-time rendering component that is independent of the image generator. This run-time component is designed as a drop-in module that uses the power of modern commercial off-the-shelf GPUs and OpenGL 4.0 to achieve consistent rendering performance regardless of how many features are visible in the scene. Furthermore, we expect that this run-time technique can be used to enhance feature content in existing terrain databases and to bypass current image generator system bandwidth limitations.
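The alignment step described above can be illustrated with a minimal sketch. This is not the paper's implementation: the function names, the row-major grid convention, and the use of bilinear interpolation are assumptions chosen for demonstration, showing only the core idea of sampling a digital elevation model (DEM) at 2D vector-feature footprints so that 3D models sit on the terrain surface.

```python
# Illustrative sketch (assumed names and conventions, not the paper's code):
# align 2D vector-feature footprints with a DEM by sampling terrain height.

def sample_elevation(dem, x, y):
    """Bilinearly interpolate a height from a row-major DEM grid.

    (x, y) are continuous grid coordinates; dem[row][col] is elevation.
    """
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, len(dem[0]) - 1)   # clamp to grid edge
    y1 = min(y0 + 1, len(dem) - 1)
    fx, fy = x - x0, y - y0             # fractional offsets in the cell
    top = dem[y0][x0] * (1 - fx) + dem[y0][x1] * fx
    bottom = dem[y1][x0] * (1 - fx) + dem[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

def place_features(dem, footprints):
    """Attach a terrain-aligned base elevation to each 2D footprint."""
    return [(x, y, sample_elevation(dem, x, y)) for (x, y) in footprints]
```

In a production pipeline the footprints would come from vector datasets and the DEM from georeferenced rasters, but the principle is the same: every feature is anchored to the interpolated terrain height at its footprint, which avoids the floating or buried models that are most distracting at low altitudes.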
This novel workflow and rendering approach has the potential to raise the bar for high-complexity, high-fidelity virtual environments for real-time training simulators while lowering overall database acquisition cost.