The integration of game engine technology into flight simulation software has brought about a significant shift in the level of realism that can now be achieved, driven primarily by advances in rendering and graphics. As a result, pilots, trainers, and flight training schools are increasingly adopting this hybrid technology, as reflected in the growing demand for Extended Reality (XR) training systems. In XR simulation software, visual fidelity is paramount because it allows trainees to immerse themselves in the simulation's intricate details. As this trend continues, it becomes crucial to develop techniques for analyzing which visual elements in a scene contribute to the realism and effectiveness of the simulation.
In this paper, we explore a technique that combines feature matching and object detection to perform a comparative analysis between a game-engine-based simulation and real flight visual data. Feature matching identifies distinctive keypoints (local features) in each image and then finds corresponding matches between these keypoints across images. In addition, we apply deep learning-based object detection to both datasets to identify instances of visual object classes that are of particular importance to the flight simulation and training community. This combined approach offers a more conclusive method for assessing how features map between the two datasets. We use both standard and feature-engineered metrics to evaluate the effectiveness and success rate of feature extraction on two sets of sample data, as well as the accuracy of keypoint matching. We anticipate that this paper will serve as a foundation for further research in this field.
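To illustrate the feature-matching step, the sketch below matches descriptor vectors between two images using a nearest-neighbour search with a ratio test (in the style of Lowe's ratio test). The descriptor values here are toy assumptions for demonstration only; a real pipeline would first extract descriptors with a detector such as SIFT or ORB (e.g., via OpenCV) and then match them in the same way.

```python
# Minimal sketch of descriptor matching with a nearest-neighbour
# ratio test. The descriptors below are illustrative toy data, not
# real image features.

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Return index pairs (i, j) where desc_a[i]'s nearest neighbour
    in desc_b passes the ratio test against the second-nearest."""
    matches = []
    for i, da in enumerate(desc_a):
        # Euclidean distance from da to every descriptor in desc_b,
        # sorted so dists[0] is the nearest and dists[1] the runner-up.
        dists = sorted(
            (sum((x - y) ** 2 for x, y in zip(da, db)) ** 0.5, j)
            for j, db in enumerate(desc_b)
        )
        # Accept the match only if the best candidate is clearly
        # better than the second-best (reduces ambiguous matches).
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy descriptors: the two simulation features correspond to the
# first two real-flight features; the third real feature is a distractor.
sim = [(0.0, 1.0), (5.0, 5.0)]
real = [(0.1, 1.1), (5.1, 4.9), (20.0, 20.0)]
print(match_descriptors(sim, real))  # → [(0, 0), (1, 1)]
```

The ratio test is what makes the matching robust: a keypoint whose best match in the other dataset is only marginally closer than its second-best match is discarded as ambiguous rather than kept as a false correspondence.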
Keywords
DEEP LEARNING; GAME TECHNOLOGY
Additional Keywords
Computer Vision