At I/ITSEC 2019, the authors presented a fully automated workflow for segmenting photogrammetric 3D point clouds/meshes and extracting object information, including individual tree locations and ground materials (Chen et al. 2019). The ultimate goal is to create realistic virtual environments and provide the information necessary for simulation. The generalizability of the previously proposed framework was tested using a database created under the Army’s One World Terrain (OWT) project, which covers a variety of landscapes (various building styles, vegetation types, and urban densities) and data qualities (resulting from different flight altitudes and degrees of overlap between images). Although this database is considerably larger than existing databases, it remains unknown whether deep learning algorithms have truly reached their full potential in terms of accuracy, because sizable datasets for training and validation are currently lacking. Obtaining large annotated 3D point-cloud and 2D image databases is time-consuming and labor-intensive, not only from a data-annotation perspective, in which the data must be manually labeled by well-trained personnel, but also from a raw-data collection and processing perspective. Furthermore, it is generally difficult for segmentation models to differentiate between certain objects, such as buildings and tree masses, and scenes containing such objects do not always exist in the collected datasets. Thus, the objective of this study is to investigate the possibility of using synthetic photogrammetric data as a substitute for real-world data in training deep learning algorithms. The authors investigated methods for generating synthetic UAV-based photogrammetric data that provide a sufficiently large database for training deep learning algorithms, with the ability to enlarge the dataset for scenarios in which deep learning models have difficulties.