Because today’s users demand realism and high-fidelity experiences, rendering virtual environments is a computational challenge for lightweight computing platforms (e.g., mobile devices). Traditional simulated environments typically use as much processing power as is available to render the entire scene in high detail, limiting simulations to higher-end computers. One approach to reducing the processing demands of three-dimensional models is to use a varying, decreased level of detail (LOD) for distant representations (Sik & Pattanaik, 2011). This research attempts to further optimize resources by expanding the adaptive LOD approach to account for an object’s location in the field of view (FOV) in addition to its distance. Such FOV adaptation would take advantage of state-of-the-art head and gaze tracking capabilities. This paper presents results from an initial investigation focused on identifying the minimal LOD to which objects can be reduced before they become unrecognizable. A simulation was designed
that presented randomized sets of objects at various LODs. Subjects were asked to choose an object from the group based on an on-screen prompt. The speed and accuracy of each subject’s response were recorded to determine the LOD at which recognition did not differ from that of full-detail objects. The researchers concluded that the minimum LOD required for recognition without sacrificing speed or accuracy lies between 20% and 80%, depending on the shape and distinct features of each object. Specific levels of detail were determined for six objects of differing feature complexity for use in further research studies.
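The distance-plus-FOV LOD selection described above can be sketched as follows. This is a minimal illustration only: the linear falloff, the `max_distance` and `fov_half_angle_deg` parameters, and the 20% detail floor are assumptions chosen for the example, not values or methods taken from the study.

```python
def select_lod(distance, angle_from_gaze_deg,
               max_distance=100.0, fov_half_angle_deg=60.0):
    """Return an LOD fraction (1.0 = full detail) from an object's
    distance and its angular offset from the gaze direction.

    All thresholds here are illustrative assumptions.
    """
    # Distance term: detail falls off linearly toward max_distance.
    d = max(0.0, 1.0 - distance / max_distance)
    # FOV term: objects far from the gaze center get less detail.
    f = max(0.0, 1.0 - angle_from_gaze_deg / fov_half_angle_deg)
    lod = d * f
    # Recognition held down to roughly 20% detail for the simplest
    # objects in the study, so never drop below that floor here.
    return max(lod, 0.2)
```

For example, an object at the gaze center and zero distance keeps full detail (`select_lod(0.0, 0.0)` returns `1.0`), while a distant object at the FOV edge falls to the 20% floor.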