Distributed Mission Operations (DMO) training frequently relies on simulation to accomplish training objectives. Fidelity, broadly defined, encompasses both physical attributes (e.g., ergonomics, switches, symbols) and functional attributes (e.g., dynamics, models, exercising appropriate cognitive skills). On a continuum of possible "fidelity levels," however, what degree of physical and functional fidelity constitutes "high fidelity" or "low fidelity"? What standard is employed, and measured against, to assign such labels? And, most importantly, what training trade-offs arise when higher fidelity is sacrificed for lower cost? That is, in efforts to lower costs (and therefore fidelity), which training experiences are most sacrificed, and how is that documented? In this paper we outline a method for evaluating simulation fidelity based upon a comprehensive list of warfighter-defined experiences critical to performing the job: a proposed basis upon which simulation systems can be judged and compared. This warfighter-centric approach leverages two credible processes/products already in existence: the Mission Essential Competencies (MECs) and the Dash One Emergency Procedures (EPs). During the MEC process, operational warfighters determine the critical list of mission experiences necessary to be fully prepared for combat, while the Dash One lists the critical EPs with which a warfighter must be familiar. Combining these products yields a warfighter-anchored set of critical evaluation points for simulator systems. Using this method, we also report trade-off results between a mature "high-fidelity" and an early deployable "low-fidelity" F-16 four-ship DMO environment, and we discuss how each system's fidelity level clearly affects its ability to provide training on various tactical and EP experiences.