Networked simulation development continues its rapid progress, but the development of automated human performance assessment has been almost completely neglected. Typically, subject matter expert opinions and surveys are used to assess human performance. While subjective ratings provide a valuable assessment of overall performance in complex simulation environments, a comprehensive effectiveness evaluation should also include objective measures, both for in-simulator assessment and for transfer to the actual environment. Furthermore, individually assessing numerous skills can exceed the attentional resources of a subject matter expert. Thus, an automated objective skill measurement system is required to properly evaluate the training effectiveness of networked simulations. For example, to track the amount of time an aircraft has spent within different "range rings" around a threat, an automated tool can quickly and precisely capture this information by "listening" to network traffic and calculating distances from the positional information reported by each entity. An automated performance assessment tool could go beyond simple measures such as kill ratios and weapon hit ratios by tracking hundreds of variables, thereby providing objective assessments of individual and team skills quickly and accurately. This paper presents a general methodology for capturing automated objective assessments from a networked simulation environment. Research in this area reveals that, although many objective assessments can be made today, additions to the current DIS/HLA protocol standards would expand the opportunities and methodologies available for measuring individual and team performance. This paper presents results showing that human performance assessment for networked simulations is possible and advocates for new requirements to enable a standardized, extensive suite of measures.
These measures will allow researchers to quantify the learning that has taken place in a networked simulation environment, in terms of both outcome and process measures, and standardization will enable cross-comparison of effectiveness results.
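The range-ring measure described above can be sketched in a few lines of code. The following is a minimal, hypothetical Python illustration, not the paper's implementation: it assumes position updates have already been parsed from network traffic (e.g. from DIS Entity State PDUs) into simple (timestamp, aircraft position, threat position) tuples, uses plain Euclidean distance, and uses illustrative ring radii. Names such as `accumulate_ring_times` are inventions for this sketch.

```python
import math

# Illustrative ring radii in metres (assumed values, not from the paper).
RING_RADII_M = [9_260.0, 18_520.0, 37_040.0]  # roughly 5, 10, and 20 nm

def distance(p, q):
    """Euclidean distance between two 3-D positions in metres."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def ring_index(dist_m, radii=RING_RADII_M):
    """Index of the innermost ring containing dist_m, or None if outside all rings."""
    for i, r in enumerate(radii):
        if dist_m <= r:
            return i
    return None

def accumulate_ring_times(updates, radii=RING_RADII_M):
    """Accumulate the time an aircraft spends inside each range ring.

    updates: iterable of (timestamp_s, aircraft_pos, threat_pos) tuples,
    assumed to be sorted by timestamp. Each interval between consecutive
    updates is attributed to the ring occupied at the interval's start.
    Returns a dict mapping ring index -> seconds spent in that ring.
    """
    times = {i: 0.0 for i in range(len(radii))}
    prev_t, prev_ring = None, None
    for t, aircraft_pos, threat_pos in updates:
        if prev_t is not None and prev_ring is not None:
            times[prev_ring] += t - prev_t
        prev_t = t
        prev_ring = ring_index(distance(aircraft_pos, threat_pos), radii)
    return times
```

For instance, an aircraft reported 5 km from the threat at t=0 s and 12 km at t=10 s would be credited with 10 s in the innermost ring for that interval. A real tool would decode positions from the simulation protocol's entity state messages rather than receive them pre-parsed.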