Training demands require organizations to maximize access to adaptive training through high- and low-fidelity simulations. However, an increase in learning opportunities through simulations must also be accompanied by a high level of efficiency and efficacy of the training system as a whole (Atkinson & Killilea, 2015). To avoid the risk of creating a collection of practice simulators instead of objective-based training environments (Stacy, Merket, Freeman, Wiese, & Jackson, 2005), training simulators must provide a precise yet comprehensible means to express and manipulate measurements and assessments across a range of learning opportunities (Stacy, Ayers, Freeman, & Haimson, 2006). The motivation for the work reported in this paper originated from the need to redeploy measure and assessment software from one training simulation application to another. Given our interest in repurposing investments in learning design, this paper seeks to determine: 1) what approach would best fit interoperable measure and assessment computations, and 2) to what extent the selected approach is adequate to represent the specific measures and assessments we had implemented in our training simulation. The first section briefly presents major interoperable assessment initiatives and concludes that the Human Performance Markup Language (HPML) seems to best fit our need, which is to repurpose measure and assessment computations and make them interoperable. HPML aims at fulfilling this purpose by providing a simple and reusable way to represent the performance of individuals and teams in such systems (Walker, Tolland, & Stacy, 2015). HPML supports the representation of measurements and assessments, and of how they relate to performance and learning data as well as training objectives. 
For instance, the HPML training objective package provides a scalable formal mechanism to document and manage training objectives, their relationships to scenario conditions, and performance measures (Stacy & Freeman, 2016). The second section gives an overview of HPML, followed by a presentation of a target use case: a training simulation for novice ship conning skill acquisition. The third section discusses how some HPML assessment templates can be applied to the use case. The application of HPML to the use case indicated that most of the assessment computations used in the training simulation for novice ship conning skill acquisition could be represented. We also identified a possible extension to HPML for expressing "otherwise" cases in category selection, which would simplify assessment templates. However, the sparse HPML documentation and the small number of available examples at times made it difficult to determine whether our analysis of the use case respected the intention of the HPML standard proposal. In this respect, future work is needed to evaluate how expressive HPML is, and what its limits and boundaries are in the learning technology value chain.
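The "otherwise" case in category selection can be sketched as follows. This is a minimal Python illustration of the underlying idea only, not HPML syntax; the function name, the measure (cross-track error), and the band thresholds are hypothetical examples introduced here for clarity.

```python
# Hypothetical sketch of category selection with an "otherwise" fallback.
# Names, the measure, and thresholds are illustrative, not from the HPML standard.

def select_category(value, bands, otherwise="unsatisfactory"):
    """Return the first category whose predicate accepts the value,
    or the 'otherwise' category when no explicit condition matches."""
    for category, predicate in bands:
        if predicate(value):
            return category
    return otherwise

# Example: banding a ship's cross-track error (meters) into assessment categories.
bands = [
    ("excellent", lambda e: e <= 5.0),
    ("satisfactory", lambda e: e <= 15.0),
]

print(select_category(3.0, bands))   # excellent
print(select_category(40.0, bands))  # falls through to "unsatisfactory"
```

Without an explicit "otherwise" construct, a template must enumerate a catch-all condition for every selection, which is the duplication the proposed extension would remove.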
Published in: I/ITSEC 2017, USA
Publisher: I/ITSEC