Measuring a Moving Target: Validating Deployed Training Courses

The Veterans Benefits Administration implemented a new requirement to validate the effectiveness of new or revised e-learning courseware after deploying it to the field, using the total target-audience population as the sample. In the past, validation occurred in a controlled environment with a small sample of the population before the course was fielded, using the U.S. Army’s Sequential Validation or Fixed Validation methodologies. Because of the push to deploy required entry-level and refresher/recurring training more quickly and at lower cost without sacrificing quality, course-effectiveness validation is now conducted post-deployment. This mandate poses several challenges, one of which is determining whether a training course is effective when it is deployed to the field and completed by government employees who must simultaneously meet fast-paced daily production requirements in a high-stress work environment. This paper reports how an argument-based approach is being assessed as an alternative courseware validation process, one that provides practical evidence to support reasoned, data-driven interpretations and conclusions about the effectiveness of a deployed course. The approach combines qualitative and quantitative data to build reasoned arguments for evidence-based interpretations of the data. The paper discusses how this argument-based framework for measuring, analyzing, and reporting validation results is evolving to support reasoned determinations about the effectiveness of deployed e-learning products used in uncontrolled work environments.