Tests, simulations, and other modes of assessment are often used to capture large amounts of data from learners regarding the knowledge and skills that were trained. These data can come in various formats and from various assessment types, such as knowledge checks, behavioral checklists, or rating scales. A wide range of tools can be used to analyze the data captured from these assessments, each of which has its own assumptions, benefits, and costs. Applying appropriate analytical tools to these data is key to making accurate, data-driven decisions about learners. The purpose of this paper is to provide an overview of an analytical approach called Item Response Theory (IRT; Lord & Novick, 1968) that can estimate learner proficiency; discuss its benefits compared to traditional approaches; and describe how it can be used with an interoperable data format called the Experience Application Programming Interface (xAPI; HT2Labs, 2019). We also provide practical guidance on how the reader can begin incorporating IRT-based analyses into their own efforts. All software code used in developing this paper is freely available from the first author upon request.
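
To make the idea of IRT-based proficiency estimation concrete, the following is a minimal sketch (not the authors' code) of maximum-likelihood estimation of a learner's proficiency under the standard two-parameter logistic (2PL) IRT model, in which the probability of a correct response to item i is P_i(theta) = 1 / (1 + exp(-a_i(theta - b_i))). The item parameters and response vector below are hypothetical values chosen purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical item parameters: a = discrimination, b = difficulty
a = np.array([1.2, 0.8, 1.5, 1.0, 0.6])
b = np.array([-1.0, 0.0, 0.5, 1.0, 1.5])

# Hypothetical scored responses from one learner (1 = correct, 0 = incorrect)
responses = np.array([1, 1, 1, 0, 0])

def neg_log_likelihood(theta):
    """Negative log-likelihood of the responses under the 2PL model."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))  # 2PL response probabilities
    return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

# Find the proficiency (theta) that maximizes the likelihood
result = minimize_scalar(neg_log_likelihood, bounds=(-4, 4), method="bounded")
print(f"Estimated proficiency (theta): {result.x:.2f}")
```

In practice, item parameters are first calibrated from a large sample of responses (often with dedicated IRT software) rather than assumed known as they are in this sketch; the point here is only to show that, once items are calibrated, a learner's proficiency can be estimated from their response pattern rather than from a simple sum score.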