Deep assessment of a student’s performance on cognitive tasks requires understanding not just their actions but also
their rationales. Human instructors often attempt to gain insights into a student’s reasoning through several channels:
they observe patterns of attention or false starts; they listen to the student thinking aloud; or they ask questions that elicit
explicit statements of rationale. Automating such assessments has the potential to increase the effectiveness of
Intelligent Tutoring Systems (ITSs) but can be computationally challenging. There is an additional consideration
when automated tutoring happens in the context of simulation-based training, where the focus is on providing an
immersive experience that is reflective of real-world task performance. Inserting these types of assessments into a
simulation context runs the risk of negatively impacting learner immersion and engagement.
This paper presents an analysis of the trade-offs to consider in designing automated approaches to eliciting
information about student reasoning, and their impact on the development of simulation-based ITSs. There are a
variety of ways in which an ITS may ask students to state their reasoning or explain their actions explicitly in a
manner that can be automatically assessed by the tutor. However, the interface for providing student rationales and
the technology needed to assess these inputs must be carefully considered to avoid pitfalls such as giving away the
problem solution, imposing additional cognitive load, or reducing trainee engagement. This paper illustrates these
trade-offs with practical examples and describes a balanced design approach that was used successfully in an ITS for
a troubleshooting domain. The rationale elicitation technique used in this ITS has received high ratings for usability
and effectiveness in a controlled validation study.