Much work has been done in the modeling and simulation community to capture expert knowledge as human behavior representations (HBRs) when developing behavior models for synthetic forces. A pervasive problem in the use of synthetic forces is their brittleness when exposed to massively complex simulation systems. Although synthetic-force behavior models may accurately encode a great deal of expert knowledge, their lack of broader commonsense knowledge frequently leads to suboptimal performance. When the models are used for training purposes, these failures can lead to negative training and re-engineering downtime. When they are deployed in real-world settings, such as Uninhabited Aerial Vehicle (UAV) control, these failures can have far more serious and costly consequences.
This paper describes a methodology for imbuing synthetic forces with a capacity for commonsense reasoning through diagnosis of error symptoms detected during continuous self-monitoring. The methodology is part of the broader 'Recourse' architecture for robustness in behavior modeling. Behavior models are instrumented with a self-monitoring capability using qualitative-reasoning-based tests of domain-specific parameters. Failed tests are categorized as potential symptoms, which can be diagnosed to suggest atomic, effector-based recovery actions. This paper also describes some of the trade-offs and issues encountered during the design of the methodology and provides insight into how it could be applied to behavior models in general. A prototype implementation of the methodology was applied to TacAir-Soar, a very large rule-based system that produces cognitively plausible behaviors of military aviators.
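The monitor-to-recovery pipeline sketched in the abstract (qualitative tests → symptoms → diagnosis → recovery actions) can be illustrated in miniature. The sketch below is purely illustrative: the class and table names, the example parameters (`altitude`, `fuel_rate`), and the recovery-action names are invented for this example and are not taken from the Recourse or TacAir-Soar implementations.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class Symptom:
    test: str     # name of the failed self-monitoring test
    value: float  # observed value that violated the qualitative expectation

# Qualitative-style tests on domain-specific parameters: each checks a
# qualitative relation (sign, ordering) rather than an exact quantity.
# (Parameter and test names are invented for illustration.)
TESTS: List[Tuple[str, str, Callable[[float], bool]]] = [
    ("altitude_nonnegative", "altitude",  lambda v: v >= 0.0),
    ("fuel_rate_decreasing", "fuel_rate", lambda v: v <= 0.0),
]

# Diagnosis table mapping a symptom to an atomic, effector-based
# recovery action (again, hypothetical names).
RECOVERY: Dict[str, str] = {
    "altitude_nonnegative": "command_climb",
    "fuel_rate_decreasing": "abort_refuel_procedure",
}

def monitor(state: Dict[str, float]) -> List[Symptom]:
    """Continuous self-monitoring: run every qualitative test against
    the current state; each failure becomes a potential symptom."""
    return [Symptom(name, state[param])
            for name, param, ok in TESTS
            if not ok(state[param])]

def diagnose(symptoms: List[Symptom]) -> List[str]:
    """Map each detected symptom to a suggested recovery action."""
    return [RECOVERY[s.test] for s in symptoms]

# The altitude test fails for this state, so diagnosis suggests the
# corresponding recovery action.
state = {"altitude": -120.0, "fuel_rate": -0.4}
actions = diagnose(monitor(state))
```

In a full behavior model the diagnosis step would of course be richer than a lookup table, but the shape of the loop (monitor, categorize failures as symptoms, map symptoms to recovery actions) matches the methodology described above.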