The research highlights how the latest large language models (LLMs) can transform a corpus of technical knowledge into a formal ontology and then score the output as a customized expert system. Traditional ontologies support expert systems whose rules humans can audit and explain, lending trust and confidence to predictions. A notable shortcoming of building ontologies from scratch is the initial knowledge transfer from experts and the labor of curating conditional rules and decision trees. The research experimentally examines whether converting unstructured data into structured candidates can accelerate rule extraction beyond simple entity extraction (persons, organizations) or intent prediction (motivations, risk management). Our work highlights the historically significant DoD challenges first spearheaded by investments in massive symbolic artificial intelligence projects like Cyc (from enCYClopedia). The original ontology commitment of labor alone exceeded 1,000 to 3,000 person-years of effort "to describe how the world works." The latest LLMs (such as OpenAI's GPT-3 and Google's PaLM) typically encode roughly 40 terabytes of the world's (internet) knowledge and provide convenient question-and-answer APIs that can export ontology-ready rules and semantic relationships to support deterministic expert systems. The research examines the experimental scalability of this approach for two examples taken from classic military training problems: 1) how to build verifiable medical diagnostics and decision trees to support field doctors; 2) how to build advanced decision aids that fuse situational and threat awareness into commander's dashboards. We evaluate these case studies in terms of both the LLMs and the human questioners who extract the artifacts for building expert systems.
This process potentially remedies the unreliability of existing LLMs by pairing them with human feedback, meeting the otherwise intractable need for predictability in medical or combat decision-making. If the LLMs distill world knowledge, then human inquisitors distill the expert artifacts in reliable and testable ways that remove the indeterminism of existing dialog generators.
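To make the notion of a deterministic, testable expert artifact concrete, the following is a minimal sketch (our illustration, not an artifact from the paper) of the kind of rule base that LLM-extracted, human-verified rules could populate for the field-medicine case study. All rule names, thresholds, and sign labels are hypothetical placeholders; the point is that each rule is an auditable condition-action pair, so identical inputs always yield identical recommendations:

```python
# Hypothetical deterministic rule base for field triage.
# Each rule is a (condition, recommendation) pair that a human
# questioner has audited; evaluation order is fixed, so the system
# has none of the indeterminism of a dialog generator.

def triage(signs: dict) -> str:
    """Return a triage category from observed field signs.
    Sign names and thresholds are illustrative placeholders only."""
    rules = [
        (lambda s: not s.get("breathing", True), "expectant"),
        (lambda s: s.get("resp_rate", 0) > 30, "immediate"),
        (lambda s: s.get("cap_refill_sec", 0) > 2, "immediate"),
        (lambda s: not s.get("obeys_commands", True), "immediate"),
        (lambda s: s.get("ambulatory", False), "minor"),
    ]
    for condition, category in rules:  # first matching rule fires
        if condition(signs):
            return category
    return "delayed"  # default when no rule matches

print(triage({"breathing": True, "resp_rate": 35}))   # immediate
print(triage({"breathing": True, "ambulatory": True}))  # minor
```

Because the rules are plain data, each one can be traced back to the LLM transcript that proposed it and unit-tested in isolation, which is the auditability property the abstract attributes to expert systems.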
Keywords
AI, CONCEPTUAL MODELING, CONTENT GENERATION, KNOWLEDGE COMPONENTS, NATURAL LANGUAGE PROCESSING
Additional Keywords