Natural language increases the flexibility of communication with computers because the human half of the interface is inherently efficient. In particular, a natural language interface that processes speech provides an efficient means of communication when a user's eyes and hands are occupied. Human-machine dialogue, whether typed or spoken, allows humans to interact effectively with computer systems whose growing capabilities make them increasingly complex to use. For these reasons, a user interface that can process natural language has the potential to simplify an overly complex and unfriendly working environment.
Natural Language Processing (NLP) builds meaningful sentences from the basic semantic building blocks of language: noun and verb phrases [1]. Further, the semantic and pragmatic context of a given group of messages can be restricted to domain-specific information about events. This contextual information is then used to analyze a sentence in terms of the restricted set of meanings it may have within the given situation. Current speech recognition systems rely on a constrained syntax, whereas context-based NLP relinquishes that bounding syntax. Combining a highly accurate speech recognizer with natural language processing harnesses the capabilities of both components. This paper discusses the experimentation with, and integration of, speech recognition and natural language components, leading to real-time, continuous, context-correct, unconstrained speech recognition for training applications.
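As a rough illustration of the integration described above, the following sketch (a hypothetical Python pipeline, not the system reported in this paper) shows how an unconstrained recognizer hypothesis might be scanned for noun and verb phrases and then matched against a small, domain-restricted set of event meanings; the function names, the toy phrase lists, and the example event lexicon are all assumptions introduced purely for illustration.

```python
# Hypothetical sketch: domain-restricted interpretation of a recognizer hypothesis.
# The word lists and event lexicon below are illustrative assumptions, not the
# actual grammar or domain model used by the system described in this paper.

# A toy domain lexicon restricting which verb/noun pairs form valid event meanings.
DOMAIN_EVENTS = {
    ("set", "course"): "NAVIGATION_CHANGE",
    ("report", "contact"): "SENSOR_REPORT",
    ("increase", "speed"): "SPEED_CHANGE",
}

VERBS = {"set", "report", "increase"}
NOUNS = {"course", "contact", "speed", "weather"}


def extract_phrases(hypothesis: str):
    """Very rough verb/noun spotting over an unconstrained recognizer word string."""
    words = hypothesis.lower().split()
    verbs = [w for w in words if w in VERBS]
    nouns = [w for w in words if w in NOUNS]
    return verbs, nouns


def interpret(hypothesis: str):
    """Map the hypothesis onto the restricted set of domain event meanings."""
    verbs, nouns = extract_phrases(hypothesis)
    for v in verbs:
        for n in nouns:
            event = DOMAIN_EVENTS.get((v, n))
            if event is not None:
                return event   # context-correct interpretation found
    return None                # hypothesis falls outside the restricted domain


if __name__ == "__main__":
    # e.g. an unconstrained recognizer hypothesis containing a filler word
    print(interpret("uh set the new course now"))  # -> NAVIGATION_CHANGE
    print(interpret("report the weather"))         # -> None (not a domain event)
```

Even in this toy form, the sketch reflects the division of labor suggested above: the recognizer is free to emit unconstrained word strings, while the context-based interpretation step discards readings that fall outside the restricted domain.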