Historically, Computer Generated Forces (CGF) systems have made runtime decisions via prescriptive mechanisms such as Finite State Machines (FSMs) and Rule-Based Systems (RBSs). These mechanisms are the result of a complex and time-consuming process of Knowledge Acquisition and Knowledge Engineering (KA/KE). The artifacts of the KA/KE process are then turned over to programmers to implement, and the result is often a large, complex, and brittle set of hard-coded behaviors. The advantage of these approaches is that the entities execute the preprogrammed behaviors faithfully and fairly efficiently; the downside is that it is often quite difficult to modify the behaviors to account for new events, stimuli, or situations. To address these issues we have looked to machine learning, specifically Evolutionary Algorithms (EAs), to make decisions that have historically been hard coded in the FSM or RBS constructs. The use of EAs is not new to the CGF community; however, the vast preponderance of their use has been in a priori offline runs to develop a rule base, plan of attack, or path, largely because of the computational cost of running an EA. While increases in processing speed have not made performance considerations irrelevant, they have fundamentally changed the development-versus-runtime cost equation. It is with this in mind that we chose to investigate the use of EAs to make selected decisions at runtime. Specifically, we developed a proof-of-principle system to select the engagement rules and target priorities for a tank platoon in a given tactical situation. Rather than determining the engagement process prescriptively, the EA subsystem randomly generates a set of shooter/target pairings and weapon selections. It then evaluates the candidate engagements using a polynomial function composed of the proximity and obscuration of the entities, supporting fires, and lethality.
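The pairing-and-scoring step described above can be sketched as follows. This is a minimal illustration, not the authors' actual code: the factor weights, the normalization of each factor to [0, 1], and all function and parameter names are assumptions made for the example.

```python
import random

# Assumed illustrative weights for the four factors named in the text;
# the paper's actual polynomial and weightings are not given here.
WEIGHTS = {"proximity": 0.4, "obscuration": 0.25,
           "supporting_fires": 0.15, "lethality": 0.2}

def random_engagement(shooters, targets, weapons):
    """Randomly pair each shooter with a target and a weapon selection."""
    return [(s, random.choice(targets), random.choice(weapons))
            for s in shooters]

def score(engagement, factors):
    """Rate one engagement scenario with a weighted sum of factor scores.

    `factors(pair)` is assumed to return a dict of normalized [0, 1]
    values for proximity, obscuration, supporting fires, and lethality.
    """
    total = 0.0
    for pair in engagement:
        f = factors(pair)
        total += sum(WEIGHTS[k] * f[k] for k in WEIGHTS)
    return total
```

Keeping the scoring in one small function is what makes the approach easy to retune: changing the tactical emphasis is a matter of editing the weight table rather than rewriting hard-coded behaviors.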
The highest-rated engagements, along with newly generated modifications of them, are carried forward to the next generation by the EA subsystem. This evaluation process is repeated for a given number of generations; at the end of those generations, or when a figure of merit is reached, the best engagement scenario is chosen as the course of action. The main advantage of this approach is the relatively small amount of code needed to implement the EA mechanisms and the evaluation function, which can be easily changed to account for new weightings of the factors. Thus, a whole series of target selections can be made with a relatively compact, flexible code base. This paper covers the development of the proof-of-principle system and the results of the test runs. Specifically, we focus on three factors: the number of engagement scenarios created per generation, the number of generations, and the evaluation function. Through the interaction of these three factors, we show how the engagement scenarios evolved to suit the tactical scenario. Key among the considerations is the time it takes the system to produce a viable target list. From these results, we extrapolate where it is appropriate to use EAs as a means of reducing development cost and simplifying code. This is the second in a series of papers addressing the use of EAs in real-time simulation systems; the first focused on the ability to change formations based upon the detection of a threat.
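The generational loop just described — keep the best engagements, breed mutated copies of them, and stop after a fixed number of generations or once a figure of merit is reached — can be sketched generically. All parameter names and defaults here are illustrative assumptions, not values from the paper.

```python
import random

def evolve(population, score_fn, mutate, n_keep=4, n_children=16,
           max_generations=50, figure_of_merit=None):
    """Sketch of the EA loop: elitist selection plus mutation.

    `population` is a list of candidate engagement scenarios,
    `score_fn` rates one scenario, and `mutate` returns a modified
    copy of a scenario. Stops early if the best score reaches the
    optional `figure_of_merit`.
    """
    for _ in range(max_generations):
        ranked = sorted(population, key=score_fn, reverse=True)
        elite = ranked[:n_keep]  # highest-rated engagements carried forward
        if figure_of_merit is not None and score_fn(elite[0]) >= figure_of_merit:
            break
        # Next generation: the elite plus mutated variations of them.
        population = elite + [mutate(random.choice(elite))
                              for _ in range(n_children - n_keep)]
    return max(population, key=score_fn)
```

Because the loop itself is only a dozen lines, runtime cost is dominated by the evaluation function, which is why the paper's three test factors (scenarios per generation, generation count, and the evaluation function) govern how quickly a viable target list emerges.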
An Application of Real Time Evolutionary Algorithms