It is widely accepted that the difficulty and expense of acquiring the knowledge behind tactical behaviors have been a limiting factor in the development of simulated agents representing adversaries and teammates in military and game simulations. Several researchers have addressed this problem with varying degrees of success. The difficulty lies largely in the fact that tactical knowledge is hard to elicit and represent through interactive sessions between the model developer and the subject matter expert. This paper describes a machine learning approach that evolves tactical agents from automatic observation of a human performing a mission. The approach, called Genetic Context Learning (GenCL), employs Genetic Programming in conjunction with Context-based Reasoning. Prior research collected data from a simulator and built a prototype that demonstrated the feasibility of the approach. In contrast, the present research collected data from live exercises in which two tank platoons were engaged in combat. The GenCL algorithm learned the behavior of the tanks, and the evolved models were incorporated into a simulator intended for use in a support system for exercise evaluation. When data collection moves from a simulated environment to real-world problems, the degrees of freedom increase dramatically, opening new questions and areas for future research. Nevertheless, this work makes two major contributions: 1) the GenCL algorithm can automatically create human behavior models from pre-processed data collected from real exercises; and 2) the results indicate that humans exhibit behavior patterns that can be modeled in a contextual manner.