There are two main issues that must be addressed when building a composable simulation system. The most obvious is identifying the functional granularity of the composition. The choice of granularity defines the modules of the system and therefore its building blocks. Compatibility across the functional composition boundaries can best be thought of as syntactic consistency (e.g., two systems can exchange agreed-upon data in a clear and unambiguous manner). The semantic granularity of the system, the second and often initially overlooked issue, is where concepts, assumptions, and levels of interaction begin to separate the system components. These divisions go a step further, beginning to define groups of modules that "make sense" together. Even though all the components might be able to exchange data with one another, the key is to exchange meaningful information in a flexible and timely manner. It is this capability that leads to a truly composable system.
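The distinction between syntactic and semantic consistency can be illustrated with a small sketch (the names and units below are hypothetical, not drawn from any particular CGF system). Two components agree on a message format, so the exchange is syntactically valid, yet they disagree on the meaning of a field:

```python
# Hypothetical sketch: two components that are syntactically compatible
# (both exchange a SensorReport with a float range field) but semantically
# mismatched (one emits kilometers, the other assumes nautical miles).

from dataclasses import dataclass

@dataclass
class SensorReport:
    target_id: int
    range_value: float  # the unit is not part of the syntactic contract

def radar_model() -> SensorReport:
    # Emits range in kilometers.
    return SensorReport(target_id=7, range_value=12.0)

def weapons_model(report: SensorReport) -> bool:
    # Assumes range in nautical miles; engages inside 10 nm.
    return report.range_value < 10.0

report = radar_model()
# Syntactically valid exchange, semantically wrong decision:
# 12 km is roughly 6.5 nm, inside the 10 nm envelope, yet the
# raw-number comparison says the target is out of range.
print(weapons_model(report))  # → False
```

The data crosses the boundary cleanly, but the information exchanged is not meaningful; semantic granularity is what surfaces this class of mismatch.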
There are similar architecture-related discussions concerning the computational and communication model of the system and the target languages to be used. In an ideal world, none of these other issues should affect the design of the system. In practice, however, they are often the overriding determinants. The implementation, communication, and computational models tend to limit the resource allocation available to the software components. These limitations feed back to the designers and further constrain the architecture.
This paper examines composability from an architectural perspective with Computer Generated Forces (CGF) as the target domain. We discuss an inversion of the traditional approach of first bounding the problem by decomposing the documented requirements, designing an architecture based upon required model functionality, and then negotiating interfaces based on model algorithmic needs. While this approach has worked in the past, very often after the system is operational a new model requirement is added that violates the architecture's precepts. Over time, this has caused many a system to become bloated and brittle. Instead, we start by stepping back from any specific model algorithmic requirements. This allows us to develop a component architecture that characterizes the information flows between notional categories of system components, not the specific implementation or functionality of the modules within a component. Some of these information flows are time-critical and high bandwidth, while others are broadcast, low bandwidth, and not time-critical. By building a system that supports the necessary types of data flows between categories of modules, a generalized interconnection context is developed. Furthermore, since the various model components are based upon the data flows rather than specific algorithm-centric views, we can compose and extend them as needed by the specific system application.
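A minimal sketch of this flow-centric view follows; the class and flow names are illustrative assumptions, not an interface defined by the architecture described here. Components attach to a bus by declaring the category of flow they produce or consume, so the interconnection is characterized by the flow's delivery properties rather than by any model's algorithm:

```python
# Hypothetical sketch: links between components are typed by data-flow
# category (latency and bandwidth characteristics), not by model function.

from enum import Enum, auto
from collections import defaultdict

class FlowClass(Enum):
    TIME_CRITICAL_HIGH_BW = auto()  # e.g., per-tick entity state updates
    BROADCAST_LOW_BW = auto()       # e.g., scenario-wide event notices

class Bus:
    """Routes messages by flow category, fanning out to subscribers."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, flow: FlowClass, handler):
        self.subscribers[flow].append(handler)

    def publish(self, flow: FlowClass, message):
        # A real system would dispatch according to the flow's quality-of-
        # service contract; here we simply fan out synchronously.
        for handler in self.subscribers[flow]:
            handler(message)

bus = Bus()
received = []
bus.subscribe(FlowClass.BROADCAST_LOW_BW, received.append)
bus.publish(FlowClass.BROADCAST_LOW_BW, "weather: visibility reduced")
print(received)  # → ['weather: visibility reduced']
```

Because a new model component only declares which flow categories it uses, it can be composed into the system without renegotiating algorithm-specific interfaces.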