VLSI and ULSI technologies can now support parallel information-processing methods that, only a few years ago, were models of interest chiefly to a handful of theoretical mathematicians, physicists, neurophysiologists, cyberneticists, and science-fiction writers.
Increasing performance by adding functional parts is so fundamental in nature and in human endeavors that it inevitably had to be applied to computing. When specialized as well as replicated functions are performed in parallel, the result is a distributed process. Applying this principle to computing, where the system parts are called modules, makes it possible to solve very complex problems by divide and conquer: an apparently monolithic entity such as a flight simulator, for instance, can be partitioned into small, discrete, manageable modules. Every distributed system is limited both by the size of the whole and by the size of its individual components, and its functional complexity is bounded by its ability to communicate timely information among those components. Because they differ greatly in the number of specialized modules they contain, functionally distributed General Aviation Trainers and large-scale WSTs differ primarily in their information-flow requirements; the constraining resources are the computer buses, bus networks, and I/O paths.
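As a concrete illustration of this partitioning, the sketch below models a trainer as independent modules that exchange state only through a shared bus structure. This is a minimal assumption-laden example, not the authors' design: the module names (controls, engine, flight, instruments), the bus fields, and the update rates are all hypothetical.

```c
/*
 * Minimal sketch of functional distribution: a simulation task divided
 * into independent modules that communicate only through a shared "bus".
 * Module names and bus fields are illustrative assumptions, not taken
 * from the paper.
 */
#include <stdio.h>

/* Shared state: in a real trainer, each field would travel over a
   physical bus or I/O path -- the constraining resource noted above. */
typedef struct {
    double throttle;    /* written by the controls module */
    double thrust;      /* written by the engine module   */
    double airspeed;    /* written by the flight module   */
    double altitude;    /* written by the flight module   */
} Bus;

/* Each module is a self-contained function of the shared bus --
   the software analogue of one dedicated microprocessor. */
typedef void (*Module)(Bus *bus, double dt);

static void controls(Bus *b, double dt) { (void)dt; b->throttle = 0.8; }

static void engine(Bus *b, double dt) { (void)dt; b->thrust = 10000.0 * b->throttle; }

static void flight(Bus *b, double dt)
{
    b->airspeed += (b->thrust / 5000.0 - 0.01 * b->airspeed) * dt;
    b->altitude += 0.1 * b->airspeed * dt;
}

static void instruments(Bus *b, double dt)
{
    (void)dt;
    printf("speed %6.1f  alt %8.1f\n", b->airspeed, b->altitude);
}

int main(void)
{
    /* The divide-and-conquer partition: modules can be added or
       replaced independently; only the bus format is shared. */
    Module modules[] = { controls, engine, flight, instruments };
    Bus bus = {0};
    const double dt = 0.1;  /* one frame at an assumed 10 Hz rate */

    for (int frame = 0; frame < 5; ++frame)
        for (size_t i = 0; i < sizeof modules / sizeof modules[0]; ++i)
            modules[i](&bus, dt);
    return 0;
}
```

Note that every inter-module dependency in the sketch passes through the single Bus structure, which is exactly why bus and I/O bandwidth, rather than the modules themselves, becomes the limiting factor as the number of specialized modules grows.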
This paper examines the conceptual and philosophical approach that led to the first generation of modular, functionally distributed, microprocessor-based simulators. It deals primarily with technical, computational, and motivational perspectives and with the general systems solutions formulated over the past eight years, and it offers insight into the reasons for taking the critical evolutionary leap necessary for economic and technological progress.