DDM Explained: Lessons for Data Distribution Management Developers and Strategists

The Data Distribution Management (DDM) service provided through the High Level Architecture (HLA) is a powerful data filtering tool. In its simplest form, it can reduce the amount of data a federate must process. With careful design, it can go further, implementing filtering behavior at the architecture level that federates would otherwise have to code themselves. However, extending a federation to use DDM adds layers of conceptual and technical challenges, including scenario planning, network configuration, and more complex software implementation. The Army Capabilities Integration Center (ARCIC)-led OmniFusion and NetBCT 2009 experiments used DDM to support an ambitious set of requirements, including modeling large entity counts in a heterogeneous distributed simulation environment. While these experiments only scratched the surface of DDM's full potential, integration testing revealed a wealth of lessons learned that could benefit anyone developing DDM functionality or planning a federation-wide DDM strategy. This paper details these lessons learned and presents them in a way that makes the material approachable for DDM beginners and veterans alike. As a starting point, the paper discusses basic DDM concepts, uses, and Application Programming Interfaces (APIs). However, it focuses primarily on more advanced topics, such as designing an effective DDM strategy, solving specific implementation issues, troubleshooting, and improving run-time efficiency. It also examines the cultural issues ARCIC encountered while implementing DDM, as well as reasons why DDM is not always a viable solution.