A central question for social interaction is under what circumstances the exchange of information leads to the spread of misinformation (incorrect information), and how that spread can be stopped. What remains unclear is which network variables matter most for curtailing the spread of misinformation. Specifically, if we were trying to stop the spread of misinformation within certain network types (i.e., clustered, small-world, or scale-free), which network elements should we consider most important, given that we may not know where the misinformation originates? We therefore pose the research question: what are the relationships between network types and misinformation-spread intervention types? Using simulated models, we find that only in the small-world network setting is there a statistically significant difference in the misinformation spread rate among the four intervention types (random placement, and targeting based on degree centrality, betweenness centrality, and closeness centrality). We also find that the misinformation spread rate differs across the three network settings only for the closeness-centrality-targeted intervention type, and not for the other types. Finally, we apply this model to a virtual-world training scenario in which basic social network principles are taught to help soldiers recognize how to infiltrate networks that may cause misinformation spread.
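As a rough illustration of the kind of simulation described above (not the authors' actual model), the sketch below builds a small-world network, "inoculates" the top-k nodes under each intervention so they cannot pass misinformation on, and measures average cascade size under a simple independent-cascade spread process. The graph size, rewiring probability, transmission probability, and number of inoculated nodes are all illustrative assumptions, and betweenness-centrality targeting is omitted for brevity:

```python
import random
from collections import deque

def watts_strogatz(n, k, p, rng):
    """Small-world graph: ring lattice with each edge rewired with prob p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):                      # ring lattice: k/2 neighbors each side
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):                      # rewire forward edges with prob p
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + j) % n
                choices = [v for v in range(n) if v != i and v not in adj[i]]
                if choices:
                    new = rng.choice(choices)
                    adj[i].discard(old); adj[old].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def degree_centrality(adj):
    # raw degree; only the ranking matters for targeting
    return {v: len(nb) for v, nb in adj.items()}

def closeness_centrality(adj):
    # BFS from each node; closeness = (reachable - 1) / sum of distances
    scores = {}
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total = sum(dist.values())
        scores[s] = (len(dist) - 1) / total if total else 0.0
    return scores

def spread(adj, blocked, seed, beta, rng):
    """Independent cascade: each informed node tries each neighbor once."""
    if seed in blocked:
        return 0
    informed, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in informed and v not in blocked and rng.random() < beta:
                    informed.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(informed)

rng = random.Random(0)
G = watts_strogatz(200, 6, 0.1, rng)       # illustrative small-world setting
k = 20                                      # nodes inoculated per intervention
interventions = {
    "random":    set(rng.sample(list(G), k)),
    "degree":    set(sorted(G, key=degree_centrality(G).get, reverse=True)[:k]),
    "closeness": set(sorted(G, key=closeness_centrality(G).get, reverse=True)[:k]),
}
for name, blocked in interventions.items():
    sizes = [spread(G, blocked, rng.choice(list(G)), 0.3, rng) for _ in range(50)]
    print(name, sum(sizes) / len(sizes))
```

Comparing the average cascade sizes across interventions (and repeating for clustered and scale-free generators) is one way to probe the network-type-by-intervention-type relationships the abstract describes.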