Using Neural Architectures to Model Complex Dynamical Systems

Advances in Artificial Intelligence and Machine Learning

The natural, physical, and social worlds abound with feedback processes that make modeling the underlying systems an extremely complex challenge. This paper proposes an end-to-end deep learning approach to modeling such so-called complex systems that addresses two problems: (1) scientific model discovery when we have only incomplete/partial knowledge of system dynamics; (2) integration of graph-structured data into scientific machine learning (SciML) using graph neural networks. Deep learning (DL) has had remarkable success in leveraging large amounts of unstructured data for downstream tasks such as clustering, classification, and regression. Recently, the development of graph neural networks has extended DL techniques to the graph-structured data of complex systems. However, DL methods remain largely disconnected from established scientific knowledge, and their contribution to basic science is not always apparent. This disconnect has spurred the development of physics-informed deep learning and, more generally, the emerging discipline of SciML. Modeling complex systems in the physical, biological, and social sciences within the SciML framework requires further considerations. We argue the need to consider heterogeneous, graph-structured data as well as the effective scale at which we can observe system dynamics. Our proposal would open up a joint approach spanning the previously distinct fields of graph representation learning and SciML.
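A minimal sketch of the kind of hybrid model this abstract describes, assuming PyTorch: a known partial model of the dynamics is combined with a learned neural correction. The damped-oscillator "known physics" term, the network sizes, and the Euler rollout are illustrative stand-ins, not the paper's actual architecture.

```python
# Sketch (not the paper's architecture): scientific model discovery under
# incomplete knowledge, via known physics plus a learned correction term.
import torch
import torch.nn as nn

class HybridDynamics(nn.Module):
    def __init__(self, dim: int = 2, hidden: int = 32):
        super().__init__()
        # Learned correction for the unknown part of the vector field.
        self.correction = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def known_physics(self, x: torch.Tensor) -> torch.Tensor:
        # Placeholder partial model: a linear damped oscillator dx/dt = A x.
        A = torch.tensor([[0.0, 1.0], [-1.0, -0.1]])
        return x @ A.T

    def forward(self, x0: torch.Tensor, steps: int, dt: float = 0.01):
        # Simple explicit-Euler rollout; a proper ODE solver would replace this.
        xs, x = [x0], x0
        for _ in range(steps):
            x = x + dt * (self.known_physics(x) + self.correction(x))
            xs.append(x)
        return torch.stack(xs)

model = HybridDynamics()
trajectory = model(torch.randn(8, 2), steps=50)  # batch of 8 initial states
print(trajectory.shape)  # (51, 8, 2)
```

Training would fit the correction term to observed trajectories; when the system state lives on a graph, graph neural network layers could replace the MLP correction, which is the integration the abstract argues for.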

Nicholas Gabriel, Neil F. Johnson

View article >>

Machine Learning Reveals Adaptive COVID-19 Narratives in Online Anti-Vaccination Network

Proceedings of the 2021 Conference of The Computational Social Science Society of the Americas

The COVID-19 pandemic sparked an online “infodemic” of potentially dangerous misinformation. We use machine learning to quantify COVID-19 content from opponents of establishment health guidance, in particular vaccination. We quantify this content in two different ways: number of topics and evolution of keywords. We find that, even in the early stages of the pandemic, the anti-vaccination community had the infrastructure to garner support more effectively than its pro-vaccination counterparts by exhibiting a broader array of discussion topics. This provided an advantage in attracting new users seeking COVID-19 guidance online. We also find that our machine learning framework can pick up on the adaptive nature of discussions within the anti-vaccination community, tracking distrust of authorities, opposition to lockdown orders, and an interest in early vaccine trials. Our approach is scalable and hence addresses the urgent problem social media platforms face in analyzing huge volumes of online health misinformation. With vaccine booster shots being approved and vaccination rates stagnating, such an automated approach is key to understanding how to combat the misinformation that slows efforts to end the pandemic.
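One of the two quantification routes above, keyword evolution, can be illustrated with a short sketch, assuming scikit-learn. The posts and time-window labels below are placeholder data, not the study's corpus.

```python
# Illustrative sketch (not the study's pipeline): track how the top keywords
# in a community's posts change across time windows.
from sklearn.feature_extraction.text import CountVectorizer

windows = {
    "2020-03": ["example post about lockdown orders",
                "example post distrusting authorities"],
    "2020-06": ["example post about early vaccine trials",
                "another post discussing trials"],
}

for label, posts in windows.items():
    vec = CountVectorizer(stop_words="english", max_features=5)
    vec.fit(posts)  # keep only the 5 most frequent non-stopword terms
    print(label, sorted(vec.vocabulary_))  # top keywords for this window
```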

Richard Sear, Rhys Leahy, Nicholas Johnson Restrepo, Yonatan Lupu, Neil Johnson

View article >>

Dynamic Latent Dirichlet Allocation Tracks Evolution of Online Hate Topics

Advances in Artificial Intelligence and Machine Learning

Not only can online hate content spread easily between social media platforms, but its focus can also evolve over time. Machine learning and other artificial intelligence (AI) tools could play a key role in helping human moderators understand how such hate topics are evolving online. Latent Dirichlet Allocation (LDA) has been shown to identify hate topics from a corpus of text associated with online communities that promote hate. However, applying LDA to each day’s data is impractical since the inferred topic list from the optimization can change abruptly from day to day, even though the underlying text and hence topics do not typically change this quickly. Hence, LDA is not well suited to capturing the way in which hate topics evolve and morph. Here we solve this problem by showing that a dynamic version of LDA can help capture this evolution of topics surrounding online hate. Specifically, we show how standard and dynamic LDA models can be used in conjunction to analyze the topics emerging over time from extremist communities across multiple moderated and unmoderated social media platforms. Our dataset comprises material that we have gathered from hate-related communities on Facebook, Telegram, and Gab during the period January to April 2021. We demonstrate the ability of dynamic LDA to shed light on how hate groups use different platforms to propagate their cause and interests across the online multiverse of social media platforms.
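A minimal sketch of dynamic topic modeling in this spirit, assuming gensim's LdaSeqModel (an implementation of dynamic LDA). The documents, vocabulary, and time slices are placeholders for the Facebook/Telegram/Gab corpora, and the topic counts are illustrative.

```python
# Sketch: track how one topic's word distribution drifts across time periods.
from gensim import corpora
from gensim.models import LdaSeqModel

docs = [
    ["hate", "topic", "january"], ["platform", "moderation", "january"],
    ["hate", "topic", "april"], ["platform", "cause", "april"],
]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# time_slice gives the number of documents in each chronological period,
# e.g. two documents from January and two from April.
ldaseq = LdaSeqModel(corpus=corpus, id2word=dictionary,
                     time_slice=[2, 2], num_topics=2)

print(ldaseq.print_topic(topic=0, time=0))  # topic 0 in the first period
print(ldaseq.print_topic(topic=0, time=1))  # the same topic one period later
```

Because the dynamic model ties each period's topics to the previous period's, the topic list evolves smoothly instead of jumping abruptly, which is exactly the failure mode of rerunning standard LDA day by day.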

Richard Sear, Rhys Leahy, Nicholas Johnson Restrepo, Yonatan Lupu, Neil F. Johnson

View article >>

How Social Media Machinery Pulled Mainstream Parenting Communities Closer to Extremes and Their Misinformation During COVID-19

IEEE Access

We reveal hidden social media machinery that has allowed misinformation to thrive among mainstream users, but which is missing from current policy discussions. Specifically, we show how mainstream parenting communities on Facebook have been subject to a powerful, two-pronged misinformation machinery during the pandemic that has pulled them closer to extreme communities and their misinformation. The first prong involves a strengthening of the bond between mainstream parenting communities and pre-COVID conspiracy theory communities that promote misinformation about climate change, fluoride, chemtrails, and 5G. Alternative health communities have acted as the critical conduits. The second prong features an adjacent core of tightly bonded, yet largely under-the-radar, anti-vaccination communities that continually supplied COVID-19 and vaccine misinformation to the mainstream parenting communities. Our findings show why Facebook’s own efforts to post reliable information about vaccines and COVID-19 have not been effective; why targeting the largest communities does not work; and how this machinery could generate new pieces of misinformation perpetually. We provide a simple yet exactly solvable mathematical theory for the system’s dynamics. It predicts a new strategy for controlling mainstream community tipping points. Our conclusions should be applicable to any social media platform with built-in community features, and they open up a new engineering approach to addressing online misinformation and other harms at scale.
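The "conduit" role of alternative health communities can be illustrated at the network level. This is a hedged sketch with placeholder edges, not the paper's dataset or mathematical theory, assuming networkx: communities are nodes, ties are edges, and high betweenness centrality flags the bridging communities.

```python
# Illustrative sketch: find "conduit" communities that bridge mainstream
# parenting clusters and conspiracy-theory clusters in a community network.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("parenting_1", "alt_health_1"), ("parenting_2", "alt_health_1"),
    ("alt_health_1", "conspiracy_1"), ("conspiracy_1", "conspiracy_2"),
    ("antivax_core_1", "parenting_1"), ("antivax_core_1", "antivax_core_2"),
])

centrality = nx.betweenness_centrality(G)
conduits = sorted(centrality, key=centrality.get, reverse=True)[:2]
print(conduits)  # high-betweenness nodes sit on the paths between clusters
```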

Nicholas J. Restrepo, Lucia Illari, Rhys Leahy, Richard Sear, Yonatan Lupu, Neil F. Johnson

View article >>

Machine Learning Language Models: Achilles Heel for Social Media Platforms and a Possible Solution

Advances in Artificial Intelligence and Machine Learning

Any uptick in new misinformation that casts doubt on COVID-19 mitigation strategies, such as vaccine boosters and masks, could reverse society’s recovery from the pandemic both nationally and globally. This study demonstrates how machine learning language models can automatically generate new COVID-19 and vaccine misinformation that appears fresh and realistic (i.e., human-generated) even to subject matter experts. The study uses the latest version of the GPT model that is public and freely available, GPT-2, and feeds it publicly available text collected from social media communities known for their high levels of health misinformation. The same subject matter experts who classified the original social media data used as input are then asked to categorize the GPT-2 output without knowing about its automated origin. None of them successfully identified all the synthetic text strings as products of the machine model. This presents a clear warning for social media platforms: an unlimited volume of fresh, seemingly human-produced misinformation can be created perpetually on social media using current, off-the-shelf machine learning algorithms that run continually. We then offer a solution: a statistical approach that detects differences between the dynamics of this output and typical human behavior.
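A minimal sketch of the generation setup described above, assuming the Hugging Face transformers pipeline and the publicly available GPT-2 checkpoint. The seed text is a neutral placeholder, not the study's actual input corpus, and the sampling parameters are illustrative.

```python
# Sketch: off-the-shelf GPT-2 continuing a seed text, as in the study's setup.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sketch reproducible

seed_text = "Placeholder seed sentence drawn from the input corpus."
outputs = generator(seed_text, max_length=60,
                    num_return_sequences=3, do_sample=True)
for out in outputs:
    print(out["generated_text"])
```

Sampling (do_sample=True) is what makes each continuation fresh, which is precisely why a purely content-based filter struggles and why the study's proposed detector looks at output dynamics instead.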

Richard Sear, Rhys Leahy, Nicholas Johnson Restrepo, Yonatan Lupu, Neil F. Johnson

View article >>

New Math to Manage Online Misinformation

SIAM News Blogs

Social media continues to amplify the spread of misinformation and other malicious material. Even before the COVID-19 pandemic, a significant amount of misinformation circulated every day on topics like vaccines, the U.S. elections, and the U.K. Brexit vote. Researchers have linked the rise in online hate and extremist narratives to real-world attacks, youth suicides, and mass shootings such as the 2019 mosque attacks in Christchurch, New Zealand. The ongoing pandemic added to this tumultuous online battlefield with misinformation about COVID-19 remedies and vaccines. Misinformation about the origin of COVID-19 has also resulted in real-world attacks against members of the Asian community. In addition, news stories frequently describe how social media misinformation negatively impacts the lives of politicians, celebrities, athletes, and members of the public.

Neil F. Johnson

View article >>

A Public Health Research Agenda for Managing Infodemics: Methods and Results of the First WHO Infodemiology Conference

JMIR Infodemiology

An infodemic is an overflow of information of varying quality that surges across digital and physical environments during an acute public health event. It leads to confusion, risk-taking, and behaviors that can harm health, and it erodes trust in health authorities and public health responses. Owing to the global scale and high stakes of the health emergency, responding to the infodemic related to the pandemic is particularly urgent. Building on diverse research disciplines and expanding the discipline of infodemiology, more evidence-based interventions are needed to design infodemic management interventions and tools and to implement them by health emergency responders.

Calleja et al.

View article >>

Online hate network spreads malicious COVID-19 content outside the control of individual social media platforms

Scientific Reports

We show that malicious COVID-19 content, including racism, disinformation, and misinformation, exploits the multiverse of online hate to spread quickly beyond the control of any individual social media platform. We provide a first mapping of the online hate network across six major social media platforms. We demonstrate how malicious content can travel across this network in ways that subvert platform moderation efforts. Machine learning topic analysis shows quantitatively how online hate communities are sharpening COVID-19 as a weapon, with topics evolving rapidly and content becoming increasingly coherent. Based on mathematical modeling, we provide predictions of how changes to content moderation policies can slow the spread of malicious content.
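The cross-platform mapping described here can be pictured as a directed graph of community-to-community hyperlinks. Below is an illustrative sketch with placeholder nodes and edges, not the paper's actual six-platform mapping, assuming networkx.

```python
# Sketch: trace a route by which content can hop across platforms,
# subverting any single platform's moderation.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    (("group_a", "facebook"), ("channel_b", "telegram")),
    (("channel_b", "telegram"), ("board_c", "4chan")),
    (("board_c", "4chan"), ("group_d", "vk")),
])

# Content removed from the first platform can still reach the last.
path = nx.shortest_path(G, ("group_a", "facebook"), ("group_d", "vk"))
print(" -> ".join(f"{c}@{p}" for c, p in path))
```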

N. Velásquez, R. Leahy, N. Johnson Restrepo, Y. Lupu, R. Sear, N. Gabriel, O. K. Jha, B. Goldberg, N. F. Johnson

View article >>

Hidden order across online extremist movements can be disrupted by nudging collective chemistry

Scientific Reports

Disrupting the emergence and evolution of potentially violent online extremist movements is a crucial challenge. Extremism research has analyzed such movements in detail, focusing on individual- and movement-level characteristics. But are there system-level commonalities in the ways these movements emerge and grow? Here we compare the growth of the Boogaloos, a new and increasingly prominent U.S. extremist movement, to the growth of online support for ISIS, a militant, terrorist organization based in the Middle East that follows a radical version of Islam. We show that the early dynamics of these two online movements follow the same mathematical order despite their stark ideological, geographical, and cultural differences. The evolution of both movements, across scales, follows a single shockwave equation that accounts for heterogeneity in online interactions. These scientific properties suggest specific policies to address online extremism and radicalization. We show how actions by social media platforms could disrupt the onset and ‘flatten the curve’ of such online extremism by nudging its collective chemistry. Our results provide a system-level understanding of the emergence of extremist movements that yields fresh insight into their evolution and possible interventions to limit their growth.
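The abstract cites a single shockwave equation but does not reproduce its closed form, so the sketch below fits a generic S-shaped onset curve as a stand-in, assuming SciPy and NumPy; the time series is synthetic, not Boogaloo or ISIS data.

```python
# Hedged sketch: fit a common onset-growth form to a movement's support
# curve; the paper's shockwave solution would replace this functional form.
import numpy as np
from scipy.optimize import curve_fit

def growth(t, K, r, t0):
    # Generic sigmoidal onset curve with capacity K, rate r, midpoint t0.
    return K / (1.0 + np.exp(-r * (t - t0)))

t = np.arange(0, 30.0)
observed = growth(t, K=1000, r=0.4, t0=15) + np.random.normal(0, 20, t.size)

params, _ = curve_fit(growth, t, observed, p0=[800, 0.1, 10])
print(dict(zip(["K", "r", "t0"], params)))  # fitted onset parameters
```

Fitting the same functional form to ideologically different movements, as the paper does across scales, is what reveals the shared mathematical order the abstract describes.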

N. Velásquez, P. Manrique, R. Sear, R. Leahy, N. Johnson Restrepo, L. Illari, Y. Lupu, N. F. Johnson

View article >>

A computational science approach to understanding human conflict

Journal of Computational Science

We discuss how computational data science and agent-based modeling are shedding new light on the age-old issue of human conflict. While social science approaches focus on individual cases, the recent proliferation of empirical data and complex systems thinking has opened up a computational approach based on identifying common statistical patterns and building generative but minimal agent-based models. We discuss how to reconcile various disparate claims and results in the literature that stand in the way of a unified description and understanding of human wars and conflicts. We also discuss a unified interpretation of the origin of observed deviations from these common power-law patterns in terms of dynamical processes. These findings show that a unified computational science framework can be used to understand and quantitatively describe collective human conflict.
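The statistical-pattern side of this approach can be illustrated with a power-law fit, assuming the `powerlaw` Python package (Alstott et al.); the event-severity data below are synthetic placeholders, not real conflict records.

```python
# Sketch: fit a power law to heavy-tailed "event severity" data and compare
# it against a lognormal alternative, as is standard in conflict statistics.
import numpy as np
import powerlaw

# Synthetic severities drawn from a heavy-tailed (Pareto-like) distribution.
severities = (np.random.pareto(1.5, size=5000) + 1.0) * 10

fit = powerlaw.Fit(severities)
print("alpha:", fit.power_law.alpha, "xmin:", fit.power_law.xmin)

# Positive loglikelihood ratio R favors the power law; p gives significance.
R, p = fit.distribution_compare("power_law", "lognormal")
print("loglikelihood ratio:", R, "p-value:", p)
```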

D. Dylan Johnson Restrepo, Michael Spagat, Stijn van Weezel, Minzhang Zheng, Neil F. Johnson

View article >>