Recent Publications

  • Nonlinear spreading behavior across multi-platform social media universe

    Chaos: An Interdisciplinary Journal of Nonlinear Science

    Understanding how harmful content (mis/disinformation, hate, etc.) manages to spread among online communities within and across social media platforms represents an urgent societal challenge. We develop a nonlinear dynamical model for such viral spreading, which accounts for the fact that online communities dynamically interconnect across multiple social media platforms. Our mean-field theory (Effective Medium Theory) compares well to detailed numerical simulations and provides a specific analytic condition for the onset of outbreaks (i.e., system-wide spreading). It predicts system-wide spreading even when the infection rate is significantly lower than the recovery rate, provided that online communities create links between themselves at high rates and the loss of such links (e.g., due to moderator pressure) is low. Policymakers should, therefore, account for these multi-community dynamics when shaping policies against system-wide spreading.

    Chenkai Xia, Neil Johnson

    View article >>
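
    As a rough illustration of the type of model described above, the sketch below simulates SIS-style spreading among online communities that stochastically create and lose links between themselves. It is a minimal toy, not the paper's actual equations: the parameter names and values (BETA, MU, NU, DELTA) are assumptions chosen only to show how the balance between link creation and link loss can matter even when the infection rate is below the recovery rate.

      # Minimal toy sketch (NOT the paper's model): SIS-type spreading among online
      # communities that dynamically create and lose inter-community links.
      # All parameters are illustrative assumptions.
      import random
      import networkx as nx

      random.seed(0)

      N_COMM = 200    # number of online communities
      BETA   = 0.1    # transmission probability along an existing link, per step
      MU     = 0.2    # recovery probability of an "infected" community, per step
      NU     = 0.5    # probability a community forms a new link, per step
      DELTA  = 0.01   # probability an existing link is removed (moderation), per step
      STEPS  = 200

      g = nx.empty_graph(N_COMM)
      infected = set(random.sample(range(N_COMM), 10))   # seed a few infected communities

      for _ in range(STEPS):
          # adaptive link dynamics: communities create links, moderators remove them
          for node in range(N_COMM):
              target = random.randrange(N_COMM)
              if target != node and random.random() < NU:
                  g.add_edge(node, target)
          for edge in list(g.edges()):
              if random.random() < DELTA:
                  g.remove_edge(*edge)

          # SIS-type contagion across whatever links currently exist
          newly_infected, recovered = set(), set()
          for node in infected:
              for nbr in g.neighbors(node):
                  if nbr not in infected and random.random() < BETA:
                      newly_infected.add(nbr)
              if random.random() < MU:
                  recovered.add(node)
          infected = (infected | newly_infected) - recovered

      print("final infected fraction:", len(infected) / N_COMM)

    Depending on the random seed, the outbreak either dies out early or reaches most communities; lowering NU or raising DELTA pushes it toward die-out even though BETA and MU are unchanged, which is the qualitative point made in the abstract.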

  • Adaptive link dynamics drive online hate networks and their mainstream influence

    npj Complexity

    Online hate is dynamic and adaptive, and may soon surge with new AI/GPT tools. Establishing how hate operates at scale is key to overcoming it. We provide insights that challenge existing policies. Rather than large social media platforms being the key drivers, waves of adaptive links across smaller platforms connect the hate user base over time, fortifying hate networks, bypassing mitigations, and extending their direct influence into the massive neighboring mainstream. Data indicates that hundreds of thousands of people globally, including children, have been exposed. We present governing equations derived from first principles and a tipping-point condition predicting future surges in content transmission. Using the U.S. Capitol attack and a 2023 mass shooting as case studies, our findings offer actionable insights and quantitative predictions down to the hourly scale. The efficacy of proposed mitigations can now be predicted using these equations.

    Minzhang Zheng, Richard Sear, Lucia Illari, Nicholas Restrepo, Neil Johnson

    View article >>
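
    One generic way to picture the "waves of adaptive links" and tipping-point language in this abstract is through simple percolation: as inter-community links accumulate, the largest connected component of the network can jump abruptly once the average number of links per community crosses a threshold. The sketch below is that textbook illustration on an Erdos-Renyi random graph; it is not the paper's governing equations or data, and the sizes and degrees are assumptions.

      # Generic percolation-style illustration (not the paper's governing equations):
      # the largest connected component grows abruptly once the mean number of
      # links per community crosses a threshold.
      import networkx as nx

      N = 2000  # number of communities (illustrative)
      for mean_degree in (0.5, 0.8, 1.0, 1.2, 1.5, 2.0):
          p = mean_degree / (N - 1)                     # Erdos-Renyi link probability
          g = nx.fast_gnp_random_graph(N, p, seed=42)
          giant = max(nx.connected_components(g), key=len)
          print(f"mean degree {mean_degree:.1f}: largest component spans "
                f"{len(giant) / N:.1%} of communities")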

  • Softening online extremes using network engineering

    IEEE Access

    The prevalence of dangerous misinformation and extreme views online has intensified since the onset of the Israel-Hamas war on 7 October 2023. Social media platforms have long grappled with the challenge of providing effective mitigation schemes that can scale to the 5 billion-strong online population. Here, we introduce a novel solution grounded in online network engineering and demonstrate its potential through small pilot studies. We begin by outlining the characteristics of the online social network infrastructure that have rendered previous approaches to mitigating extremes ineffective. We then present our new online engineering scheme and explain how it circumvents these issues. The efficacy of this scheme is demonstrated through a pilot empirical study, which reveals that automatically assembling groups of users online with diverse opinions, guided by a map of the online social media infrastructure, and facilitating their anonymous interactions, can lead to a softening of extreme views. We then employ computer simulations to explore the potential for implementing this scheme online at scale and in an automated manner, without necessitating the contentious removal of specific communities, imposing censorship, relying on preventative messaging, or requiring consensus within the online groups. These pilot studies provide preliminary insights into the effectiveness and feasibility of this approach in online social media settings.

    Elvira Restrepo, Martin Moreno, Lucia Illari, Neil Johnson

    View article >>
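
    The group-assembly step described in this abstract can be pictured with the toy procedure below: given a placeholder opinion score for each user and the community they come from, deal users across small groups so that every group spans the opinion spectrum and mixes communities. The synthetic users, group size, and round-robin rule are assumptions for illustration only; the actual pilot used a map of the social media infrastructure to guide assembly.

      # Toy group-assembly sketch (placeholder data and rule, not the pilot's procedure):
      # form small discussion groups that span the opinion spectrum and mix communities.
      import random

      random.seed(3)

      # placeholder users: (user_id, home_community, opinion score in [-1, 1])
      users = [(f"u{i}", random.choice("ABCDE"), random.uniform(-1, 1)) for i in range(60)]

      GROUP_SIZE = 6
      users_sorted = sorted(users, key=lambda u: u[2])     # order by opinion score
      n_groups = len(users) // GROUP_SIZE

      # deal users round-robin across groups so each group samples the whole spectrum
      groups = [users_sorted[g::n_groups] for g in range(n_groups)]

      for g, members in enumerate(groups):
          opinions = [round(u[2], 2) for u in members]
          communities = sorted({u[1] for u in members})
          print(f"group {g}: opinion spread {min(opinions)}..{max(opinions)}, "
                f"communities {communities}")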

  • Controlling bad-actor-artificial intelligence activity at scale across online battlefields

    PNAS Nexus

    We consider the looming threat of bad actors using artificial intelligence (AI)/Generative Pretrained Transformer to generate harms across social media globally. Guided by our detailed mapping of the online multiplatform battlefield, we offer answers to the key questions of what bad-actor-AI activity will likely dominate, where, when — and what might be done to control it at scale. Applying a dynamical Red Queen analysis from prior studies of cyber and automated algorithm attacks predicts an escalation to daily bad-actor-AI activity by mid-2024 — just ahead of the United States and other global elections. We then use an exactly solvable mathematical model of the observed bad-actor community clustering dynamics to build a Policy Matrix which quantifies the trade-offs between two potentially desirable outcomes: containment of future bad-actor-AI activity vs. its complete removal. We also give explicit plug-and-play formulae for associated risk measures.

    Neil Johnson, Richard Sear, Lucia Illari

    View article >>
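
    The dynamical Red Queen analysis mentioned here builds on earlier escalation studies in which the interval between successive events shrinks approximately as a power law, T_n ~ T_1 * n^(-beta). The sketch below fits and extrapolates such a progress curve using made-up intervals; the numbers, and the one-day threshold, are illustrative assumptions rather than data or parameters from the article.

      # Illustrative Red Queen progress-curve extrapolation (made-up numbers, not the
      # paper's data): interval between successive events T_n ~ T_1 * n**(-beta).
      import numpy as np

      # hypothetical observed intervals (in days) between the first few events
      intervals = np.array([30.0, 21.0, 17.0, 14.5, 12.8])
      n = np.arange(1, len(intervals) + 1)

      # least-squares fit of log T_n = log T_1 - beta * log n
      slope, intercept = np.polyfit(np.log(n), np.log(intervals), 1)
      beta, T1 = -slope, np.exp(intercept)
      print(f"fitted T1 = {T1:.1f} days, beta = {beta:.2f}")

      # event index at which the extrapolated interval drops below one day
      n_daily = int(np.ceil(T1 ** (1.0 / beta)))
      print(f"interval falls below one day around event number {n_daily}")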

  • Complexity of the online distrust ecosystem and its evolution

    Frontiers in Complex Systems

    Collective human distrust—and its associated mis/disinformation—is one of the most complex phenomena of our time, given that approximately 70% of the global population is now online. Current examples include distrust of medical expertise, climate change science, democratic election outcomes—and even distrust of fact-checked events in the current Israel-Hamas and Ukraine-Russia conflicts.

    Lucia Illari, Nicholas J. Restrepo, Neil Johnson

    View article >>

  • Inductive detection of influence operations via graph learning

    Scientific Reports

    Influence operations are large-scale efforts to manipulate public opinion. The rapid detection and disruption of these operations is critical for healthy public discourse. Emergent AI technologies may enable novel operations that evade detection and influence public discourse on social media with greater scale, reach, and specificity. New methods of detection with inductive learning capacity will be needed to identify novel operations before they indelibly alter public opinion and events. To this end, we develop an inductive learning framework that: (1) determines content- and graph-based indicators that are not specific to any operation; (2) uses graph learning to encode abstract signatures of coordinated manipulation; and (3) evaluates generalization capacity by training and testing models across operations originating from Russia, China, and Iran. We find that this framework enables strong cross-operation generalization while also revealing salient indicators, illustrating a generic approach which directly complements transductive methodologies, thereby enhancing detection coverage.

    Nicholas Gabriel, David Broniatowski, Neil Johnson

    View article >>
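
    The core inductive idea of training a detector on one influence operation and evaluating it on another, using only operation-agnostic structural signals, can be sketched in a few lines. The code below is a deliberately simple stand-in (hand-crafted graph features plus logistic regression on synthetic graphs), not the paper's graph-learning framework; every dataset, feature, and label here is an assumption.

      # Toy sketch of the train-on-one-operation / test-on-another idea. This is NOT
      # the paper's graph-learning framework; data and features are synthetic.
      import networkx as nx
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(0)

      def synthetic_operation(n_coord=60, n_organic=240):
          """Build a retweet-style graph where coordinated accounts densely co-link."""
          g = nx.Graph()
          coord = [f"c{i}" for i in range(n_coord)]
          organic = [f"o{i}" for i in range(n_organic)]
          for u in coord:                      # coordinated accounts: dense mutual links
              for v in rng.choice(coord, size=8, replace=False):
                  if u != v:
                      g.add_edge(u, v)
          for u in organic:                    # organic accounts: sparse random links
              for v in rng.choice(coord + organic, size=2, replace=False):
                  if u != v:
                      g.add_edge(u, v)
          labels = {u: 1 for u in coord}
          labels.update({u: 0 for u in organic})
          return g, labels

      def to_xy(g, labels):
          """Operation-agnostic structural features per account: degree, clustering, core."""
          clustering = nx.clustering(g)
          core = nx.core_number(g)
          nodes = list(g.nodes())
          X = np.array([[g.degree(u), clustering[u], core[u]] for u in nodes])
          y = np.array([labels[u] for u in nodes])
          return X, y

      # "train on operation A, test on operation B"
      X_train, y_train = to_xy(*synthetic_operation())
      X_test, y_test = to_xy(*synthetic_operation())
      clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      print("cross-operation AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))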

  • Explaining conflict violence in terms of conflict actor dynamics

    Scientific Reports

    We study the severity of conflict-related violence in Colombia at an unprecedented granular scale in space and across time. Splitting the data into different geographical regions and different historically relevant periods, we uncover variations in the patterns of conflict severity which we then explain in terms of local conflict actors’ different collective behaviors and/or conditions using a simple mathematical model of conflict actors’ grouping dynamics (coalescence and fragmentation). Specifically, variations in the approximate scaling values of the distributions of event lethalities can be explained by the changing strength ratio of the local conflict actors for distinct conflict eras and organizational regions. In this way, our findings open the door to a new granular spectroscopy of human conflicts in terms of local conflict actor strength ratios for any armed conflict.

    Katerina Tkacova, Annette Idler, Neil Johnson, Eduardo López

    View article >>
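
    The grouping dynamics (coalescence and fragmentation) invoked here can be illustrated with the standard toy model from this literature: at each step either two groups, picked with probability proportional to their size, merge, or one group shatters into individuals. In its simplest version this process is known to generate approximately power-law group-size distributions with exponent near 5/2, which is the mechanism used to explain heavy-tailed distributions of event lethality. The parameters below are assumptions for illustration, not the paper's fitted values.

      # Toy coalescence-fragmentation model (assumed parameters, not the paper's fit):
      # groups merge or shatter at random, producing a heavy-tailed size distribution.
      import random
      from collections import Counter

      random.seed(1)

      N_AGENTS = 1000
      P_FRAG   = 0.01      # probability a chosen group fragments into individuals
      STEPS    = 30_000

      groups = [1] * N_AGENTS              # start with every agent in its own group

      for _ in range(STEPS):
          if random.random() < P_FRAG:
              # fragmentation: a group (chosen proportionally to size) shatters
              i = random.choices(range(len(groups)), weights=groups)[0]
              size = groups.pop(i)
              groups.extend([1] * size)
          else:
              # coalescence: two distinct groups (chosen proportionally to size) merge
              if len(groups) < 2:
                  continue
              i, j = random.choices(range(len(groups)), weights=groups, k=2)
              if i == j:
                  continue
              a, b = sorted((i, j), reverse=True)
              groups[b] += groups.pop(a)

      sizes = Counter(groups)
      for s in sorted(sizes)[:10]:
          print(f"group size {s}: {sizes[s]} groups")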

  • Energy transfer in N-component nanosystems enhanced by pulse-driven vibronic many-body entanglement

    Scientific Reports

    The processing of energy by transfer and redistribution plays a key role in the evolution of dynamical systems. At the ultrasmall and ultrafast scale of nanosystems, quantum coherence could in principle also play a role and has been reported in many pulse-driven nanosystems (e.g. quantum dots and even the microscopic Light-Harvesting Complex II (LHC-II) aggregate). Typical theoretical analyses cannot easily be scaled to describe these general N-component nanosystems; they do not treat the pulse dynamically; and they approximate memory effects. Here our aim is to shed light on what new physics might arise beyond these approximations. We adopt a purposely minimal model such that the time-dependence of the pulse is included explicitly in the Hamiltonian. This simple model generates complex dynamics: specifically, pulses of intermediate duration generate highly entangled vibronic (i.e. electronic-vibrational) states that spread multiple excitons – and hence energy – maximally within the system. Subsequent pulses can then act on such entangled states to efficiently channel subsequent energy capture. The underlying pulse-generated vibronic entanglement increases in strength and robustness as N increases.

    Fernando Gómez-Ruiz, Oscar Acevedo, Ferney Rodríguez, Luis Quiroga, Neil Johnson

    View article >>
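
    A rough sense of what "the time-dependence of the pulse is included explicitly in the Hamiltonian" means can be given with a minimal driven spin-boson sketch in QuTiP: N two-level components coupled to one vibrational mode, a Gaussian pulse switched on, and the entanglement between the two-level block and the mode tracked over time. The Hamiltonian, parameter values, and pulse shape below are illustrative assumptions, not the model analyzed in the paper.

      # Rough illustrative sketch (not the paper's Hamiltonian): N two-level systems
      # coupled to one vibrational mode and driven by a Gaussian pulse, integrated by
      # stepping the propagator of the piecewise-constant Hamiltonian. Uses QuTiP.
      import numpy as np
      import qutip as qt

      N, NVIB = 3, 8                    # number of two-level components; boson truncation
      wq = wv = 1.0                     # resonant frequencies (arbitrary units)
      g = 0.2                           # coupling strength (assumption)
      amp, t0, sigma = 0.5, 5.0, 1.5    # pulse amplitude, centre, width (assumptions)

      def embed(single, site):
          """Place a single-qubit operator at one site, identity elsewhere plus the mode."""
          ops = [qt.qeye(2)] * N + [qt.qeye(NVIB)]
          ops[site] = single
          return qt.tensor(ops)

      a  = qt.tensor([qt.qeye(2)] * N + [qt.destroy(NVIB)])
      Jz = sum(embed(qt.sigmaz(), i) for i in range(N)) / 2
      Jx = sum(embed(qt.sigmax(), i) for i in range(N)) / 2
      H0 = wq * Jz + wv * a.dag() * a + g * (a + a.dag()) * Jx

      psi = qt.tensor([qt.basis(2, 0)] * N + [qt.basis(NVIB, 0)])
      dt, entropies = 0.05, []
      for t in np.arange(0.0, 20.0, dt):
          H = H0 + amp * np.exp(-((t - t0) / sigma) ** 2) * Jx    # pulse enters H(t) explicitly
          psi = (-1j * H * dt).expm() * psi                       # exact step for constant H
          # entanglement between the two-level block and the vibrational mode
          entropies.append(qt.entropy_vn(psi.ptrace(list(range(N)))))

      print("max block-mode entanglement entropy:", max(entropies))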

  • Cavity-induced switching between Bell-state textures in a quantum dot

    Physical Review B

    Nanoscale quantum dots in microwave cavities can be used as a laboratory for exploring electron-electron interactions and their spin in the presence of quantized light and a magnetic field. We show how a simple theoretical model of this interplay at resonance predicts complex but measurable effects. New polariton states emerge that combine spin, relative modes, and radiation. These states have intricate spin-space correlations and undergo polariton transitions controlled by the microwave cavity field. We uncover novel topological effects involving highly correlated spin and charge density that display singlet-triplet and inhomogeneous Bell-state distributions. Signatures of these transitions are imprinted in the photon distribution, which will allow for optical read-out protocols in future experiments and nanoscale quantum technologies.

    Santiago Steven Beltrán Romero, Ferney Rodriguez, Luis Quiroga, Neil Johnson

    View article >>

  • Rise of post-pandemic resilience across the distrust ecosystem

    Scientific Reports

    Why does online distrust (e.g., of medical expertise) continue to grow despite numerous mitigation efforts? We analyzed changing discourse within a Facebook ecosystem of approximately 100 million users who were focused pre-pandemic on vaccine (dis)trust. Post-pandemic, their discourse interconnected multiple non-vaccine topics and geographic scales within and across communities. This interconnection confers a unique, system-level (i.e., at the scale of the full network) resistance to mitigations targeting isolated topics or geographic scales—an approach many schemes take due to constrained funding, for example focusing on local health issues but not national elections. Backed by numerical simulations, we propose counterintuitive solutions for more effective, scalable mitigation: utilize “glocal” messaging by blending (1) strategic topic combinations (e.g., messaging about specific diseases with climate change) and (2) geographic scales (e.g., combining local and national focuses).

    Lucia Illari, Nicholas Johnson Restrepo, Neil Johnson

    View article >>

    View video summary >>

  • Shockwavelike Behavior across Social Media

    Physical Review Letters

    Online communities featuring “anti-X” hate and extremism somehow thrive despite moderator pressure. We present a first-principles theory of their dynamics, which accounts for the fact that the online population comprises diverse individuals and evolves in time. The resulting equation represents a novel generalization of nonlinear fluid physics and explains the observed behavior across scales. Its shockwavelike solutions explain how, why, and when such activity rises from “out-of-nowhere,” and show how it can be delayed, reshaped, and even prevented by adjusting the online collective chemistry. This theory and findings should also be applicable to anti-X activity in next-generation ecosystems featuring blockchain platforms and Metaverses.

    Pedro Manrique, Frank Yingjie Huo, Sara El Oud, Minzhang Zheng, Lucia Illari, Neil Johnson

    View article >>
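
    For readers unfamiliar with shockwave formation in nonlinear fluids, the textbook starting point is Burgers' equation, du/dt + u du/dx = nu d2u/dx2, in which a smooth profile steepens into a shock front. The finite-difference sketch below integrates only that classical equation; it is not the generalized equation derived in the paper, and the grid, time step, and viscosity values are assumptions.

      # Textbook Burgers-equation sketch showing shock steepening (NOT the paper's
      # generalized equation); grid size, time step, and viscosity are assumptions.
      import numpy as np

      n_x, n_t = 200, 2000
      L, visc  = 2 * np.pi, 0.01
      dx, dt   = L / n_x, 0.001
      x = np.linspace(0, L, n_x, endpoint=False)
      u = 1.5 + np.sin(x)                     # smooth initial profile, u > 0 everywhere

      grad0 = np.abs(np.gradient(u, dx)).max()
      for _ in range(n_t):
          dudx   = (u - np.roll(u, 1)) / dx                        # upwind difference (u > 0)
          d2udx2 = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
          u = u + dt * (-u * dudx + visc * d2udx2)

      print("max |du/dx| grew from", round(grad0, 2),
            "to", round(np.abs(np.gradient(u, dx)).max(), 2))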

  • Stochastic Modeling of Possible Pasts to Illuminate Future Risk

    Oxford Academic

    Disasters are fortunately uncommon events. Far more common are events that lead to societal crises, which are notable in their impact, but fall short of causing a disaster. Such near-miss events may be reimagined through stochastic modeling to be worse than they actually were. These are termed downward counterfactuals. A spectrum of reimagined events, covering both natural and man-made hazards, is considered. Included is a counterfactual version of the Middle East Respiratory Syndrome (MERS). Attention to this counterfactual coronavirus in 2015 would have prepared the world better for COVID-19.

    Gordon Woo, Neil Johnson

    View article >>

  • Offline events and online hate

    PLOS ONE

    Online hate speech is a critical and worsening problem, with extremists using social media platforms to radicalize recruits and coordinate offline violent events. While much progress has been made in analyzing online hate speech, no study to date has classified multiple types of hate speech across both mainstream and fringe platforms. We conduct a supervised machine learning analysis of 7 types of online hate speech on 6 interconnected online platforms. We find that offline trigger events, such as protests and elections, are often followed by increases in types of online hate speech that bear seemingly little connection to the underlying event. This occurs on both mainstream and fringe platforms, despite moderation efforts, raising new research questions about the relationship between offline events and online speech, as well as implications for online content moderation.

    Yonatan Lupu, Richard Sear, Nicolas Velásquez, Rhys Leahy, Nicholas Johnson Restrepo, Beth Goldberg, Neil Johnson

    View article >>
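
    The supervised machine learning step in studies like this one typically amounts to a text classifier per hate category whose daily output can then be aligned with offline events. The sketch below is a generic TF-IDF plus logistic-regression pipeline on placeholder posts; the example texts, single category, and model choice are all assumptions, not the paper's taxonomy, corpus, or classifiers.

      # Generic supervised text-classification sketch (placeholder texts and labels;
      # not the paper's corpus, hate-speech taxonomy, or models).
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      # placeholder labeled posts: (text, 1 if the post contains the target hate type)
      train = [
          ("hateful slur example targeting group A", 1),
          ("another derogatory post aimed at group A", 1),
          ("discussion of yesterday's football match", 0),
          ("recipe thread for weeknight dinners", 0),
      ]
      texts, labels = zip(*train)

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      clf.fit(texts, labels)

      # daily counts of predicted-positive posts could then be compared against
      # offline trigger events such as elections or protests
      new_posts = ["more abuse aimed at group A", "photos from a weekend hiking trip"]
      print(clf.predict(new_posts))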

  • Losing the battle over best-science guidance early in a crisis: COVID-19 and beyond

    Science Advances

    Ensuring widespread public exposure to best-science guidance is crucial in any crisis, e.g., coronavirus disease 2019 (COVID-19), monkeypox, abortion misinformation, climate change, and beyond. We show how this battle got lost on Facebook very early during the COVID-19 pandemic and why the mainstream majority, including many parenting communities, had already moved closer to more extreme communities by the time vaccines arrived. Hidden heterogeneities in terms of who was talking and listening to whom explain why Facebook’s own promotion of best-science guidance also appears to have missed key audience segments. A simple mathematical model reproduces the exposure dynamics at the system level. Our findings could be used to tailor guidance at scale while accounting for individual diversity and to help predict tipping point behavior and system-level responses to interventions in future crises.

    Lucia Illari, Nicholas J. Restrepo, Neil F. Johnson

    View article >>

  • Dynamic Topic Modeling Reveals Variations in Online Hate Narratives

    Intelligent Computing

    Online hate speech can precipitate and also follow real-world violence, such as the U.S. Capitol attack on January 6, 2021. However, the current volume of content and the wide variety of extremist narratives raise major challenges for social media companies in terms of tracking and mitigating the activity of hate groups and broader extremist movements. This is further complicated by the fact that hate groups and extremists can leverage multiple platforms in tandem in order to adapt and circumvent content moderation within any given platform (e.g. Facebook). We show how the computational approach of dynamic Latent Dirichlet Allocation (LDA) may be applied to analyze similarities and differences between online content shared by extremist communities across social media platforms, including Facebook, Gab, Telegram, and VK, between January and April 2021. We also discuss characteristics revealed by unsupervised machine learning about how hate groups leverage sites to organize, recruit, and coordinate within and across such online platforms.

    Richard Sear, Nicholas Johnson Restrepo, Yonatan Lupu, Neil F. Johnson

    View article >>
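
    The basic LDA step referenced here, before the dynamic component is added, can be sketched with gensim on placeholder posts: fit one topic model over the pooled multi-platform corpus, then compare how posts from each platform load onto the inferred topics. The toy tokens, platform labels, and topic count below are assumptions, not the study's data.

      # Minimal LDA sketch with placeholder tokenized posts (not the study's corpus):
      # fit one topic model, then compare topic loadings per platform.
      from gensim import corpora, models

      platform_posts = {
          "facebook": [["election", "fraud", "ballots"], ["vaccine", "mandate", "protest"]],
          "telegram": [["election", "stolen", "ballots"], ["patriots", "rally", "capitol"]],
          "gab":      [["vaccine", "hoax", "mandate"], ["patriots", "capitol", "rally"]],
      }

      docs = [doc for posts in platform_posts.values() for doc in posts]
      dictionary = corpora.Dictionary(docs)
      corpus = [dictionary.doc2bow(doc) for doc in docs]
      lda = models.LdaModel(corpus, num_topics=2, id2word=dictionary,
                            passes=20, random_state=0)

      for platform, posts in platform_posts.items():
          for doc in posts:
              print(platform, lda.get_document_topics(dictionary.doc2bow(doc)))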

  • The Multiverse of Online Extremism is Held Together by Hate’s Gravity

    Exploring Hate (Brookings)

    Rhys Leahy, Nicolas Velásquez, Nicholas Johnson Restrepo, Yonatan Lupu, Beth Goldberg, Neil F. Johnson

    View book info >>

  • Connectivity Between Russian Information Sources and Extremist Communities Across Social Media Platforms

    Frontiers in Political Science

    The current military conflict between Russia and Ukraine is accompanied by disinformation and propaganda within the digital ecosystem of social media platforms and online news sources. One month prior to the conflict’s February 2022 start, a Special Report by the U.S. Department of State had already highlighted concern about the extent to which Kremlin-funded media were feeding the online disinformation and propaganda ecosystem. Here we address a closely related issue: how Russian information sources feed into online extremist communities. Specifically, we present a preliminary study of how the sector of the online ecosystem involving extremist communities interconnects within and across social media platforms, and how it connects into such official information sources. Our focus here is on Russian domains, European Nationalists, and American White Supremacists. Though necessarily very limited in scope, our study goes beyond many existing works that focus on Twitter, by instead considering platforms such as VKontakte, Telegram, and Gab. Our findings can help shed light on the scope and impact of state-sponsored foreign influence operations. Our study also highlights the need to develop a detailed map of the full multi-platform ecosystem in order to better inform discussions aimed at countering violent extremism.

    Rhys Leahy, Nicholas Johnson Restrepo, Richard Sear, Neil F. Johnson

    View article >>

  • Using Neural Architectures to Model Complex Dynamical Systems

    Advances in Artificial Intelligence and Machine Learning

    The natural, physical and social worlds abound with feedback processes that make the challenge of modeling the underlying system an extremely complex one. This paper proposes an end-to-end deep learning approach to modelling such so-called complex systems, which addresses two problems: (1) scientific model discovery when we have only incomplete/partial knowledge of system dynamics; (2) integration of graph-structured data into scientific machine learning (SciML) using graph neural networks. It is well known that deep learning (DL) has had remarkable success in leveraging large amounts of unstructured data into downstream tasks such as clustering, classification, and regression. Recently, the development of graph neural networks has extended DL techniques to graph structured data of complex systems. However, DL methods still appear largely disconnected from established scientific knowledge, and the contribution to basic science is not always apparent. This disconnect has spurred the development of physics-informed deep learning, and more generally, the emerging discipline of SciML. Modelling complex systems in the physical, biological, and social sciences within the SciML framework requires further considerations. We argue the need to consider heterogeneous, graph-structured data as well as the effective scale at which we can observe system dynamics. Our proposal would open up a joint approach to the previously distinct fields of graph representation learning and SciML.

    Nicholas Gabriel, Neil F. Johnson

    View article >>
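
    The general idea of learning dynamics on graph-structured data can be illustrated with a small message-passing step in PyTorch: a network that takes node states and an adjacency matrix and predicts node states at the next time step, trained here against toy diffusion dynamics. This is an illustrative stand-in, not the architecture proposed in the paper, and the ground-truth dynamics and hyperparameters are assumptions.

      # Minimal message-passing sketch in PyTorch (illustrative toy only, not the paper's
      # architecture): learn to map node states x_t on a graph to next-step states x_{t+1}.
      import torch
      import torch.nn as nn

      class GraphDynamicsStep(nn.Module):
          def __init__(self, dim_state, dim_hidden=32):
              super().__init__()
              self.self_mlp = nn.Linear(dim_state, dim_hidden)
              self.neigh_mlp = nn.Linear(dim_state, dim_hidden)
              self.out = nn.Linear(dim_hidden, dim_state)

          def forward(self, x, adj):
              # x: (num_nodes, dim_state); adj: (num_nodes, num_nodes) adjacency matrix
              deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
              neigh_mean = adj @ x / deg                   # mean over neighbours
              h = torch.relu(self.self_mlp(x) + self.neigh_mlp(neigh_mean))
              return x + self.out(h)                       # residual next-state prediction

      # toy training data: one step of diffusion-like dynamics on a random graph
      torch.manual_seed(0)
      n, d = 20, 3
      adj = (torch.rand(n, n) < 0.2).float()
      adj = ((adj + adj.T) > 0).float()
      adj.fill_diagonal_(0)
      x_t = torch.randn(n, d)
      deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
      x_next = x_t + 0.1 * (adj @ x_t / deg - x_t)         # assumed ground-truth dynamics

      model = GraphDynamicsStep(d)
      opt = torch.optim.Adam(model.parameters(), lr=1e-2)
      for step in range(200):
          loss = nn.functional.mse_loss(model(x_t, adj), x_next)
          opt.zero_grad()
          loss.backward()
          opt.step()
      print("final training loss:", loss.item())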

  • Machine Learning Reveals Adaptive COVID-19 Narratives in Online Anti-Vaccination Network

    Proceedings of the 2021 Conference of The Computational Social Science Society of the Americas

    The COVID-19 pandemic sparked an online “infodemic” of potentially dangerous misinformation. We use machine learning to quantify COVID-19 content from opponents of establishment health guidance, in particular vaccination. We quantify this content in two different ways: number of topics and evolution of keywords. We find that, even in the early stages of the pandemic, the anti-vaccination community had the infrastructure to more effectively garner support than its pro-vaccination counterparts by exhibiting a broader array of discussion topics. This provided an advantage in terms of attracting new users seeking COVID-19 guidance online. We also find that our machine learning framework can pick up on the adaptive nature of discussions within the anti-vaccination community, tracking distrust of authorities, opposition to lockdown orders, and an interest in early vaccine trials. Our approach is scalable and hence tackles the urgent problem facing social media platforms of having to analyze huge volumes of online health misinformation. With vaccine booster shots being approved and vaccination rates stagnating, such an automated approach is key to understanding how to combat the misinformation that slows the eradication of the pandemic.

    Richard Sear, Rhys Leahy, Nicholas Johnson Restrepo, Yonatan Lupu, Neil Johnson

    View article >>

  • Dynamic Latent Dirichlet Allocation Tracks Evolution of Online Hate Topics

    Advances in Artificial Intelligence and Machine Learning

    Not only can online hate content spread easily between social media platforms, but its focus can also evolve over time. Machine learning and other artificial intelligence (AI) tools could play a key role in helping human moderators understand how such hate topics are evolving online. Latent Dirichlet Allocation (LDA) has been shown to be able to identify hate topics from a corpus of text associated with online communities that promote hate. However, applying LDA to each day’s data is impractical since the inferred topic list from the optimization can change abruptly from day to day, even though the underlying text and hence topics do not typically change this quickly. Hence, LDA is not well suited to capture the way in which hate topics evolve and morph. Here we solve this problem by showing that a dynamic version of LDA can help capture this evolution of topics surrounding online hate. Specifically, we show how standard and dynamical LDA models can be used in conjunction to analyze the topics over time emerging from extremist communities across multiple moderated and unmoderated social media platforms. Our dataset comprises material that we have gathered from hate-related communities on Facebook, Telegram, and Gab during the time period January-April 2021. We demonstrate the ability of dynamic LDA to shed light on how hate groups use different platforms in order to propagate their cause and interests across the online multiverse of social media platforms.

    Richard Sear, Rhys Leahy, Nicholas Johnson Restrepo, Yonatan Lupu, Neil F. Johnson

    View article >>
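
    A minimal version of the standard-plus-dynamic LDA workflow described here can be sketched with gensim's LdaSeqModel: documents are kept in time order, grouped into slices, and the topic-word distributions are allowed to drift smoothly from slice to slice rather than being re-fit from scratch each day. The toy documents, slice boundaries, and topic count below are placeholders, not the paper's corpus.

      # Minimal dynamic-LDA sketch with placeholder data (not the paper's corpus):
      # documents are ordered in time and grouped into slices so topic-word
      # distributions can drift smoothly from one slice to the next.
      from gensim import corpora
      from gensim.models import ldaseqmodel

      # toy tokenized documents for three consecutive time slices (e.g., three months)
      slices = [
          [["lockdown", "protest", "freedom"], ["lockdown", "mask", "mandate"]],   # slice 1
          [["vaccine", "trial", "mandate"], ["vaccine", "protest", "freedom"]],    # slice 2
          [["election", "fraud", "protest"], ["election", "vaccine", "mandate"]],  # slice 3
      ]
      docs = [doc for sl in slices for doc in sl]
      time_slice = [len(sl) for sl in slices]          # number of documents per slice

      dictionary = corpora.Dictionary(docs)
      corpus = [dictionary.doc2bow(doc) for doc in docs]

      ldaseq = ldaseqmodel.LdaSeqModel(corpus=corpus, id2word=dictionary,
                                       time_slice=time_slice, num_topics=2)
      for t in range(len(slices)):
          print(f"topics in time slice {t}:", ldaseq.print_topics(time=t))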