Multispecies Cohesion: Humans, Machinery, AI, and Beyond

Physical Review Letters

The global chaos caused by the July 19, 2024 technology meltdown highlights the need for a theory of what large-scale cohesive behaviors—dangerous or desirable—could suddenly emerge from future systems of interacting humans, machinery, and software, including artificial intelligence; when they will emerge; and how they will evolve and be controlled. Here, we offer answers by introducing an aggregation model that accounts for the interacting entities’ inter- and intraspecies diversities. It yields a novel multidimensional generalization of existing aggregation physics. We derive exact analytic solutions for the time to cohesion and growth of cohesion for two species, and some generalizations for an arbitrary number of species. These solutions reproduce—and offer a microscopic explanation for—an anomalous nonlinear growth feature observed in various current real-world systems. Our theory suggests good and bad “surprises” will appear sooner and more strongly as humans, machinery, artificial intelligence, and so on interact more, but it also offers a rigorous approach for understanding and controlling this.
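The sudden onset of cohesion the abstract refers to can be illustrated with a minimal kinetic Monte Carlo sketch. This is our own toy construction, not the paper's model: it uses a single species with a multiplicative coalescence kernel and omits the inter- and intraspecies diversity that the paper treats analytically.

```python
import random

def simulate_cohesion(n=400, seed=1):
    """Toy kinetic Monte Carlo coalescence with a multiplicative kernel.

    Pairs of clusters merge with probability proportional to the product
    of their sizes; returns the largest-cluster fraction after each merge
    event, a crude proxy for the growth of 'cohesion'.
    """
    rng = random.Random(seed)
    clusters = [1] * n          # start from n isolated entities
    trajectory = []
    while len(clusters) > 1:
        # pick two distinct clusters, each weighted by its size
        i = rng.choices(range(len(clusters)), weights=clusters)[0]
        j = rng.choices(range(len(clusters)), weights=clusters)[0]
        if i == j:
            continue
        clusters[i] += clusters[j]   # merge j into i ...
        clusters.pop(j)              # ... and remove j
        trajectory.append(max(clusters) / n)
    return trajectory

traj = simulate_cohesion()
```

Qualitatively, with a multiplicative kernel the largest-cluster fraction stays small over many merge events and then grows sharply, a gelation-like transition analogous to the abrupt "surprises" discussed above.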

Frank Yingjie Huo, Pedro Manrique, Neil Johnson

Contact us for full paper!

View article >>

How U.S. Presidential elections strengthen global hate networks

NPJ Complexity

Local or national politics can be a catalyst for potentially dangerous hate speech. But with a third of the world’s population eligible to vote in 2024 elections, we need an understanding of how individual-level hate multiplies up to the collective global scale. We show, based on the most recent U.S. presidential election, that offline events are associated with rapid adaptations of the global online hate universe that strengthen both its network-of-networks structure and the types of hate content that it collectively produces. Approximately 50 million accounts in hate communities are drawn closer to each other and to a broad mainstream of billions. The election triggered new hate content at scale around immigration, ethnicity, and antisemitism that aligns with conspiracy theories about Jewish-led replacement. Telegram acts as a key hardening agent, yet it is overlooked by U.S. Congressional hearings and new E.U. legislation. Because the hate universe has remained robust since 2020, anti-hate messaging surrounding global events (e.g., upcoming elections or the war in Gaza) should pivot to blending multiple hate types while targeting previously untouched social media structures.

Akshay Verma, Richard Sear, Neil Johnson

View article >>

Non-equilibrium physics of multi-species assembly applied to fibrils inhibition in biomolecular condensates and growth of online distrust

Scientific Reports

Self-assembly is a key process in living systems—from the microscopic biological level (e.g. assembly of proteins into fibrils within biomolecular condensates in a human cell) through to the macroscopic societal level (e.g. assembly of humans into common-interest communities across online social media platforms). The components in such systems (e.g. macromolecules, humans) are highly diverse, and so are the self-assembled structures that they form. However, there is no simple theory of how such structures assemble from a multi-species pool of components. Here we provide a very simple model which trades myriad chemical and human details for a transparent analysis, and yields results in good agreement with recent empirical data. It reveals a new inhibitory role for biomolecular condensates in the formation of dangerous amyloid fibrils, as well as a kinetic explanation of why so many diverse distrust movements are now emerging across social media. The nonlinear dependencies that we uncover suggest new real-world control strategies for such multi-species assembly.

Pedro Manrique, Frank Yingjie Huo, Sara El Oud, Neil Johnson

View article >>

Nonlinear spreading behavior across multi-platform social media universe

Chaos: An Interdisciplinary Journal of Nonlinear Science

Understanding how harmful content (mis/disinformation, hate, etc.) manages to spread among online communities within and across social media platforms represents an urgent societal challenge. We develop a non-linear dynamical model for such viral spreading, which accounts for the fact that online communities dynamically interconnect across multiple social media platforms. Our mean-field theory (Effective Medium Theory) compares well to detailed numerical simulations and provides a specific analytic condition for the onset of outbreaks (i.e., system-wide spreading). It predicts system-wide spreading even if the infection rate is significantly lower than the recovery rate, provided that online communities create links between themselves at high rates and the loss of such links (e.g., due to moderator pressure) is low. Policymakers should, therefore, account for these multi-community dynamics when shaping policies against system-wide spreading.
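The flavor of this outbreak condition can be sketched with a toy pair of coupled equations. This is not the paper's Effective Medium Theory: the equations, the parameter names (beta, mu, c, ell), and the values below are all our own assumptions, chosen only to show that high link creation can compensate for a sub-critical infection rate.

```python
def spread(beta, mu, c, ell, steps=2000, dt=0.01):
    """Euler-integrate a toy mean-field model: infected fraction I on a
    network whose mean link density k relaxes toward c/ell.

        dI/dt = beta * k * I * (1 - I) - mu * I
        dk/dt = c - ell * k

    beta: infection rate, mu: recovery rate,
    c: link-creation rate, ell: link-loss rate.
    """
    I, k = 1e-3, 0.0
    for _ in range(steps):
        dI = beta * k * I * (1 - I) - mu * I
        dk = c - ell * k
        I += dI * dt
        k += dk * dt
    return I

# infection rate below recovery rate in both runs (beta < mu)
low_links  = spread(beta=0.5, mu=1.0, c=0.1, ell=1.0)  # k* = 0.1: dies out
high_links = spread(beta=0.5, mu=1.0, c=5.0, ell=1.0)  # k* = 5: outbreak
```

In this toy version the onset condition is beta * (c / ell) > mu, so system-wide spreading occurs with beta < mu whenever links are created much faster than they are lost, mirroring the abstract's qualitative claim.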

Chenkai Xia, Neil Johnson

View article >>

Adaptive link dynamics drive online hate networks and their mainstream influence

NPJ Complexity

Online hate is dynamic and adaptive, and may soon surge with new AI/GPT tools. Establishing how hate operates at scale is key to overcoming it. We provide insights that challenge existing policies. Rather than large social media platforms being the key drivers, waves of adaptive links across smaller platforms connect the hate user base over time, fortifying hate networks, bypassing mitigations, and extending their direct influence into the massive neighboring mainstream. Data indicate that hundreds of thousands of people globally, including children, have been exposed. We present governing equations derived from first principles and a tipping-point condition predicting future surges in content transmission. Using the U.S. Capitol attack and a 2023 mass shooting as case studies, our findings offer actionable insights and quantitative predictions down to the hourly scale. The efficacy of proposed mitigations can now be predicted using these equations.

Minzhang Zheng, Richard Sear, Lucia Illari, Nicholas Restrepo, Neil Johnson

View article >>

Softening online extremes using network engineering

IEEE Access

The prevalence of dangerous misinformation and extreme views online has intensified since the onset of the Israel-Hamas war on 7 October 2023. Social media platforms have long grappled with the challenge of providing effective mitigation schemes that can scale to the 5 billion-strong online population. Here, we introduce a novel solution grounded in online network engineering and demonstrate its potential through small pilot studies. We begin by outlining the characteristics of the online social network infrastructure that have rendered previous approaches to mitigating extremes ineffective. We then present our new online engineering scheme and explain how it circumvents these issues. The efficacy of this scheme is demonstrated through a pilot empirical study, which reveals that automatically assembling groups of users online with diverse opinions, guided by a map of the online social media infrastructure, and facilitating their anonymous interactions, can lead to a softening of extreme views. We then employ computer simulations to explore the potential for implementing this scheme online at scale and in an automated manner, without necessitating the contentious removal of specific communities, imposing censorship, relying on preventative messaging, or requiring consensus within the online groups. These pilot studies provide preliminary insights into the effectiveness and feasibility of this approach in online social media settings.

Elvira Restrepo, Martin Moreno, Lucia Illari, Neil Johnson

View article >>

Controlling bad-actor-artificial intelligence activity at scale across online battlefields

PNAS Nexus

We consider the looming threat of bad actors using artificial intelligence (AI)/Generative Pretrained Transformers to generate harms across social media globally. Guided by our detailed mapping of the online multiplatform battlefield, we offer answers to the key questions of what bad-actor-AI activity will likely dominate, where, when, and what might be done to control it at scale. Applying a dynamical Red Queen analysis from prior studies of cyber and automated algorithm attacks predicts an escalation to daily bad-actor-AI activity by mid-2024, just ahead of United States and other global elections. We then use an exactly solvable mathematical model of the observed bad-actor community clustering dynamics to build a Policy Matrix which quantifies the trade-offs between two potentially desirable outcomes: containment of future bad-actor-AI activity vs. its complete removal. We also give explicit plug-and-play formulae for associated risk measures.
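The Red Queen escalation logic can be sketched as a progress curve in which the n-th inter-event interval shrinks as a power law. This is a back-of-envelope illustration only: the functional form is the standard progress-curve ansatz, and the parameter values (a 30-day initial interval, exponent 1.5) are hypothetical placeholders, not fitted values from the paper.

```python
def escalation_point(t1_days, b):
    """Assume the n-th interval between bad-actor events follows a
    progress curve: interval_n = t1_days * n**(-b) days.

    Returns (n, elapsed_days): the event index at which intervals first
    drop below one day (i.e., daily activity) and the days elapsed.
    """
    elapsed, n = 0.0, 1
    while t1_days * n ** (-b) >= 1.0:
        elapsed += t1_days * n ** (-b)   # accumulate the n-th interval
        n += 1
    return n, elapsed

n, days = escalation_point(t1_days=30.0, b=1.5)
```

With these placeholder parameters, intervals fall below one day by the 10th event, roughly two months after the first; fitting t1_days and b to observed event timings is what turns the sketch into a forecast.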

Neil Johnson, Richard Sear, Lucia Illari

View article >>

Complexity of the online distrust ecosystem and its evolution

Frontiers in Complex Systems

Collective human distrust—and its associated mis/disinformation—is one of the most complex phenomena of our time, given that approximately 70% of the global population is now online. Current examples include distrust of medical expertise, climate change science, democratic election outcomes—and even distrust of fact-checked events in the current Israel-Hamas and Ukraine-Russia conflicts.

Lucia Illari, Nicholas J. Restrepo, Neil Johnson

View article >>

Inductive detection of influence operations via graph learning

Scientific Reports

Influence operations are large-scale efforts to manipulate public opinion. The rapid detection and disruption of these operations is critical for healthy public discourse. Emergent AI technologies may enable novel operations that evade detection and influence public discourse on social media with greater scale, reach, and specificity. New methods of detection with inductive learning capacity will be needed to identify novel operations before they indelibly alter public opinion and events. To this end, we develop an inductive learning framework that: (1) determines content- and graph-based indicators that are not specific to any operation; (2) uses graph learning to encode abstract signatures of coordinated manipulation; and (3) evaluates generalization capacity by training and testing models across operations originating from Russia, China, and Iran. We find that this framework enables strong cross-operation generalization while also revealing salient indicators, illustrating a generic approach which directly complements transductive methodologies, thereby enhancing detection coverage.
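As a toy example of an operation-agnostic, graph-based indicator (our own illustration, not the paper's framework), one can measure how often pairs of accounts co-share the same URLs; coordinated operations tend to push this fraction above organic baselines:

```python
from itertools import combinations
from collections import defaultdict

def coordination_indicator(posts, min_shared=2):
    """posts: list of (account, url) pairs.

    Returns the fraction of account pairs that co-shared at least
    `min_shared` distinct URLs -- a crude, operation-agnostic indicator
    of coordinated amplification.
    """
    shared = defaultdict(set)
    for account, url in posts:
        shared[account].add(url)
    accounts = sorted(shared)
    if len(accounts) < 2:
        return 0.0
    pairs = list(combinations(accounts, 2))
    hits = sum(1 for a, b in pairs
               if len(shared[a] & shared[b]) >= min_shared)
    return hits / len(pairs)

posts = [("acct_a", "u1"), ("acct_a", "u2"),
         ("acct_b", "u1"), ("acct_b", "u2"),
         ("acct_c", "u3")]
ci = coordination_indicator(posts)
```

Here acct_a and acct_b co-share two URLs while acct_c shares none, so one of the three account pairs is flagged; indicators of this kind are deliberately independent of any single operation, which is what makes inductive (cross-operation) generalization possible.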

Nicholas Gabriel, David Broniatowski, Neil Johnson

View article >>