Nonlinear spreading behavior across multi-platform social media universe

Chaos: An Interdisciplinary Journal of Nonlinear Science

Understanding how harmful content (mis/disinformation, hate, etc.) manages to spread among online communities within and across social media platforms represents an urgent societal challenge. We develop a nonlinear dynamical model for such viral spreading, which accounts for the fact that online communities dynamically interconnect across multiple social media platforms. Our mean-field theory (Effective Medium Theory) compares well to detailed numerical simulations and provides a specific analytic condition for the onset of outbreaks (i.e., system-wide spreading). Even if the infection rate is significantly lower than the recovery rate, the theory predicts system-wide spreading when online communities create links between themselves at a high rate and lose such links (e.g., due to moderator pressure) at a low rate. Policymakers should, therefore, account for these multi-community dynamics when shaping policies against system-wide spreading.

Chenkai Xia, Neil Johnson

View article >>
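The threshold behavior described in the abstract can be explored with a toy simulation. The sketch below is not the paper's actual model or its Effective Medium Theory; the parameter names (`beta`, `gamma`, `link_gain`, `link_loss`) and the update rules are illustrative assumptions. It merely captures the qualitative setup: communities link up and lose links while an infection-recovery process runs over the current links.

```python
import random

def simulate(n=100, beta=0.1, gamma=0.2, link_gain=0.3, link_loss=0.02,
             steps=300, seed=7):
    """Toy adaptive-network contagion sketch (illustrative, not the paper's model).

    Communities are nodes. Each step, a handful of random pairs try to link
    (probability link_gain per attempt), existing links decay (probability
    link_loss), and an SIS-style infection spreads over surviving links
    (probability beta per discordant link) while infected communities recover
    (probability gamma). Returns the final infected fraction.
    """
    rng = random.Random(seed)
    infected = set(rng.sample(range(n), 5))  # seed a few infected communities
    edges = set()
    for _ in range(steps):
        # Link creation: random community pairs attempt to connect.
        for _ in range(n // 10):
            if rng.random() < link_gain:
                a, b = rng.sample(range(n), 2)
                edges.add((min(a, b), max(a, b)))
        # Link loss, e.g. moderator pressure severing inter-community links.
        edges = {e for e in edges if rng.random() > link_loss}
        # Contagion across links whose endpoints disagree, then recovery.
        newly = {b if a in infected else a
                 for a, b in edges
                 if (a in infected) != (b in infected) and rng.random() < beta}
        recovered = {i for i in infected if rng.random() < gamma}
        infected = (infected | newly) - recovered
    return len(infected) / n
```

Sweeping `link_gain` and `link_loss` while holding `beta` below `gamma` lets one probe, qualitatively, the abstract's point that link dynamics, not the infection rate alone, govern system-wide spreading.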

Adaptive link dynamics drive online hate networks and their mainstream influence

NPJ Complexity

Online hate is dynamic and adaptive, and may soon surge with new AI/GPT tools. Establishing how hate operates at scale is key to overcoming it. We provide insights that challenge existing policies. Rather than large social media platforms being the key drivers, waves of adaptive links across smaller platforms connect the hate user base over time, fortifying hate networks, bypassing mitigations, and extending their direct influence into the massive neighboring mainstream. Data indicate that hundreds of thousands of people globally, including children, have been exposed. We present governing equations derived from first principles and a tipping-point condition that predicts future surges in content transmission. Using the U.S. Capitol attack and a 2023 mass shooting as case studies, our findings offer actionable insights and quantitative predictions down to the hourly scale. The efficacy of proposed mitigations can now be predicted using these equations.

Minzhang Zheng, Richard Sear, Lucia Illari, Nicholas Restrepo, Neil Johnson

View article >>

Softening online extremes using network engineering

IEEE Access

The prevalence of dangerous misinformation and extreme views online has intensified since the onset of the Israel-Hamas war on 7 October 2023. Social media platforms have long grappled with the challenge of providing effective mitigation schemes that can scale to the 5 billion-strong online population. Here, we introduce a novel solution grounded in online network engineering and demonstrate its potential through small pilot studies. We begin by outlining the characteristics of the online social network infrastructure that have rendered previous approaches to mitigating extremes ineffective. We then present our new online engineering scheme and explain how it circumvents these issues. The efficacy of this scheme is demonstrated through a pilot empirical study, which reveals that automatically assembling groups of users online with diverse opinions, guided by a map of the online social media infrastructure, and facilitating their anonymous interactions, can lead to a softening of extreme views. We then employ computer simulations to explore the potential for implementing this scheme online at scale and in an automated manner, without necessitating the contentious removal of specific communities, imposing censorship, relying on preventative messaging, or requiring consensus within the online groups. These pilot studies provide preliminary insights into the effectiveness and feasibility of this approach in online social media settings.

Elvira Restrepo, Martin Moreno, Lucia Illari, Neil Johnson

View article >>
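The group-assembly idea can be sketched with a minimal opinion-dynamics toy. This is an illustrative assumption, not the authors' pilot design or simulation code: opinions are numbers on [-1, 1], random grouping stands in for the map-guided assembly, and anonymous interaction is modeled as a small pull toward the group mean.

```python
import random
import statistics

def soften(opinions, group_size=5, pull=0.1, rounds=20, seed=0):
    """Toy sketch: repeatedly assemble mixed groups and nudge each member
    toward the group's mean opinion (illustrative model only)."""
    rng = random.Random(seed)
    ops = list(opinions)
    for _ in range(rounds):
        idx = list(range(len(ops)))
        rng.shuffle(idx)  # random assembly stands in for map-guided matching
        for g in range(0, len(idx), group_size):
            members = idx[g:g + group_size]
            mean = statistics.fmean(ops[i] for i in members)
            for i in members:
                # Anonymous interaction: small step toward the group mean.
                ops[i] += pull * (mean - ops[i])
    return ops

# Opinions on [-1, 1]; a larger spread means more extreme views present.
rng = random.Random(42)
before = [rng.uniform(-1, 1) for _ in range(100)]
after = soften(before)
```

Because each group update preserves the group's mean while contracting members toward it, the population's opinion spread shrinks over rounds without censoring anyone or requiring group consensus, which is the qualitative effect the abstract reports.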

Predicting and Controlling Bad Actor Artificial Intelligence

Templeton Ideas

In an era of super-accelerated technological advancement, the specter of malevolent artificial intelligence (AI) looms large. While AI holds promise for transforming industries and enhancing human life, the potential for abuse poses significant societal risks. Threats include avalanches of misinformation, deepfake videos, voice mimicry, sophisticated phishing scams, inflammatory ethnic and religious rhetoric, and autonomous weapons that make life-and-death decisions without human intervention.

Read the full article >>

US Researchers Prepare a Map of Sources of Harmful Content

Warsaw Business Journal

Upcoming elections in over 50 countries, including the US and Poland, will encourage creators of harmful content to step up their activities using artificial intelligence. The largest wave of deepfakes is likely to arrive this summer. Analysts have investigated which places in the digital world serve as incubators for “bad actors” and have created a map of them. Small platforms are the main source of the creation and distribution of harmful content.

Read the full article >>