Adaptive link dynamics drive online hate networks and their mainstream influence

npj Complexity

Online hate is dynamic and adaptive, and may soon surge with new AI/GPT tools. Establishing how hate operates at scale is key to overcoming it. We provide insights that challenge existing policies. Rather than large social media platforms being the key drivers, waves of adaptive links across smaller platforms connect the hate user base over time, fortifying hate networks, bypassing mitigations, and extending their direct influence into the massive neighboring mainstream. Data indicate that hundreds of thousands of people globally, including children, have been exposed. We present governing equations derived from first principles and a tipping-point condition predicting future surges in content transmission. Using the U.S. Capitol attack and a 2023 mass shooting as case studies, our findings offer actionable insights and quantitative predictions down to the hourly scale. The efficacy of proposed mitigations can now be predicted using these equations.

Minzhang Zheng, Richard Sear, Lucia Illari, Nicholas Restrepo, Neil Johnson

View article >>
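
The abstract above refers to governing equations and a tipping-point condition for surges in content transmission, but does not reproduce them. Purely as an illustrative sketch, and not the paper's published model, the toy calculation below shows how such a threshold behaves: exposure grows or decays depending on whether a hypothetical link-creation rate a exceeds a removal rate b. The function name and all parameter values are assumptions made here for illustration only.

```python
# Schematic illustration only: NOT the published governing equations.
# A minimal growth-vs-removal model in which the exposed population N(t)
# surges once a hypothetical link-creation rate `a` exceeds a removal rate `b`.
import numpy as np

def simulate_exposure(a, b, n0=1.0, hours=72, dt=0.1):
    """Integrate dN/dt = (a - b) * N with a simple Euler step (hourly units assumed)."""
    steps = int(hours / dt)
    n = np.empty(steps)
    n[0] = n0
    for t in range(1, steps):
        n[t] = n[t - 1] + dt * (a - b) * n[t - 1]
    return n

if __name__ == "__main__":
    below = simulate_exposure(a=0.10, b=0.15)   # below the tipping point: exposure decays
    above = simulate_exposure(a=0.20, b=0.15)   # above the tipping point: exposure surges
    print(f"final exposure, a < b: {below[-1]:.2f}")
    print(f"final exposure, a > b: {above[-1]:.2f}")
```

Running the sketch prints a shrinking value for the sub-threshold case and an exponentially larger one above threshold, which is the qualitative behavior a tipping-point condition of this kind captures.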

Softening online extremes using network engineering

IEEE Access

The prevalence of dangerous misinformation and extreme views online has intensified since the onset of the Israel-Hamas war on 7 October 2023. Social media platforms have long grappled with the challenge of providing effective mitigation schemes that can scale to the 5 billion-strong online population. Here, we introduce a novel solution grounded in online network engineering and demonstrate its potential through small pilot studies. We begin by outlining the characteristics of the online social network infrastructure that have rendered previous approaches to mitigating extremes ineffective. We then present our new online engineering scheme and explain how it circumvents these issues. The efficacy of this scheme is demonstrated through a pilot empirical study, which reveals that automatically assembling groups of users online with diverse opinions, guided by a map of the online social media infrastructure, and facilitating their anonymous interactions, can lead to a softening of extreme views. We then employ computer simulations to explore the potential for implementing this scheme online at scale and in an automated manner, without necessitating the contentious removal of specific communities, imposing censorship, relying on preventative messaging, or requiring consensus within the online groups. These pilot studies provide preliminary insights into the effectiveness and feasibility of this approach in online social media settings.

Elvira Restrepo, Martin Moreno, Lucia Illari, Neil Johnson

View article >>
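
The scheme summarized above assembles diverse groups of users and lets them interact anonymously, with computer simulations used to explore scaling. The paper's own simulation code is not reproduced here; as a generic, hypothetical stand-in, the bounded-confidence sketch below (Deffuant-style opinion averaging within randomly assembled mixed groups) illustrates how repeated small-group interaction can pull extreme opinions toward the center. Group size, confidence bound, and step size are illustrative assumptions, not the study's parameters.

```python
# Generic bounded-confidence sketch, NOT the paper's simulation code.
# Agents hold opinions in [-1, 1]; each round, randomly assembled mixed groups
# interact and receptive members move slightly toward their group mean,
# which tends to soften extreme opinions over time.
import numpy as np

rng = np.random.default_rng(0)

def soften(opinions, group_size=5, rounds=200, confidence=0.8, step=0.1):
    ops = opinions.copy()
    n = len(ops)                                   # assumes n is divisible by group_size
    for _ in range(rounds):
        order = rng.permutation(n)
        for g in order.reshape(-1, group_size):    # assemble random diverse groups
            mean = ops[g].mean()
            close = np.abs(ops[g] - mean) < confidence   # only receptive members update
            ops[g[close]] += step * (mean - ops[g[close]])
    return ops

if __name__ == "__main__":
    initial = rng.uniform(-1, 1, size=500)
    final = soften(initial)
    print(f"mean |opinion| before: {np.abs(initial).mean():.3f}")
    print(f"mean |opinion| after:  {np.abs(final).mean():.3f}")
```

In this toy setting the mean absolute opinion drops over the rounds, illustrating "softening" without removing any agent, which mirrors the non-censorship framing of the scheme described in the abstract.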

Predicting and Controlling Bad Actor Artificial Intelligence

Templeton Ideas

In an era of super-accelerated technological advancement, the specter of malevolent artificial intelligence (AI) looms large. While AI holds promise for transforming industries and enhancing human life, the potential for abuse poses significant societal risks. Threats include avalanches of misinformation, deepfake videos, voice mimicry, sophisticated phishing scams, inflammatory ethnic and religious rhetoric, and autonomous weapons that make life-and-death decisions without human intervention.

Read the full article >>

US Researchers Prepare a Map of Sources of Harmful Content

Warsaw Business Journal

Upcoming elections in over 50 countries, including the US and Poland, will encourage creators of harmful content to increase their activities using artificial intelligence. The largest number of deepfakes is likely to be created in the summer of this year. Analysts have investigated which places in the digital world are incubators for "bad actors" and have created a map of them. Small platforms are the main source of harmful content creation and distribution.

Read the full article >>

This year’s elections around the world are under fire from disinformation and deepfakes. Researchers from the USA have prepared a map of the sources of harmful content

Newseria

Upcoming elections in more than 50 countries, including the U.S. and Poland, will encourage harmful content creators to step up their use of artificial intelligence. The largest number of deepfakes is likely to be created this summer. Analysts have looked at which places in the digital world are incubators for the activities of "bad actors" and have mapped them. Small platforms are the main source of harmful content production and dissemination. In this context, the EU's Digital Services Act can be seen as misguided, as such small platforms will remain practically beyond the reach of the regulations. The scientists suggest basing the fight against this phenomenon on realistic scenarios, and eliminating it completely is not one of them; it is better to limit the effects of disinformation.

Read the full article >>