This year’s elections around the world are under fire from disinformation and deepfakes. Researchers from the USA have mapped the sources of harmful content.

Newseria

Upcoming elections in more than 50 countries, including the U.S. and Poland, are expected to spur harmful-content creators to step up their use of artificial intelligence, with the largest number of deepfakes likely to appear this summer. Analysts have mapped which places in the digital world serve as incubators for the activity of “bad actors” and found that small platforms are the main source of harmful-content production and dissemination. In this light, the EU’s Digital Services Act may miss its mark, since such small platforms will remain practically beyond the reach of the regulations. The researchers recommend basing countermeasures on realistic scenarios, and complete elimination of the phenomenon is not one of them; the more achievable goal is to limit the effects of disinformation.

Read the full article >>

Predicting the risk of bad-actor-AI

Scienmag

According to the study, bad actors are predicted to begin using AI daily by mid-2024. Neil F. Johnson and colleagues map the online landscape of communities centered on hate, beginning with searches for terms found in the Anti-Defamation League Hate Symbols Database and for the names of hate groups tracked by the Southern Poverty Law Center. From the initial list of “bad-actor” communities found using these terms, the authors then assess the communities those bad-actor communities link to, as sketched below.
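The mapping step described here is essentially one round of snowball sampling over a directed graph of community-to-community hyperlinks. A minimal sketch of that idea follows; the seed terms, community names, and link table are hypothetical placeholders, whereas the real study seeds from the ADL and SPLC lists and from crawled platform data:

```python
import networkx as nx

# Hypothetical seed terms; the study uses the ADL Hate Symbols Database
# and the names of SPLC-tracked hate groups.
seed_terms = {"example_hate_term_1", "example_hate_term_2"}

# Hypothetical (source_community, target_community) hyperlinks,
# standing in for links collected by a crawler.
links = [
    ("community_A", "community_B"),
    ("community_B", "community_C"),
    ("community_D", "community_A"),
]

# Hypothetical text content per community, used only to match seed terms.
community_text = {
    "community_A": "... example_hate_term_1 ...",
    "community_B": "benign discussion",
    "community_C": "... example_hate_term_2 ...",
    "community_D": "benign discussion",
}

G = nx.DiGraph(links)

# Step 1: the initial "bad-actor" communities are those whose content
# matches any seed term.
bad_actors = {
    c for c, text in community_text.items()
    if any(term in text for term in seed_terms)
}

# Step 2: collect every community that a bad-actor community links to;
# these one-step neighbours are the ones assessed for inclusion.
candidates = {
    target
    for source in bad_actors if source in G
    for target in G.successors(source)
} - bad_actors

print("seed bad-actor communities:", bad_actors)
print("linked communities to assess:", candidates)
```

Repeating step 2 on newly confirmed communities until no new candidates appear would grow the map iteratively, which is the usual way such link-expansion mappings are run.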

Read the full article >>

Controlling bad-actor-artificial intelligence activity at scale across online battlefields

PNAS Nexus

We consider the looming threat of bad actors using artificial intelligence (AI)/Generative Pretrained Transformers to generate harms across social media globally. Guided by our detailed mapping of the online multiplatform battlefield, we offer answers to the key questions of what bad-actor-AI activity will likely dominate, where, and when, and what might be done to control it at scale. Applying a dynamical Red Queen analysis from prior studies of cyber and automated algorithm attacks predicts an escalation to daily bad-actor-AI activity by mid-2024, just ahead of the United States and other global elections. We then use an exactly solvable mathematical model of the observed bad-actor community clustering dynamics to build a Policy Matrix that quantifies the trade-offs between two potentially desirable outcomes: containment of future bad-actor-AI activity vs. its complete removal. We also give explicit plug-and-play formulae for the associated risk measures.
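The Red Queen escalation referenced in the abstract follows the progress-curve form used in the authors’ earlier work on cyber and insurgent attacks, in which the interval between the (n−1)-th and n-th events shrinks as a power law, τ_n = τ_1·n^(−β) with β > 0. A minimal sketch of how such a curve is extrapolated to a daily event rate; the parameter values below are illustrative placeholders, not fits from the paper:

```python
import math

def interval_days(n, tau1_days, beta):
    """Progress-curve interval between the (n-1)-th and n-th events:
    tau_n = tau_1 * n**(-beta). Illustrative form of the Red Queen
    dynamics; tau1 and beta would be fitted to observed event times."""
    return tau1_days * n ** (-beta)

def first_event_at_daily_rate(tau1_days, beta):
    """Smallest event index n at which intervals fall to one day or less:
    tau_1 * n**(-beta) <= 1  implies  n >= tau_1**(1/beta)."""
    return math.ceil(tau1_days ** (1 / beta))

# Illustrative parameters, not values from the paper:
# first interval of 30 days, escalation exponent 0.8.
tau1, beta = 30.0, 0.8

n_daily = first_event_at_daily_rate(tau1, beta)
elapsed = sum(interval_days(n, tau1, beta) for n in range(1, n_daily + 1))
print(f"daily activity reached at event {n_daily}, "
      f"about {elapsed:.0f} days after the first event")
```

With a fitted τ_1 and β, the same extrapolation yields a calendar date for the crossover to daily activity, which is how a mid-2024 forecast of this kind can be read off the curve.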

Neil Johnson, Richard Sear, Lucia Illari

View article >>