GW Press Office
New research published today in the journal npj Complexity shows that online hate thrives because of a hidden inner web of many small social media platforms – not the few large platforms such as Twitter (X) and Facebook (Meta).
Media coverage related to the team’s work
Elliott School Press Office
In her latest article, “Softening Online Extremes Using Network Engineering,” Elliott School Associate Professor Elvira-Maria Restrepo and her co-authors Martin Moreno, Lucia Illari, and Neil F. Johnson offer solutions for mitigating dangerous misinformation and extreme views online.
Templeton Ideas
In an era of super-accelerated technological advancement, the specter of malevolent artificial intelligence (AI) looms large. While AI holds promise for transforming industries and enhancing human life, the potential for abuse poses significant societal risks. Threats include avalanches of misinformation, deepfake videos, voice mimicry, sophisticated phishing scams, inflammatory ethnic and religious rhetoric, and autonomous weapons that make life-and-death decisions without human intervention.
ABC 27
Nearly two dozen tech companies are promising to combat AI-generated deepfakes designed to trick voters online, but with the 2024 presidential election around the corner, pro-regulation advocates and some lawmakers are pushing for more.
Yahoo News UK
Even as multiple large technology companies signed a pact to stop AI tools from being used to disrupt elections, the technology has evolved to the point where it poses a significant threat.
Warsaw Business Journal
Upcoming elections in over 50 countries, including the US and Poland, will encourage creators of harmful content to increase their activities using artificial intelligence. The largest number of deepfakes is likely to be created this summer. Analysts have investigated which places in the digital world serve as incubators for “bad actors” and have created a map of them. Small platforms are the main source of the creation and distribution of harmful content.
Newseria
Upcoming elections in more than 50 countries, including the U.S. and Poland, will encourage harmful content creators to step up their use of artificial intelligence. The largest number of deepfakes is likely to be created this summer. Analysts have looked at which places in the digital world serve as incubators for “bad actors” and have mapped them. Small platforms are the main source of harmful content production and dissemination. In this context, the EU’s Digital Services Act can be seen as misguided, as such small platforms will fall practically beyond the reach of the regulations. The scientists suggest basing countermeasures on realistic scenarios; complete elimination of the phenomenon is not one of them, so it is better to focus on limiting the effects of disinformation.
Scientific American
AI-generated disinformation will target voters on a near-daily basis in more than 50 countries, according to a new analysis
Getting to the Bottom of It Podcast, GW Hatchet
On this week’s episode of Getting to the Bottom of It, host Lizzie Jensen spoke with GW physics professor Neil Johnson about his recently published research outlining the future of artificial intelligence and how “bad actors” can use it to manipulate information.
Tech Target
The social media giant’s decision to label AI-generated content is a good first step, some observers say, but does not eradicate the problem of disinformation in an election year.