Media coverage related to the team’s work
Al Jazeera
At least 20 states have passed regulations against election deepfakes, but federal action remains stalled.
Education, The Creative Process Podcast
How can physics help solve messy, real-world problems? How can we embrace the possibilities of AI while limiting existential risk and abuse by bad actors?
Neil Johnson is a physics professor at George Washington University. His new initiative in Complexity and Data Science at the Dynamic Online Networks Lab combines cross-disciplinary fundamental research with data science to attack complex real-world problems. His research interests lie in the broad area of Complex Systems and ‘many-body’ out-of-equilibrium systems: collections of interacting objects ranging from crowds of particles to crowds of people, in settings as distinct as quantum information processing in nanostructures and collective behavior on social media.
GW Hatchet
In a study published earlier this month, researchers found that online hate develops on smaller social media platforms rather than on mainstream ones.
Springer-Nature “Behind the Paper”
A first-of-its-kind network map of the online hate ecosystem provides new insight into decentralized behavior during January 6, 2021, and its implications for 2024 and beyond.
GW Press Office
New research published today in the journal npj Complexity shows that online hate thrives because of a hidden inner web of many small social media platforms, not the few large platforms such as Twitter (X) and Facebook (Meta).
Elliott School Press Office
In her latest article, “Softening Online Extremes Using Network Engineering,” Elliott School Associate Professor Elvira-Maria Restrepo and her co-authors Martin Moreno, Lucia Illari, and Neil F. Johnson offer solutions for mitigating dangerous misinformation and extreme views online.
Templeton Ideas
In an era of super-accelerated technological advancement, the specter of malevolent artificial intelligence (AI) looms large. While AI holds promise for transforming industries and enhancing human life, the potential for abuse poses significant societal risks. Threats include avalanches of misinformation, deepfake videos, voice mimicry, sophisticated phishing scams, inflammatory ethnic and religious rhetoric, and autonomous weapons that make life-and-death decisions without human intervention.
ABC 27
Nearly two dozen tech companies are promising to combat AI-generated deepfakes designed to trick voters online, but with the 2024 presidential election around the corner, pro-regulation advocates and some lawmakers are pushing for more.
Yahoo News UK
As multiple large technology companies signed a pact to stop AI tools from being used to disrupt elections, the technology has evolved to a point where it poses a significant threat.
Warsaw Business Journal
Upcoming elections in over 50 countries, including the US and Poland, will encourage creators of harmful content to step up their activities using artificial intelligence, with the greatest wave of deepfakes likely to appear in the summer of this year. Analysts have investigated which places in the digital world serve as incubators for “bad actors” and have created a map of them, finding that small platforms are the main source of the creation and distribution of harmful content.