Tech XPlore
AI models like ChatGPT have amazed the world with their ability to write poetry, solve equations and even pass medical exams. But they can also churn out harmful content or promote disinformation.
ScienceBlog
Researchers have unlocked the mathematical secrets behind artificial intelligence’s most perplexing behaviors, potentially paving the way for safer and more reliable AI systems. A George Washington University physics team has developed the first comprehensive theory explaining why models like ChatGPT sometimes repeat themselves endlessly, make things up, or generate harmful content even from innocent questions.
GW Press Release
AI models like ChatGPT have amazed the world with their ability to write poetry, solve equations and even pass medical exams. But they can also churn out harmful content or promote disinformation.
Ministry of AI
As we zoom deeper into the remarkable world of Artificial Intelligence (AI), specifically Large Language Models (LLMs) like ChatGPT, understanding how they work becomes increasingly vital. Ever wondered why these systems sometimes repeat themselves, imagine bizarre things, or appear biased? A recent research article dives into the core mechanics of these models, explaining their magic through the lens of physics—a fascinating twist that makes this topic both enlightening and accessible!
A detailed breakdown of the AI research paper "Capturing AI's Attention: Physics of Repetition, Hallucination, Bias and Beyond"
We derive a first-principles physics theory of the AI engine at the heart of LLMs’ ‘magic’ (e.g. ChatGPT, Claude): the basic Attention head. The theory allows a quantitative analysis of outstanding AI challenges such as output repetition, hallucination and harmful content, and bias (e.g. from training and fine-tuning). Its predictions are consistent with large-scale LLM outputs. Its 2-body form suggests why LLMs work so well, but hints that a generalized 3-body Attention would make such AI work even better. Its similarity to a spin-bath means that existing Physics expertise could immediately be harnessed to help Society ensure AI is trustworthy and resilient to manipulation.
Frank Yingjie Huo, Neil F. Johnson
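For context on the mechanism the abstract analyzes: the "basic Attention head" it refers to is the standard scaled dot-product Attention of the Transformer architecture. Below is a minimal sketch of a single head in that textbook formulation, not the paper's derivation; all function and variable names are illustrative. The score matrix couples tokens in pairs, which is the kind of pairwise (2-body) interaction the abstract's "2-body form" points to.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the row max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(X, W_q, W_k, W_v):
    """One scaled dot-product Attention head (textbook formulation).

    X:             (n_tokens, d_model) input token embeddings
    W_q, W_k, W_v: (d_model, d_head) learned projection matrices
    Returns:       (n_tokens, d_head) attention output
    """
    Q = X @ W_q  # queries
    K = X @ W_k  # keys
    V = X @ W_v  # values
    d_head = Q.shape[-1]
    # Pairwise token-token scores: entry (i, j) couples token i to token j,
    # the 2-body structure the abstract describes.
    scores = Q @ K.T / np.sqrt(d_head)
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_tokens, d_model, d_head = 5, 16, 8
    X = rng.normal(size=(n_tokens, d_model))
    W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
    print(attention_head(X, W_q, W_k, W_v).shape)  # (5, 8)
```

In a full LLM, many such heads run in parallel in every layer with trained weights; per the abstract, the paper's contribution is a first-principles physical analysis of this pairwise core, not a new implementation.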
The Pinnacle Gazette
Meta's new Community Notes feature encourages user collaboration to add insights and combat misinformation.
TechBullion
Meta’s upcoming Community Notes feature for monitoring misinformation through crowdsourcing will use some technology developed by Elon Musk’s X for its similar service.
NBC News
The feature will roll out on March 18 on Facebook, Instagram and Threads in the United States.
CNBC
Meta’s upcoming Community Notes feature for monitoring misinformation through crowdsourcing will use some technology developed by Elon Musk’s X for its similar service.