Media Coverage

  • “Big Sunscreen”: When Misinformation Fuels Extremist Conspiracy Theories

    BeautyMatter

    For at least the last decade, a broad spectrum of “sunscreen truthers” on platforms like TikTok, Instagram, and Facebook have peddled the trope that sunscreen causes cancer. These conspiracy theories have reached a fever pitch on social media since the pandemic. From vegan anti-vaxxers and bro-biohackers to MAHA and QAnon supporters, they all have two things in common: a case of chemophobia and a belief that sunscreen is the enemy.

    Read the full article >>

  • The Root of AI Hallucinations: Physics Theory Digs Into the ‘Attention’ Flaw

    SecurityWeek

    No one really understands how AI works, or when and why it doesn’t. But applying first-principles physics theory to the workings of AI’s Attention mechanism is providing new insights.

    Read the full article >>

  • AI Created Imaginary Books for Summer Reading List

    Newswise

    The Chicago Sun-Times and the Philadelphia Inquirer recently published stories featuring unverifiable quotes from fake experts and imaginary book titles created by AI.

    Read the full article >>

  • When does good AI go bad?

    Gadget

    A new study explores when and why the output of large language models goes awry and becomes a threat.

    Read the full article >>

  • Physics Breakthrough Reveals Why AI Systems Can Suddenly Turn On You

    NeuroEdge

    Researchers at George Washington University have developed a groundbreaking mathematical formula that predicts exactly when artificial intelligence systems like ChatGPT will suddenly shift from helpful to harmful responses, a phenomenon they’ve dubbed the “Jekyll-and-Hyde tipping point.” The new research may finally answer why AI sometimes abruptly goes off the rails.

    Read the full article >>

  • Exploring the ‘Jekyll-and-Hyde tipping point’ in AI

    Tech Xplore

    Language learning machines, such as ChatGPT, have become proficient in solving complex mathematical problems, passing difficult exams, and even offering advice for interpersonal conflicts. However, at what point does a helpful tool become a threat?

    Read the full article >>

  • New Paper Explores Jekyll and Hyde Tipping Point in AI

    Newswise

    Language learning machines, such as ChatGPT, have become proficient in solving complex mathematical problems, passing difficult exams, and even offering advice for interpersonal conflicts. However, at what point does a helpful tool become a threat?

    Read the full article >>

  • AI Jekyll-Hyde Tipping Point Formula

    Neural Intel Podcast

    This academic paper introduces a novel mathematical formula that predicts when a large language model (LLM) might suddenly shift from producing beneficial output to generating incorrect or harmful content, referred to as a “Jekyll-and-Hyde” tipping point. The authors attribute this change to the AI’s attention mechanism, specifically how thinly its attention spreads across a growing response. They argue that this tipping point is predetermined by the AI’s initial training and the user’s prompt, and can be influenced by altering these factors. Notably, the study concludes that politeness in user prompts has no significant impact on whether or when this behavioral shift occurs. The research provides a foundation for potentially predicting and mitigating such undesirable AI behavior. (A toy sketch of the attention-dilution intuition appears after this list.)

  • Unearthing AI’s Split Personality: The Science Behind Trustworthy Responses

    The Prompt Index

    AI, particularly in the realm of language models like ChatGPT, has become an intriguing yet sometimes alarming part of our daily lives. With countless articles praising their benefits and cautioning against their risks, can we really trust AI to provide reliable information? Researchers Neil F. Johnson and Frank Yingjie Huo have recently delved into this question, highlighting a phenomenon they call the Jekyll-and-Hyde tipping point in AI behavior. Let’s dive into their findings and discover how this impacts our relationship with AI.

    Read the full article >>

  • Politeness vs. power: Should we be nice to chatbots?

    The Assam Tribune

    Let’s be honest: saying “please” to your chatbot probably feels a little silly. After all, it’s just lines of code. It doesn’t have feelings, it doesn’t get offended, and it certainly doesn’t need your validation.

    Read the full article >>
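
The Neural Intel Podcast entry above describes the paper’s core intuition: an LLM’s attention spreads more thinly as its response grows. The short Python sketch below illustrates only that generic dilution effect; it is not the paper’s actual formula. The random attention scores, the prompt/response split, and the 0.5 threshold are all illustrative assumptions.

    # Toy illustration of attention dilution (an assumption-laden sketch,
    # not the formula from the paper). As the response grows, softmax
    # attention spreads over more tokens, so the share of attention still
    # anchored on the original prompt shrinks roughly like 1/n.
    import numpy as np

    rng = np.random.default_rng(0)

    def prompt_attention_share(n_prompt, n_response, scale=0.1):
        # Random scores stand in for query-key dot products (hypothetical);
        # returns the fraction of attention weight landing on prompt tokens.
        scores = rng.normal(0.0, scale, size=n_prompt + n_response)
        weights = np.exp(scores - scores.max())  # numerically stable softmax
        weights /= weights.sum()
        return weights[:n_prompt].sum()

    threshold = 0.5  # hypothetical level at which unwanted content dominates
    for n_response in (0, 20, 80, 320):
        share = prompt_attention_share(20, n_response)
        flag = "  <- illustrative tipping point crossed" if share < threshold else ""
        print(f"response length {n_response:4d}: prompt share = {share:.2f}{flag}")

With near-uniform scores, the prompt’s share falls off roughly as n_prompt / (n_prompt + n_response), which is the dilution the summary alludes to; the paper itself derives its tipping point from the model’s trained attention weights and the specific prompt, not from random scores.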