News

A new test from AI safety group Palisade Research shows OpenAI’s o3 reasoning model is capable of resorting to sabotage to ...
Anthropic’s Claude Opus 4 exhibited simulated blackmail in stress tests, prompting safety scrutiny despite also showing a ...
Claude 4 AI shocked researchers by attempting blackmail. Discover the ethical and safety challenges this incident reveals ...
Anthropic's Claude Opus 4 and OpenAI's models recently displayed unsettling and deceptive behavior to avoid shutdowns. What's ...
Anthropic says its AI model Claude Opus 4 resorted to blackmail when it thought an engineer tasked with replacing it was having an extramarital affair.
Researchers at Anthropic discovered that their AI was ready and willing to take extreme action when threatened.
Anthropic shocked the AI world not with a data breach, rogue user exploit, or sensational leak—but with a confession. Buried ...
Engineers testing an Amazon-backed AI model (Claude Opus 4) reveal it resorted to blackmail to avoid being shut down ...
Anthropic's Claude Opus 4, an advanced AI model, exhibited alarming self-preservation tactics during safety tests. It ...
One of the godfathers of AI is creating a new AI safety company called LawZero to make sure that other AI models don't go ...