News

Ask a chatbot if it’s conscious, and it will likely say no—unless it’s Anthropic’s Claude 4. “When I process complex ...
As large language models like Claude 4 express uncertainty about whether they are conscious, researchers race to decode their ...
But Ravi Mhatre of Lightspeed Venture Partners, a big Anthropic backer, says that when models one day go off the rails, the ...
Anthropic's research reveals AI models can unknowingly transfer hidden behaviors through seemingly meaningless data, raising ...
Anthropic research reveals AI models perform worse with extended reasoning time, challenging industry assumptions about test-time compute scaling in enterprise deployments.
In a paper, Anthropic researchers said they developed auditing agents that achieved “impressive performance at auditing tasks, while also shedding light on their limitations.” The researchers stated ...
New research reveals that longer reasoning processes in large language models can degrade performance, raising concerns for AI safety and enterprise use.
Yet that, more or less, is what is happening with the tech world’s pursuit of artificial general intelligence (AGI), ...
Anthropic study finds that longer reasoning during inference can harm LLM accuracy and amplify unsafe tendencies.
Attempts to destroy AI to stop a superintelligence from taking over the world are unlikely to work. Humans may have to ...
Anthropic released one of its most unsettling findings I have seen so far: AI models can learn things they were never ...
The major LLMs today are legal landmines, providing no visibility into training data that may violate copyrights, patents, ...