News

Anthropic retired its Claude 3 Sonnet model. Several days later, a post on X invited people to celebrate it: "if you're ...
Malicious traits can spread between AI models while remaining undetectable to humans, Anthropic and Truthful AI researchers say.
Roughly 200 people gathered in San Francisco on Saturday to mourn the loss of Claude 3 Sonnet, an older AI model that ...
AI is a relatively new tool, and despite its rapid deployment in nearly every aspect of our lives, researchers are still ...
Anthropic partners with the U.S. government to offer AI tools like Claude for as little as $1, enhancing national security ...
Anthropic found that pushing AI to "evil" traits during training can help prevent bad behavior later — like giving it a ...
Vibe coding, the practice of using an AI assistant to help develop software and applications, has seen rapid adoption in 2025, and vibe coding platforms, many of which are powered by Anthropic’s ...
For the past year, a dark horse contestant has been quietly racking up wins in student hacking competitions: Claude. Why it ...
Artificial intelligence (AI) models from OpenAI, Google and Anthropic have been added to a government purchasing system, ...
The US government’s central purchasing arm is adding OpenAI, Alphabet Inc.’s Google and Anthropic to a list of approved ...
In the paper, Anthropic explained that it can steer these vectors by instructing models to act in certain ways -- for example ...
The companies will see their AI tools offered via a new federal contracting platform, the Multiple Award Schedule (MAS), ...