News

Bloomberg was allowed, and the New York Times wasn't. Anthropic said it had no knowledge of the list and that its contractor, ...
Chain-of-thought monitorability could improve generative AI safety by assessing how models come to their conclusions and ...
In the so-called "constitution" for its chatbot Claude, AI company Anthropic claims that it's committed to principles based ...
Anthropic claims that the US will require "at least 50 gigawatts of electric capacity for AI by 2028" to maintain its AI ...
The new pre-print research paper, out Tuesday, is a joint project between Truthful AI, an AI safety research group in ...
Anthropic has verified in an experiment that several generative AI models are capable of threatening a person ...
US lags behind China in AI race due to energy constraints, says report by AI company Anthropic. China added 400 GW of power ...
Ask a chatbot if it’s conscious, and it will likely say no, unless it’s Anthropic’s Claude 4. “I find myself genuinely uncertain about this,” it replied in a recent conversation. “When I process ...
Anthropic research reveals AI models perform worse with extended reasoning time, challenging industry assumptions about test-time compute scaling in enterprise deployments.
The Silicon Valley start-up says it is ‘concerning’ that the US added only one-tenth of the power capacity that China added last year.
Anthropic released one of the most unsettling findings I have seen so far: AI models can learn things they were never ...
CNBC’s MacKenzie Sigalos reports on the record-breaking AI funding boom and why venture capital firms are finding it harder ...