Anthropic’s Claude Chatbot Goes Down for Thousands of Users
Anthropic PBC’s artificial intelligence chatbot Claude and related consumer-facing applications went down on Monday, with the startup saying it has been grappling with “unprecedented demand” for its services over the past week.
Nearly 2,000 users had reported Claude AI service disruptions at the outage’s peak around 6:40 a.m. New York time, according to service-monitoring website Downdetector. Complaints had dropped to a third of that by 8:40 a.m., but Anthropic said in a statement sent via WhatsApp that “consumer-facing surfaces” such as claude.ai and the company’s apps remained unavailable. Businesses that have integrated Claude’s AI models into their own systems were unaffected.
“We appreciate everyone’s patience as we work to bring things back online while experiencing unprecedented demand for Claude over the last week,” Anthropic said in the statement. As of 9:05 a.m. New York time, the company was “continuing to work on a fix,” according to its status page.
Anthropic has seen a surge in usage of its services as it feuds with the US Defense Department over the potential use of its technology for mass surveillance and the development of autonomous weaponry. The Pentagon has declared Anthropic a supply-chain risk, an unprecedented move against an American company that threatens to have profound consequences for its business. The number of free users of Claude has increased more than 60% since January, and paid subscribers have more than doubled since October, according to Anthropic.
Anthropic has stipulated that its products not be used for surveillance of Americans or to make fully autonomous weapons, and said on Friday that “no amount of intimidation or punishment from the Department of War will change our position.” The company vowed to challenge in court any formal notification that it’s been designated a supply-chain risk, and its chief executive officer, Dario Amodei, called the move “retaliatory and punitive” in an interview with CBS News.
Hours after Anthropic was declared a supply-chain risk, larger rival OpenAI agreed to deploy its own AI models within the Defense Department’s classified network, saying it had reached an agreement that reflects the firm’s principles that prohibit domestic mass surveillance and require “human responsibility for the use of force, including for autonomous weapon systems.”
OpenAI went on to defend its new deal, saying it built a number of safeguards into the contract to help ensure its models are used, and behave, as they should as part of the deployment. But over the weekend, some critics were already calling online for users to cancel their ChatGPT subscriptions in response to the agreement.
The Claude app has meanwhile topped Apple Inc.’s App Store for several days, and Silicon Valley workers have rallied around the company’s stance.
Photo: Anthropic’s Claude. Photographer: Samyukta Lakshmi/Bloomberg