Claude developer draws the line at autonomous weapons, mass surveillance
Dario Amodei, Anthropic
Anthropic’s decision to give ethical considerations precedence over potential military contracts has sparked a debate on the suitability of artificial intelligence for warfare.
Although Anthropic’s chatbot, Claude, has surged in popularity among consumers, the company has been fined by the government for refusing to let the Pentagon use its technology for autonomous weapons and domestic surveillance. The company plans to challenge these penalties in court.
“We held to our exceptions for two reasons. First, we do not believe that today’s frontier AI models are reliable enough to be used in fully autonomous weapons. Allowing current models to be used in this way would endanger America’s warfighters and civilians. Second, we believe that mass domestic surveillance of Americans constitutes a violation of fundamental rights,” said Anthropic CEO Dario Amodei in a statement.
Experts, such as Missy Cummings, a former Navy combat pilot, welcomed Anthropic’s ethical stance but criticised the AI industry for overselling the capabilities of these technologies. They argue that large language models are prone to errors and are not reliable enough for high‑risk situations such as warfare.
Cummings underlined the need for human oversight in every military use of AI, and pointed to the importance of verification and careful consideration before deployment.
She contrasted Anthropic’s position with the message from other AI companies suggesting their technology is close to achieving consciousness. The situation has highlighted the potential downsides of rushing to deploy AI in military applications without fully understanding the risks.
Anthropic’s stance has also struck a chord with consumers, driving a sharp increase in downloads of Claude at the expense of OpenAI’s ChatGPT. OpenAI recently announced an agreement with the Pentagon to replace Anthropic in classified environments.
Sam Altman, CEO of OpenAI, has acknowledged that the company’s decision was rushed and that its communication could have been better.
He said the ethical implications of AI must be considered carefully, and that it was important to work with stakeholders such as the Pentagon to develop appropriate safeguards.
Business AM