Catholic thinkers back Anthropic in court against Pentagon retaliation over AI use in lethal weapons and mass surveillance programs.
Newsroom (18/03/2026, Gaudium Press) A group of fourteen prominent Catholic thinkers has entered a legal and moral confrontation that sits at the heart of America’s struggle to define the ethics of artificial intelligence. Their intervention comes as San Francisco-based AI developer Anthropic PBC sues the U.S. Department of War, accusing the Pentagon of retaliation for the company’s refusal to allow its technology to be used for mass surveillance of U.S. citizens or for lethal autonomous weapon systems (LAWS).
The case, filed in both the U.S. District Court for the Northern District of California and the D.C. federal appeals court, contends that the government’s actions violate Anthropic’s First Amendment right to expression and Fifth Amendment right to due process. The Pentagon, meanwhile, has slapped Anthropic with a “supply chain risk” designation—a label typically reserved for foreign adversaries—voiding a $200 million defense contract and setting off a national debate on the limits of ethical compliance in an age of rapid AI militarization.
Drawing the Moral Line on “Machines That Kill”
At the center of the dispute are lethal autonomous weapons systems—machines capable of identifying and engaging targets without human intervention. The Catholic Church, consistent in its opposition to such technologies for more than two decades, regards them as a violation of the moral principle that war must remain a profoundly human act requiring conscience, judgment, and accountability.
“War is a human activity,” said Charles Camosy, a moral theologian at the Catholic University of America and one of the principal authors of the amicus curiae brief filed on Anthropic’s behalf. “Deadly actions in war require human beings to be the ones morally responsible—and to take moral responsibility—for them to be just.”
Camosy’s position is echoed by Joseph Vukov, a philosophy professor at Loyola University Chicago. “By shifting lethal decision-making from humans to machines,” Vukov explained, “LAWS make the assignment of moral responsibility murky. If no human is involved in a poor decision made by a machine, whom do you blame?”
These arguments form the ethical heart of the Catholic scholars’ brief, which frames the issue as “a narrow but consequential dispute about whether a developer of advanced AI systems may maintain principled limits on certain uses of its technology.”
Anthropic’s Refusal and the Pentagon’s Response
Co-founded by its CEO, Dario Amodei, Anthropic has positioned itself as a leading voice in AI safety and ethics. The company’s refusal to license its systems for fully autonomous weapons or domestic mass surveillance is grounded, Amodei said in February, in “a commitment to upholding minimal standards of ethical conduct in technical progress.”
“We will not knowingly provide a product that puts America’s warfighters and civilians at risk,” Amodei declared, warning that AI-driven surveillance tools could “assemble an all-encompassing picture of anyone’s life automatically and at massive scale,” creating a reality incompatible with democratic society.
In contrast, federal experts and many in the defense establishment argue that the government, not private companies, must determine what technology to develop and how to use it. Still, the Pentagon’s invocation of a supply chain risk designation—a mechanism meant for entities deemed threats to national security—has startled both constitutional scholars and the business community.
Anthropic is now the first U.S.-based company to bear that label. The Department of War has already voided its contracts and signaled that companies working with Anthropic could face a stark choice: partner with the Pentagon, or continue business with Anthropic.
Catholic Ethics and the Question of Privacy
Beyond warfare, the Catholic thinkers also challenge the Pentagon’s ambitions for AI-powered mass surveillance, calling it an affront to human dignity and the right to privacy. Though Catholic teaching acknowledges that privacy is not an absolute right, the scholars argue that “mass surveillance by the Department of War clearly oversteps privacy as described in Catholic thought.”
Their warning echoes growing unease across the political spectrum. Dean Ball, a senior fellow at the Foundation for American Innovation and former AI policy adviser, recently noted that AI allows governments to process and correlate huge amounts of data that were once effectively unmanageable. “AI gives them that infinitely scalable workforce,” Ball said on The Ezra Klein Show. “Every law can be enforced to the letter with perfect surveillance over everything. And that’s a scary future.”
Amodei agrees: “AI-driven mass surveillance presents serious, novel risks to our fundamental liberties. If it remains legal today, that is only because the law hasn’t yet caught up to AI’s capabilities.”
Aligning Principles, Different Futures
The alliance between Anthropic and its Catholic allies is not without its internal tensions. Brian Green, a technology ethicist at Santa Clara University and one of the brief’s signatories, acknowledges that while both parties currently share common cause, their philosophies may one day diverge. “Anthropic takes ethics very seriously,” Green said, “but the company does not completely rule out AI’s future use in lethal weapons. The Church does.”
Yet for now, they march together. As Brian Boyd, another author of the amicus brief, put it: “When an imperfect corporation is willing to forgo profit and undertake risk to stand up for basic principles of prudence and privacy, everyone in a position to speak up in their defense ought to do so.”
The court’s decision—expected later this year—could mark a turning point in how the United States balances technological power, corporate conscience, and constitutional rights. Whether guided by law, ethics, or faith, the outcome will help shape the moral boundaries of AI in war and peace alike.
- Raju Hasmukh with files from Crux Now
