Catholic leaders urge moral restraint as U.S. defense officials clash with Anthropic over refusal to let AI power autonomous weapons.
Newsroom (17/02/2026, Gaudium Press) A widening rift between Silicon Valley and Washington has placed the question of artificial intelligence squarely in the moral spotlight. Anthropic, the San Francisco AI company behind the Claude platform, is now at the center of an escalating dispute with U.S. Secretary of Defense Pete Hegseth and the newly renamed War Department. The conflict stems from Anthropic’s refusal to allow its technology to be used in designing or deploying lethal autonomous weapons systems—commonly known as “killer robots.”
For the Catholic Church, the moment is a test of moral clarity. The Vatican has warned repeatedly that AI must serve humanity, not replace it. In this newest controversy, the Church’s call echoes more urgently than ever.
A Clash of Principles
In late January, Anthropic published a new “Constitution” governing the ethical use of its Claude AI models. The document sets clear boundaries, among them the refusal to facilitate surveillance of U.S. citizens or develop weapons capable of acting without human oversight. This commitment to “appropriate human mechanisms” of control now places the company in direct conflict with the Pentagon’s new Artificial Intelligence Acceleration Strategy.
Secretary Hegseth, himself a Catholic, has framed the issue in starkly pragmatic terms, rejecting what he calls “utopian idealism” in favor of “hard-nosed realism.” His January memorandum directs defense officials to remove all “ideological tuning” from AI systems that, in his view, might obstruct lawful military use. Yet what constitutes “lawful use” remains an unsettled—and deeply ethical—question.
For its part, the Pentagon has hinted that Anthropic may soon be added to its “supply chain risk” list, a designation typically reserved for foreign adversaries. Such a move could force major U.S. firms—including Amazon and OpenAI, both significant investors in Anthropic—to sever ties or risk losing defense contracts. A senior Pentagon official bluntly told reporters the process “will be an enormous pain,” but vowed that the company would “pay a price” for defying military priorities.
The Vatican’s Warning against “Killer Robots”
The Holy See has long viewed lethal autonomous weapons as a profound threat to the moral order. Its diplomats have for years urged the international community to enact binding limits on the weaponization of AI. In a 2024 address to the United Nations, the Vatican reiterated its demand for a moratorium on the development and deployment of autonomous killing systems.
Pope Leo XIV, in continuity with his predecessors, has placed AI within the framework of Catholic social teaching. In his first “state of the world” address this January, the pontiff warned that humanity is entering a new arms race—“the production of ever more sophisticated new weapons, also by means of artificial intelligence.” He called instead for “ethical management and regulatory frameworks focused on freedom and human responsibility.”
For Leo XIV, these concerns represent more than policy preferences—they strike to the core of human dignity. By choosing the regnal name “Leo,” he signaled continuity with both St. Leo the Great, who defended the moral order amid imperial decline, and Leo XIII, whose encyclical Rerum novarum first articulated the Church’s modern doctrine on labor and social justice. The new industrial revolution, the pope has said, is technological: it challenges “the defense of human dignity, justice and labor” as profoundly as the first.
The Ethical Demand of Human Agency
Central to Catholic teaching on warfare is the insistence that moral responsibility cannot be outsourced. The Catechism’s prohibition against indiscriminate killing presumes the presence of human judgment—an intentional decision informed by conscience. A machine capable of identifying, targeting, and killing without human intervention nullifies that essential human element.
The Vatican’s “Rome Call for AI Ethics,” first launched under Pope Francis in 2020, underscores this principle. Co-signed by Microsoft, IBM, and the Pontifical Academy for Life, the initiative calls for transparency, accountability, and above all, respect for human dignity in AI design and deployment. At last year’s Rome conference, addressed by Pope Leo XIV, participants, including an Anthropic delegation, were reminded that technological innovation must serve the “integral development of the human person”—materially, intellectually, and spiritually.
A Moment of Moral Reckoning
The confrontation between Anthropic and the War Department is not merely a policy dispute. It is a sign of the growing moral tension at the heart of modern technological power. The Pentagon argues that national security demands keeping pace with adversaries. The Church argues that security without humanity is no security at all.
As the Vatican presses for a global ban on autonomous weapons and U.S. officials double down on developing them, Catholics are faced with a sobering question: what happens when conscience collides with command?
If there is a line to be drawn between human mastery and mechanical autonomy in warfare, Rome insists it must be drawn clearly—and soon.
- Raju Hasmukh with files from Crux Now