Anthropic, OpenAI Turn To Religious Leaders In Quest To Make ‘Ethical’ AI Models

Representatives from companies like Anthropic and OpenAI met leaders from various religious groups, including Sikhs, Hindus, and Buddhists, during the inaugural “Faith-AI Covenant” roundtable in New York to discuss ways of infusing morality and ethics into AI technology.

Rakshit KumarUpdated: Monday, May 11, 2026, 04:27 PM IST

In a reversal of Silicon Valley’s sceptical stance toward organised religion, leaders in artificial intelligence are now turning to religious figures to help instil moral values in emerging technology.

Representatives from companies like Anthropic and OpenAI held meetings with leaders from various religious groups, including Sikhs, Hindus, and Buddhists, during the inaugural “Faith-AI Covenant” roundtable in New York to discuss ways of infusing morality and ethics into AI technology, the Associated Press reported.

The roundtable was organised by the Geneva-based Interfaith Alliance for Safer Communities. The alliance seeks to address issues such as extremism, radicalisation, and human trafficking.

Similar roundtables are expected to be organised across the world, including in Beijing, Nairobi, and Abu Dhabi.

The initiative comes amid growing public concerns about the ethical and moral behaviour of AI technology in various situations.

According to the report, the roundtable was attended by representatives from the Hindu Temple Society of North America, the Baha’i International Community, The Sikh Coalition, the Greek Orthodox Archdiocese of America, and The Church of Jesus Christ of Latter-day Saints.

Advocates of such discussions argue that regulation of AI technology is inconsistent and may not keep pace with its rapid development.

They believe that if companies can introduce a moral compass into future AI models, the risk of misuse could be reduced.

The goal of the initiative is to develop a set of norms or principles informed by different groups and faiths that companies would follow.

Anthropic is said to be among the AI companies most actively engaging with faith leaders to develop morally responsible AI.

“We want Claude to do what a deeply and skillfully ethical person would do in Claude’s position,” Anthropic states in the public “Claude Constitution” written for its chatbot. That constitution was created with help from various religious and ethics leaders.

Recently, the company also refused to allow the United States government to use its AI models for developing autonomous weapons or conducting mass surveillance of citizens.

However, questions are also being raised about whether ethical AI technology can truly be developed using faith-based guidance and, more broadly, what ethical AI actually means.