With 900 million weekly active ChatGPT users in early 2026 (OpenAI 2026), AI chatbots1 rank among the most widely used tools from which people seek guidance on consequential decisions. From technical assistance to social decisions like dating (Cunningham 2025), investment (Alves 2025), and hiring (Gorelick 2025), artificial intelligence is becoming increasingly embedded in domains once reserved for human judgment. More recently, AI has entered explicitly religious contexts, with millions interacting with AI chatbots framed as spiritual guides or even deities to explore religious moral teachings or confess their moral failings (L. Jackson 2025; Kuper 2025).

At first glance, this phenomenon might seem strange for many reasons. First, religious institutions have been vocally resistant to AI in faith-based practice. In that vein, the Vatican cautioned against substituting artifacts for God (Fernández et al. 2025), Pope Leo XIV (2026) warned against relying on AI as an oracle, and an AI “priest” hosted on Catholic.com was stripped of its clerical title within two days of its launch following public backlash (Hoopes 2024). Second, previous studies in human-robot interaction suggest that exposure to embodied AI (e.g., robot preachers) undermines religious commitment (J. C. Jackson et al. 2023). Taken together, these findings might suggest that morality is a domain that ought to be reserved exclusively for human judgment, and in accord with that view, research shows that people view machine-mediated moral decisions as potentially problematic (Bigman and Gray 2018). Yet the popularity of religious AI chatbots suggests, at the very least, that resistance and openness can coexist within religious populations. Rather than treating this as a simple contradiction, we treat it as a theoretically informative asymmetry: the same features of religiosity that generate wariness toward AI chatbots may, through distinct psychological pathways, also generate openness to AI moral counsel. The present paper aims to begin identifying those pathways.

May the Very Nature of Religion Foster Openness to Advice from a Wide Array of Sources, Including Artificial Intelligence?

Here, we consider that religion may shape openness to AI guidance in ways that go beyond doctrinal content alone and that defy conventional wisdom about how people view AI’s infiltration into human social affairs. Across traditions, religion often involves recurring engagement with questions of meaning (C. Smith 2017), morality (McKay and Whitehouse 2015; Baumsteiger, Chenneville, and McGuire 2013; Duriez and Soenens 2006), and authority (Gifford 2005), while also cultivating psychological orientations such as humility (Aghababaei et al. 2016), openness to wisdom (White, Mosley, and Solomon 2024), anthropomorphic thinking (Guthrie 2021), and sensitivity to agency (Boyer 2001). These features of religious life may make some believers especially receptive to seeking guidance from sources that appear knowledgeable, responsive, and morally relevant, including AI chatbots. As such, and in contrast to the recent AI resistance espoused by formal religious institutions (e.g., Fernández et al. 2025; Leo XIV 2026) and growing documentation of widespread psychological resistance to AI systems occupying increasingly complex social roles (Rubin et al. 2025; Wenger, Cameron, and Inzlicht 2026), we predict that religiosity itself is positively, rather than negatively, associated with openness to AI advice. We stake this prediction partly on what religion teaches and partly on the habits of mind and social cognition it tends to foster. Below, we outline several candidate mechanisms that may support a positive relationship between religiosity and AI advice seeking.

Religion and the Disposition to Moral Consultation

Religions across traditions often involve structured moral deliberation and engagement with ethical questions as central practices that shape communal life and individual identity (McKay and Whitehouse 2015; C. Smith 2017). Religious rituals, texts, and communities provide settings where believers reflect on moral norms, deliberate about right action, and seek guidance from diverse sources such as clergy, scripture, and self-reflection (Baumsteiger, Chenneville, and McGuire 2013; Gifford 2005). These practices make consulting multiple sources of moral guidance a routine part of religious life, and this disposition toward moral consultation may incline religious believers to seek moral advice from AI, much as they consult clergy or family. We therefore expect religiosity to predict greater openness to AI moral advice insofar as it fosters a broader tendency to seek moral guidance in general and from diverse sources. Accordingly, we examine both the frequency of seeking moral advice from 13 non-AI, non-religious sources and general interest in seeking such advice across those same sources as potential mediators of this association.

Religion and Epistemic Humility

Epistemic humility, intellectual curiosity, and openness to higher wisdom are valued across many religious traditions and may have implications for where believers seek guidance (Porter et al. 2016). Although the relationship between religiosity and intellectual humility varies (Choe et al. 2024), religious commitment is associated with Honesty-Humility, Agreeableness, and Conscientiousness, alongside a greater orientation toward virtuous self-development and receptivity to moral learning (Aghababaei et al. 2016). These dispositions may lower resistance to consulting novel or unconventional sources outside the self, provided they are perceived as potentially informative or wisdom-bearing. Other evidence similarly suggests that adults from diverse religious backgrounds attribute greater moral goodness to people who display curiosity about both religious and scientific topics (White, Mosley, and Solomon 2024), consistent with the possibility that religious individuals may value openness in the pursuit of truth. On this basis, religiosity may predict greater interest in seeking moral advice from novel sources, including AI, partly because religious life fosters a general disposition toward humble, curious, and guidance-oriented engagement with morally relevant input.

Religion and Moral Objectivism

A related candidate mechanism concerns beliefs about the nature of morality itself. Many religious believers understand morality as grounded in objective and universal truths rather than as something constructed or shaped solely by context, relationships, or personal perspective. Consistent with this view, 61% of highly religious Americans endorsed moral objectivism (G. A. Smith et al. 2025). When morality is seen as objective, the identity of the channel conveying moral truth may matter less than its capacity to convey that truth faithfully: a machine, a clergy member, or scripture may all serve as potentially viable conduits of an underlying moral reality. This contrasts with situational or relational views of morality, in which the advisor’s identity, credibility, and social position determine the validity or meaning of guidance (Hohenberg and Guess 2023). Moral objectivists, by this logic, may face a lower psychological barrier to considering AI as an acceptable source of moral input, not necessarily because they trust AI more, but because they trust the underlying moral structure that any (reliable) source might reflect.

Religiosity and Positive Perceptions of Artificial Intelligence

“In a world without gods, there is plenty of room for substitutes in the marketplace” (Geraci 2024, 299). The parallels between AI and religious deities may offer another route through which religiosity fosters openness to AI moral guidance. Like deities, generative AI chatbots possess features that seem quasi-omniscient, quasi-omnipotent, and disembodied. They are trained on vast corpora of knowledge, remain continuously available, are relatively indefatigable, and operate from an invisible but ever-present digital “cloud”. Interactions with them are often private, immediate, nonjudgmental, and marked by expressed empathy (Ovsyannikova, Mello, and Inzlicht 2025). They are also perceived as capable of engaging moral questions competently (Dillion et al. 2025), and users sometimes report feeling “more heard” by them than by other humans (Yin, Jia, and Wakslak 2024). In religious traditions that encourage ongoing dialogue with a higher power and continuous pursuit of wisdom, these qualities may make AI chatbots feel like intuitively plausible partners for moral reflection (C. Smith 2017). We therefore propose that religiosity may be associated with more favorable general perceptions of AI chatbots, including greater perceived authority of AI on moral matters and more positive evaluations of chatbots as sources of good moral advice. These perceptions may, in turn, serve as candidate mechanisms linking religiosity to openness to AI moral advice.

Religiosity and Anthropomorphism

Religion has long been characterized by a tendency to perceive the world in humanlike terms, especially by attributing agency, intention, and mind to unseen or ambiguous forces (Guthrie 2021). Correspondingly, in several analyses of basic worldviews, religion and spirituality were most strongly associated with perceiving the world as “alive” (Clifton et al. 2019; Kerry, Lin, and Clifton 2025). More broadly, anthropomorphism is thought to arise from the accessibility of human-centered knowledge, the motivation to understand and predict other agents, and the desire for social connection (Epley, Waytz, and Cacioppo 2007), processes relevant to both deities and AI chatbots, which now display an unprecedented degree of conversational humanlikeness. We propose that religious individuals may be especially likely to extend these tendencies to AI. In particular, religion is associated with a “hypertrophy of social cognition” (Boyer 2001), or a heightened readiness to infer agency and intention in ambiguous contexts, as well as with mind-body dualist intuitions that make it easier to conceive of minds as existing apart from physical bodies (Bloom 2007). Together, these tendencies may make AI chatbots seem more mind-like, agentic, and socially meaningful to religious individuals, thereby increasing openness to seeking moral advice from them.

Studies 1 and 2

Building on the literature outlined above, the present research examines whether religiosity predicts greater openness to seeking moral advice from AI chatbots, and whether this relationship operates through multiple psychological mechanisms. In Study 1, we assessed candidate mediators including the disposition to seek moral consultation from various sources, epistemic humility, moral objectivism, and positive perceptions of AI chatbots as moral advisors. In Study 2, we extended the model by focusing on the tendency to anthropomorphize AI chatbots as an additional candidate mechanism. We also examined fear of negative judgment, self-reflective tendencies, and deference to authority as secondary potential pathways through which religious people may be more likely to seek moral advice from AI chatbots.

Together, these studies advance a counterintuitive claim. Namely, religiosity, far from insulating individuals from AI’s entrance into the moral domain, may systematically facilitate it. If AI chatbots can influence the moral reasoning of religious individuals, the implications extend well beyond psychology. At the societal level, widespread AI-mediated moral guidance could reshape the role of institutional religion, alter the texture of communal life, and redefine individual moral identity. These stakes make the question of how AI systems are designed, deployed, and regulated in morally contested contexts not merely a technical one, but a matter of public policy, one that will require coordinated attention from technology companies, faith communities, and policymakers alike.

References

Aghababaei, Naser, Agata Błachnio, Akram Arji, Masoud Chiniforoushan, Mustafa Tekke, and Alireza Fazeli Mehrabadi. 2016. “Honesty-Humility and the HEXACO Structure of Religiosity and Well-Being.” Current Psychology 35 (3): 421–26. https://doi.org/10.1007/s12144-015-9310-5.
Alves, Joice. 2025. “‘ChatGPT, What Stocks Should I Buy?’ AI Fuels Boom in Robo-Advisory Market.” Reuters, September. https://www.reuters.com/business/finance/chatgpt-what-stocks-should-i-buy-ai-fuels-boom-robo-advisory-market-2025-09-25/.
Baumsteiger, Rachel, Tiffany Chenneville, and Joseph F. McGuire. 2013. “The Roles of Religiosity and Spirituality in Moral Reasoning.” Ethics & Behavior 23 (4): 266–77. https://doi.org/10.1080/10508422.2013.782814.
Bigman, Yochanan E., and Kurt Gray. 2018. “People Are Averse to Machines Making Moral Decisions.” Cognition 181 (December): 21–34. https://doi.org/10.1016/j.cognition.2018.08.003.
Bloom, Paul. 2007. “Religion Is Natural.” Developmental Science 10 (1): 147–51. https://doi.org/10.1111/j.1467-7687.2007.00577.x.
Boyer, Pascal. 2001. Religion Explained: The Evolutionary Origins of Religious Thought. New York: Basic Books.
Choe, Elise J. Y., Stephen Waldron, Choi Hee an, and Steven J. Sandage. 2024. “Intellectual Humility and Religion/Spirituality: A Scoping Review of Research.” The Journal of Positive Psychology 19 (4): 611–28. https://doi.org/10.1080/17439760.2023.2239792.
Clifton, Jeremy D. W., Joshua D. Baker, Crystal L. Park, David B. Yaden, Alicia B. W. Clifton, Paolo Terni, Jessica L. Miller, et al. 2019. “Primal World Beliefs.” Psychological Assessment 31 (1): 82–99. https://doi.org/10.1037/pas0000639.
Cunningham, Mary. 2025. “Unlucky in Love? AI Dating Apps Promise to Help You up Your Game.” CBS News, July. https://www.cbsnews.com/news/ai-dating-assistants-rizz-keepler-hinge-grindr/.
Dillion, Danica, Debanjan Mondal, Niket Tandon, and Kurt Gray. 2025. “AI Language Model Rivals Expert Ethicist in Perceived Moral Expertise.” Scientific Reports 15 (1): 4084. https://doi.org/10.1038/s41598-025-86510-0.
Duriez, Bart, and Bart Soenens. 2006. “Religiosity, Moral Attitudes and Moral Competence: A Critical Investigation of the Religiosity-Morality Relation.” International Journal of Behavioral Development 30 (1): 76–83. https://doi.org/10.1177/0165025406062127.
Epley, Nicholas, Adam Waytz, and John T. Cacioppo. 2007. “On Seeing Human: A Three-Factor Theory of Anthropomorphism.” Psychological Review 114 (4): 864–86. https://doi.org/10.1037/0033-295X.114.4.864.
Fernández, Víctor Manuel, José Tolentino de Mendonça, Armando Matteo, and Paul Tighe. 2025. “Antiqua Et Nova. Note on the Relationship Between Artificial Intelligence and Human Intelligence.” https://www.vatican.va/roman_curia/congregations/cfaith/documents/rc_ddf_doc_20250128_antiqua-et-nova_en.html.
Geraci, Robert M. 2024. “Artificial Canon: AI and the Transformation of Religion.” Current History 123 (856): 296–301. https://online.ucpress.edu/currenthistory/article-abstract/123/856/296/203595.
Gifford, Paul. 2005. “Religious Authority: Scripture, Tradition, Charisma.” In. Routledge.
Gorelick, Evan. 2025. “Recruiters Use AI to Scan Résumés. Applicants Are Trying to Trick It.” The New York Times, October. https://www.nytimes.com/2025/10/07/business/ai-chatbot-prompts-resumes.html.
Guthrie, Stewart Elliott. 2021. “Religion as Anthropomorphism: A Cognitive Theory.” In The Oxford Handbook of Evolutionary Psychology and Religion, edited by James R. Liddle and Todd K. Shackelford. Oxford University Press. https://doi.org/10.1093/oxfordhb/9780199397747.013.6.
Hohenberg, Bernhard Clemm von, and Andrew M. Guess. 2023. “When Do Sources Persuade? The Effect of Source Credibility on Opinion Change.” Journal of Experimental Political Science 10 (3): 328–42. https://doi.org/10.1017/XPS.2022.2.
Hoopes, Tom. 2024. “AI Priest Fr. Justin Absolved Sinners and Served God. How Did This Happen?” https://media.benedictine.edu/ai-priest-fr-justin-abolved-sinners-how-did-this-happen.
Jackson, Joshua Conrad, Kai Chi Yam, Pok Man Tang, Ting Liu, and Azim Shariff. 2023. “Exposure to Robot Preachers Undermines Religious Commitment.” Journal of Experimental Psychology: General 152 (12): 3344–58. https://doi.org/10.1037/xge0001443.
Jackson, Lauren. 2025. “Finding God in the App Store.” The New York Times, September. https://www.nytimes.com/2025/09/14/us/chatbot-god.html.
Kerry, Nicholas, Hsiao-Jui Lin, and Jeremy D. W. Clifton. 2025. “Belief That the World Needs Me Statistically Explains the Relationship Between Religiosity and Subjective Wellbeing Among Americans.” The Journal of Positive Psychology 0 (0): 1–12. https://doi.org/10.1080/17439760.2025.2587058.
Kuper, Simon. 2025. “People Are Using AI to Talk to God.” BBC News, October. https://www.bbc.com/future/article/20251016-people-are-using-ai-to-talk-to-god.
Leo XIV. 2026. “Message of His Holiness Pope Leo XIV for the 60th World Day of Social Communications.” http://www.vatican.va/content/leo-xiv/en/messages/communications/documents/20260124-messaggio-comunicazioni-sociali.html.
McKay, Ryan, and Harvey Whitehouse. 2015. “Religion and Morality.” Psychological Bulletin 141 (2): 447–73. https://doi.org/10.1037/a0038455.
OpenAI. 2026. “Scaling AI for Everyone.” https://openai.com/index/scaling-ai-for-everyone/.
Ovsyannikova, Dariya, Victoria Oldemburgo de Mello, and Michael Inzlicht. 2025. “Third-Party Evaluators Perceive AI as More Compassionate Than Expert Humans.” Communications Psychology 3 (1). https://doi.org/10.1038/s44271-024-00182-6.
Porter, Steven L., Anantanand Rambachan, Abraham Vélez de Cea, Dani Rabinowitz, Stephen Pardue, and Sherman Jackson. 2016. “Religious Perspectives on Humility.” In. Routledge.
Rubin, Matan, Joanna Z. Li, Federico Zimmerman, Desmond C. Ong, Amit Goldenberg, and Anat Perry. 2025. “Comparing the Value of Perceived Human Versus AI-Generated Empathy.” Nature Human Behaviour 9 (11): 2345–59. https://doi.org/10.1038/s41562-025-02247-w.
Smith, Christian. 2017. Religion: What It Is, How It Works, and Why It Matters. Princeton: Princeton University Press.
Smith, Gregory A., Alan Cooperman, Becka A. Alper, Besheer Mohamed, Chip Rotolo, Patricia Tevington, Justin Nortey, Asta Kallo, Jeff Diamant, and Dalia Fahmy. 2025. “Decline of Christianity in the U.S. Has Slowed, May Have Leveled Off.” https://doi.org/10.58094/4kqq-3112.
Wenger, Joshua D., C. Daryl Cameron, and Michael Inzlicht. 2026. “People Choose to Receive Human Empathy Despite Rating AI Empathy Higher.” Communications Psychology 4 (1). https://doi.org/10.1038/s44271-025-00387-3.
White, Cindel J. M., Ariel J. Mosley, and Larisa Heiphetz Solomon. 2024. “Adults Show Positive Moral Evaluations of Curiosity About Religion.” Social Psychological and Personality Science 15 (6): 670–81. https://doi.org/10.1177/19485506231195915.
Yin, Yidan, Nan Jia, and Cheryl J. Wakslak. 2024. “AI Can Help People Feel Heard, but an AI Label Diminishes This Impact.” Proceedings of the National Academy of Sciences 121 (14): e2319112121. https://doi.org/10.1073/pnas.2319112121.

Footnotes

  1. Throughout this manuscript, the terms “artificial intelligence,” “AI,” “AI chatbots,” and “AI systems” refer primarily to LLM-based conversational chatbots, as reflected in our measures. We recognize that artificial intelligence encompasses a far broader range of technologies; however, we focus on chatbot-based interaction because, for much of the public in 2024–2026, AI chatbots represent the most salient and familiar face of artificial intelligence.↩︎