General Discussion
Across four datasets, including two pilot studies and two pre-registered studies, religiosity consistently and positively predicted individuals’ openness to seeking moral advice from AI chatbots. This relationship held for both self-reported religiosity and self-reported engagement in religious behaviors, and it remained robust after controlling for demographic covariates including age, education, income, SES, and political orientation. These findings are notable given prior evidence that robot preachers undermine religious commitment (Jackson et al. 2023), that religious institutions view AI in religious contexts negatively (e.g., Fernández et al. 2025; Leo XIV 2026), and that people generally resist relying on AI for complex social functions (Rubin et al. 2025; Wenger, Cameron, and Inzlicht 2026). The present effect may therefore be specific to the moral function of AI rather than reflecting general technology adoption (see Waytz and Young 2019). The current work instead suggests that religiosity, far from insulating individuals from AI’s entrance into the moral domain, may systematically facilitate it.
The parallel mediation models identified two robust indirect pathways through which religiosity predicted openness to seeking moral advice from AI chatbots, replicating consistently across both studies. The first and dominant pathway ran through individuals’ overall disposition to seek moral advice from diverse non-AI, non-religious sources (cluster a). Religious individuals are more likely to seek moral guidance broadly, from interpersonal relationships, traditional authorities, and digital platforms (see Figures 4–7 in the Supplementary Material for religious sources as well). This general moral consultation disposition extends to AI chatbots. Crucially, this finding implies that religiously engaged individuals do not seek AI moral guidance instead of traditional sources but alongside them, perhaps because religious traditions have long provided people with a natural context for moral discussion and deliberation with a plurality of sources of moral consultation (Gifford 2005; Nuffelen 2020). AI chatbots appear to function as an additional channel within an existing pattern of moral advice-seeking behavior, rather than as a substitute for traditional religious resources.
The second consistent pathway ran through perceived authority of AI chatbots as moral advisors (cluster d). Religiosity predicted a stronger tendency to view AI chatbots as legitimate moral authorities, which in turn predicted more frequent AI moral advice seeking. This finding is theoretically informative. Many religious traditions emphasize seeking guidance from recognized authorities, for example, clergy and religious texts, and these results suggest that AI chatbots may slot into a pre-existing cognitive schema for moral authority (Gifford 2005; Nuffelen 2020). Rather than posing a challenge to religious frameworks, AI chatbots may be evaluated through the same legitimacy criteria applied to other moral authorities.
Open-mindedness on moral issues (cluster b) was a significant mediator in individual mediation analyses, consistent with the centrality of humility, curiosity, and openness to wisdom across many religious traditions (Porter et al. 2016) and with prior research finding a positive association between religiosity and humility (Aghababaei et al. 2016). However, it did not contribute a unique indirect effect in the parallel mediation model once the overall disposition to seek moral advice from diverse non-AI, non-religious sources (cluster a) was simultaneously controlled. This pattern suggests that the independent association of open-mindedness with AI advice seeking largely reflects shared variance with the general moral consultation disposition. Religiously engaged individuals who seek moral advice broadly also tend to be more open to engaging with diverse moral perspectives, but open-mindedness itself does not independently drive the religiosity–AI advice link above and beyond this broader disposition. The same pattern applied to the tendency to anthropomorphize AI chatbots (cluster e) and deference to authority (cluster f) in Study 2: both emerged as significant mediators individually but did not contribute independent indirect effects in the parallel model.
The moderation analysis in Study 2 revealed that participants’ overall access to moral advice sources, but not access to AI chatbot-specific sources, moderated the relationship between religious behavior score and frequency of seeking moral advice from AI chatbots, though this moderation did not generalize to self-reported religiosity or to interest in seeking moral advice from AI chatbots (see Supplementary Material). Among more behaviorally engaged religious participants, those with broader access to moral advice infrastructure also sought AI moral advice more frequently. This finding reinforces the complementarity interpretation: the relationship between religiosity and AI moral advice seeking is embedded within a broader pattern of moral consultation behavior (see Nuffelen 2020), not specifically driven by AI accessibility.
Implications
The present findings carry implications at several levels. Theoretically, they reframe a seeming paradox: despite institutional religion’s resistance to AI encroachment on human moral life (e.g., Fernández et al. 2025; Leo XIV 2026) and prior evidence that religious believers resist technology entering human-specific domains (Jackson et al. 2023), religiously engaged individuals are more open to AI moral consultation. This asymmetry appears to reflect how religious life cultivates habits of moral inquiry and consultation that extend to AI as a moral interlocutor without necessarily endorsing it. Practically, the data suggest that AI chatbot consultation complements rather than displaces traditional religious and interpersonal moral authority, a distinction with direct relevance for how religious communities and institutions respond to AI’s growing presence in moral life. Socially, as AI chatbots become more deeply woven into everyday life, their potential to influence individual moral identity and communal religious cohesion deserves serious consideration. This includes questions of responsibility for technology companies in how chatbots are designed, trained, and deployed in contexts where they intersect with moral and spiritual authority, and it calls for coordinated attention from faith communities, technology companies, and policymakers alike.
Limitations and Future Directions
Although the present findings offer initial insight into religiosity and AI moral advice seeking, several limitations qualify their interpretation and suggest clear avenues for future research.
First, all datasets were cross-sectional, precluding causal inference. Longitudinal or experimental designs are needed to establish whether religious engagement causes changes in AI moral advice seeking, or whether the relationship reflects a shared dispositional orientation.
Second, our U.S.-based Prolific samples limit generalizability. Religiosity–technology relationships may differ across religious traditions, cultural contexts, stages of AI adoption, and different relationships between religious institutions and technology.
Third, all measures were self-reported and may be influenced by social desirability. For example, religiously engaged participants may underreport seeking moral consultation from AI chatbots if they perceive such behavior as inconsistent with religious norms, a bias that would likely attenuate, rather than inflate, the observed effects.
Fourth, the referent of “AI chatbot” may vary across participants. Respondents may have had different AI systems in mind (e.g., ChatGPT, Claude, Gemini), which differ substantially in design, persona, and perceived moral positioning. Future research should assess chatbot-specific behaviors and evaluate whether the pathways identified here generalize across different AI systems.
Finally, the downstream consequences of AI moral advice seeking remain unclear. Although we make no claims that seeking moral advice from AI chatbots would lead to following moral advice from AI chatbots (see Landes, Francis, and Everett 2026), recent experimental evidence suggests that influence may not require explicit compliance. For example, LLM-based AI writing assistance can silently shift users’ views on social issues through biased sentence completion (Williams-Ceci et al. 2026). Taken together, these findings highlight an open and urgent question regarding the long-term moral consequences of AI chatbot consultation.
Conclusion
The present findings reframe a seeming paradox: religious individuals, often assumed to be the most resistant to AI intrusion on the sacred domain of morality, may in fact be among its most receptive audiences, not despite their religiosity, but because of it. The same dispositions that orient believers toward moral consultation appear to extend naturally to AI chatbots as a novel moral conversational partner. As AI systems become more deeply embedded in everyday moral life, understanding who turns to them, and why, is not merely a psychological question, but a question with consequences for religious communities, institutional authority, and the governance of technologies that are quietly becoming participants in how humans decide what is right.