In God We Trust, In AI We Ask
Religiosity and Moral Advice Seeking in the Age of Generative AI
Abstract
As AI chatbots increasingly enter domains long considered uniquely human, such as moral guidance, questions arise about how they intersect with traditional frameworks like religion. While it is commonly assumed that religious individuals resist AI intrusion into the moral sphere, we found the opposite. Across two pre-registered studies with a stratified U.S. sample (N = 695), both self-reported religious engagement and religious belief consistently predicted greater openness to seeking moral advice from AI systems. Parallel mediation models indicated that this relationship was mediated primarily by a broader disposition to seek moral guidance from multiple sources, and secondarily by the perceived authority of AI systems as moral advisors. Rather than shielding individuals from AI's appeal in the moral domain, religiosity may systematically facilitate it. These findings carry broad societal implications for AI-mediated moral guidance, a new challenge requiring coordinated attention from technologists, faith communities, and policymakers alike.
Quick Links
- Introduction — Literature review and study overview
- Q1: Self-Reported Religiosity — Does self-reported religiosity predict AI moral advice seeking?
- Q2: Religious Behavior Score — Does Religious Behavior Score predict AI moral advice frequency?
- Q3: Mediators — What mediates the relationship between religiosity and the frequency of seeking moral advice from AI chatbots?
- Q4: Moderation — Does access to sources of moral advice moderate the relationship between religiosity and seeking moral advice from AI chatbots?
- Discussion — General discussion, implications, and limitations
- Supplementary — Additional analyses