Question 3 — What Mediates the Relationship Between Religiosity and the Frequency of Seeking Moral Advice from AI Chatbots?

To identify mechanisms driving the relationship between religiosity and AI moral advice seeking, we examined several potential mediators (Tables 1 and 2). We first conducted regression analyses (a simple regression and a multiple regression model adjusted for demographic covariates) to establish which mediators were significantly predicted by Religious Behavior Score, and we restricted subsequent mediation analyses to mediators reaching significance in at least one path-a model. Finally, we estimated a parallel mediation model for each study using structural equation modeling; to minimize multicollinearity among mediators within the same cluster, that model included only the mediator accounting for the largest proportion of the total effect mediated from each significantly predicted cluster.
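As a schematic sketch of the path-a screening step (illustrative Python, not the analysis code; variable names, the covariate, and the simulated effect size are hypothetical), each candidate mediator is regressed on Religious Behavior Score once without and once with demographic covariates:

```python
import numpy as np

def ols(y, X):
    """OLS fit returning coefficients and their standard errors.

    X must already include an intercept column.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = X.shape[0] - X.shape[1]
    sigma2 = resid @ resid / dof                 # residual variance
    cov = sigma2 * np.linalg.inv(X.T @ X)        # coefficient covariance matrix
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(0)
n = 500
religiosity = rng.normal(size=n)   # Religious Behavior Score (standardized, simulated)
age = rng.normal(size=n)           # one illustrative demographic covariate
mediator = 0.25 * religiosity + rng.normal(size=n)  # a candidate mediator

intercept = np.ones(n)
# Model 1: simple regression (mediator ~ religiosity)
b1, se1 = ols(mediator, np.column_stack([intercept, religiosity]))
# Model 2: multiple regression adjusted for covariates
b2, se2 = ols(mediator, np.column_stack([intercept, religiosity, age]))
print(b1[1], se1[1], b2[1])  # path-a estimate (and SE) in each model
```

A mediator would be carried forward when the religiosity coefficient is significant in at least one of the two models.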

Table 1: Study 1: Religious Behavior Score Predicting Each Potential Mediator

| Potential Mediator | Model 1: b | SE | 95% CI | p | Model 2: b | SE | 95% CI | p |
|---|---|---|---|---|---|---|---|---|
| Overall Frequency of Seeking Moral Advice | 0.26 | 0.03 | [0.19, 0.32] | < .001 | 0.23 | 0.04 | [0.15, 0.31] | < .001 |
| Overall Interest in Seeking Moral Advice | 0.21 | 0.03 | [0.14, 0.27] | < .001 | 0.18 | 0.04 | [0.11, 0.26] | < .001 |
| General Open-Mindedness | 1.02 | 0.23 | [0.57, 1.46] | < .001 | 1.16 | 0.26 | [0.65, 1.66] | < .001 |
| Open-Mindedness on Moral Issues | 1.14 | 0.41 | [0.33, 1.96] | .006 | 1.48 | 0.46 | [0.57, 2.39] | .002 |
| Intellectual Humility (CIHS) | -0.21 | 0.28 | [-0.76, 0.33] | .439 | 0.32 | 0.33 | [-0.33, 0.96] | .337 |
| Intellectual Humility (MMIH) | -0.53 | 0.51 | [-1.54, 0.47] | .297 | 0.09 | 0.61 | [-1.11, 1.28] | .885 |
| Belief in Moral Objectivity | 3.82 | 0.77 | [2.30, 5.34] | < .001 | 2.79 | 0.93 | [0.97, 4.62] | .003 |
| Perceived Valence of AI Chatbots as Moral Advisors | 0.20 | 0.04 | [0.12, 0.27] | < .001 | 0.17 | 0.05 | [0.08, 0.27] | < .001 |
| Perceived Authority of AI Chatbots as Moral Advisors | 0.21 | 0.04 | [0.13, 0.30] | < .001 | 0.20 | 0.05 | [0.10, 0.30] | < .001 |

Note. Model 1 = simple regression; Model 2 = multiple regression adjusted for demographic covariates.

Table 2: Study 2: Religious Behavior Score Predicting Each Potential Mediator

| Potential Mediator | Model 1: b | SE | 95% CI | p | Model 2: b | SE | 95% CI | p |
|---|---|---|---|---|---|---|---|---|
| Overall Frequency of Seeking Moral Advice | 0.24 | 0.03 | [0.17, 0.30] | < .001 | 0.22 | 0.04 | [0.15, 0.29] | < .001 |
| Overall Interest in Seeking Moral Advice | 0.20 | 0.03 | [0.13, 0.27] | < .001 | 0.20 | 0.04 | [0.12, 0.27] | < .001 |
| Open-Mindedness on Moral Issues | 1.01 | 0.46 | [0.10, 1.92] | .030 | 0.93 | 0.52 | [-0.09, 1.96] | .074 |
| Belief in Moral Objectivity | 0.30 | 0.05 | [0.20, 0.40] | < .001 | 0.23 | 0.06 | [0.12, 0.34] | < .001 |
| Perceived Authority of AI Chatbots as Moral Advisors | 0.16 | 0.05 | [0.06, 0.25] | .001 | 0.07 | 0.05 | [-0.03, 0.18] | .178 |
| Tendency to Anthropomorphize AI Chatbots | 0.12 | 0.03 | [0.06, 0.18] | < .001 | 0.09 | 0.03 | [0.03, 0.16] | .004 |
| Fear of Negative Judgement | -0.35 | 0.33 | [-1.00, 0.30] | .290 | -0.04 | 0.36 | [-0.75, 0.67] | .913 |
| Self-Reflective Tendencies | 0.44 | 0.36 | [-0.26, 1.15] | .213 | 0.84 | 0.41 | [0.04, 1.63] | .040 |
| Deference to Authority (MFQ-2) | 1.77 | 0.15 | [1.48, 2.07] | < .001 | 1.00 | 0.15 | [0.71, 1.30] | < .001 |
| Deference to Authority (MAC-Q) | 13.40 | 2.12 | [9.24, 17.56] | < .001 | 6.40 | 2.32 | [1.83, 10.96] | .006 |

Note. Model 1 = simple regression; Model 2 = multiple regression adjusted for demographic covariates.

In Study 1, Religious Behavior Score significantly predicted most potential mediators. In both simple and multiple regression models, it predicted overall frequency of and interest in seeking moral advice from non-AI, non-religious sources (cluster a), general open-mindedness and open-mindedness on moral issues (cluster b), belief in moral objectivity (cluster c), and perceived valence and authority of AI chatbots as moral advisors (cluster d). Intellectual humility (both CIHS and MMIH; cluster b) was non-significant in both models and was therefore excluded from subsequent analyses. In Study 2, Religious Behavior Score similarly predicted overall frequency of and interest in seeking moral advice (cluster a), belief in moral objectivity (cluster c), and, in the simple model only, open-mindedness on moral issues (cluster b) and perceived authority of AI chatbots (cluster d). Among the newly added Study 2 measures, tendency to anthropomorphize AI chatbots (cluster e) and both measures of deference to authority (cluster f) were significantly predicted in both models, and self-reflective tendencies (cluster f) reached significance in the covariate-adjusted model. Fear of negative judgment (cluster f) was non-significant in both models and was excluded from subsequent mediation analyses.

Table 3: Study 1: Individual Mediation Analyses

| Mediator | Path a: X → M | Path b: M → Y (given X) | Indirect ab | 95% CI | Direct c′: X → Y (given M) | Proportion Mediated |
|---|---|---|---|---|---|---|
| Overall Frequency of Seeking Moral Advice | 0.26*** | 1.00*** | 0.26*** | [0.18, 0.33] | 0.06 | 81.23% |
| Overall Interest in Seeking Moral Advice | 0.21*** | 0.91*** | 0.19*** | [0.12, 0.26] | 0.13** | 58.85% |
| General Open-Mindedness | 1.02*** | 0.08*** | 0.08*** | [0.04, 0.12] | 0.24*** | 24.73% |
| Open-Mindedness on Moral Issues | 1.14** | 0.04*** | 0.04** | [0.01, 0.08] | 0.27*** | 13.79% |
| Belief in Moral Objectivity | 3.82*** | 0.00 | 0.00 | [-0.03, 0.03] | 0.31*** | 1.03% |
| Perceived Valence of AI Chatbots as Moral Advisors | 0.20*** | 0.57*** | 0.11*** | [0.07, 0.16] | 0.20*** | 35.22% |
| Perceived Authority of AI Chatbots as Moral Advisors | 0.21*** | 0.55*** | 0.12*** | [0.07, 0.17] | 0.20*** | 37.55% |

Note. * p < .05. ** p < .01. *** p < .001.

Table 4: Study 2: Individual Mediation Analyses

| Mediator | Path a: X → M | Path b: M → Y (given X) | Indirect ab | 95% CI | Direct c′: X → Y (given M) | Proportion Mediated |
|---|---|---|---|---|---|---|
| Overall Frequency of Seeking Moral Advice | 0.24*** | 0.99*** | 0.24*** | [0.16, 0.31] | 0.12* | 66.04% |
| Overall Interest in Seeking Moral Advice | 0.20*** | 0.82*** | 0.16*** | [0.11, 0.23] | 0.19*** | 46.05% |
| Open-Mindedness on Moral Issues | 1.01* | 0.04*** | 0.04* | [0.00, 0.09] | 0.31*** | 12.22% |
| Belief in Moral Objectivity | 0.30*** | 0.04 | 0.01 | [-0.02, 0.05] | 0.34*** | 3.72% |
| Perceived Authority of AI Chatbots as Moral Advisors | 0.16** | 0.71*** | 0.11*** | [0.04, 0.18] | 0.24*** | 31.55% |
| Tendency to Anthropomorphize AI Chatbots | 0.12*** | 0.78*** | 0.09*** | [0.05, 0.14] | 0.26*** | 26.03% |
| Self-Reflective Tendencies | 0.44 | 0.02** | 0.01 | [-0.01, 0.03] | 0.35*** | 3.09% |
| Deference to Authority (MFQ-2) | 1.77*** | 0.06** | 0.11*** | [0.04, 0.18] | 0.24*** | 31.76% |
| Deference to Authority (MAC-Q) | 13.40*** | 0.01*** | 0.08*** | [0.04, 0.13] | 0.28*** | 22.48% |

Note. * p < .05. ** p < .01. *** p < .001.

Across both studies, the relationship between Religious Behavior Score and frequency of seeking moral advice from AI chatbots was most strongly mediated by individuals’ overall tendency to seek moral advice from diverse non-AI, non-religious sources (cluster a). Perceived authority of AI chatbots as moral advisors (cluster d) served as a consistent secondary mediator in both studies, and open-mindedness on moral issues (cluster b) contributed a smaller but reliable indirect effect. Among the mediators introduced in Study 2, deference to authority (cluster f; both MFQ-2 and MAC-Q) and tendency to anthropomorphize AI chatbots (cluster e) also emerged as significant indirect pathways. Belief in moral objectivity (cluster c) and, in Study 2, self-reflective tendencies (cluster f) did not significantly mediate the relationship and were therefore not carried forward into the parallel mediation analyses (fear of negative judgment had already been excluded at the regression stage).
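For reference, the Proportion Mediated column in Tables 3 and 4 can be recovered (up to rounding) from the reported coefficients, since in a linear model the total effect decomposes as c = ab + c′. A quick check on the first row of Table 3 (the helper function is illustrative, not the authors' code):

```python
def proportion_mediated(ab, c_prime):
    """Share of the total effect carried by the indirect path: ab / (ab + c')."""
    return ab / (ab + c_prime)

# Table 3, first row: indirect ab = 0.26, direct c' = 0.06 -> reported 81.23%.
# The rounded inputs give ~81.25%; the small gap reflects rounding of ab and c'.
share = proportion_mediated(0.26, 0.06)
print(round(100 * share, 2))  # -> 81.25
```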

To examine the contribution of each mediator while accounting for their intercorrelations, we constructed a parallel mediation model for each study using the lavaan package in R (Rosseel 2012). For each study, we selected the mediator with the largest proportion of the total effect mediated from each cluster significantly predicted by Religious Behavior Score. The Study 1 model included three mediators: overall frequency of seeking moral advice from non-AI sources (cluster a), open-mindedness on moral issues (cluster b), and perceived authority of AI chatbots as moral advisors (cluster d). Study 2 retained these three and added tendency to anthropomorphize AI chatbots (cluster e) and deference to authority as measured by the MFQ-2 (cluster f). All mediators were allowed to intercorrelate, making both models saturated. Specific indirect effects were estimated using 5,000 bootstrap resamples.
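The estimation logic can be sketched as follows (a Python illustration on simulated data with two mediators, not the lavaan/R code actually used; effect sizes and names are hypothetical). Because the model is saturated, each path is estimable by OLS, and specific indirect effects a·b can be bootstrapped by refitting on resampled rows:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600
x = rng.normal(size=n)                      # Religious Behavior Score (simulated)
m1 = 0.37 * x + rng.normal(size=n)          # mediator 1, e.g. advice-seeking frequency
m2 = 0.26 * x + rng.normal(size=n)          # mediator 2, e.g. perceived AI authority
y = 0.60 * m1 + 0.25 * m2 + 0.03 * x + rng.normal(size=n)  # AI moral frequency

def fit(x, m1, m2, y):
    """Return the specific indirect effects (a1*b1, a2*b2) from OLS path fits."""
    X1 = np.column_stack([np.ones_like(x), x])
    a1 = np.linalg.lstsq(X1, m1, rcond=None)[0][1]     # path a for mediator 1
    a2 = np.linalg.lstsq(X1, m2, rcond=None)[0][1]     # path a for mediator 2
    Xy = np.column_stack([np.ones_like(x), m1, m2, x])
    b = np.linalg.lstsq(Xy, y, rcond=None)[0]          # paths b1, b2 and direct c'
    return a1 * b[1], a2 * b[2]

# Percentile bootstrap over resampled rows (the paper used 5,000 draws; fewer here).
draws = np.array([fit(x[i], m1[i], m2[i], y[i])
                  for i in (rng.integers(0, n, size=n) for _ in range(1000))])
ci = np.percentile(draws, [2.5, 97.5], axis=0)   # 95% CI for each indirect effect
print(ci)
```

lavaan's `se = "bootstrap"` option performs the analogous resampling for user-defined indirect-effect parameters.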

Figure 1: Study 1: Parallel Mediation Model
Table 5: Study 1: Full Parameter Estimates — Parallel Mediation

| Path | b | β | SE | 95% CI | p |
|---|---|---|---|---|---|
| Religious Behavior Score → Overall Frequency of Seeking Moral Advice | 0.256 | 0.369 | 0.035 | [0.186, 0.325] | < .001 |
| Religious Behavior Score → Open-Mindedness on Moral Issues | 1.141 | 0.146 | 0.410 | [0.342, 1.961] | .005 |
| Religious Behavior Score → Perceived Authority of AI Chatbots as Moral Advisors | 0.215 | 0.262 | 0.043 | [0.130, 0.299] | < .001 |
| Overall Frequency of Seeking Moral Advice → AI Moral Frequency | 0.882 | 0.600 | 0.064 | [0.756, 1.003] | < .001 |
| Open-Mindedness on Moral Issues → AI Moral Frequency | -0.002 | -0.016 | 0.005 | [-0.013, 0.009] | .702 |
| Perceived Authority of AI Chatbots as Moral Advisors → AI Moral Frequency | 0.310 | 0.249 | 0.057 | [0.197, 0.421] | < .001 |
| Religious Behavior Score → AI Moral Frequency (direct) | 0.026 | 0.025 | 0.040 | [-0.054, 0.103] | .528 |
| Overall Frequency of Seeking Moral Advice ~~ Open-Mindedness on Moral Issues | 9.059 | 0.461 | 1.101 | [6.833, 11.161] | < .001 |
| Overall Frequency of Seeking Moral Advice ~~ Perceived Authority of AI Chatbots as Moral Advisors | 0.691 | 0.342 | 0.117 | [0.460, 0.918] | < .001 |
| Open-Mindedness on Moral Issues ~~ Perceived Authority of AI Chatbots as Moral Advisors | 4.683 | 0.194 | 1.325 | [2.013, 7.274] | < .001 |
| Indirect via Overall Frequency of Seeking Moral Advice | 0.226 | 0.221 | 0.037 | [0.157, 0.302] | < .001 |
| Indirect via Open-Mindedness on Moral Issues | -0.002 | -0.002 | 0.007 | [-0.016, 0.010] | .720 |
| Indirect via Perceived Authority of AI Chatbots as Moral Advisors | 0.067 | 0.065 | 0.019 | [0.034, 0.108] | < .001 |
| Total indirect effect | 0.290 | 0.284 | 0.041 | [0.210, 0.373] | < .001 |

Note. ~~ denotes a residual covariance between mediators (lavaan notation).

Figure 2: Study 2: Parallel Mediation Model
Table 6: Study 2: Full Parameter Estimates — Parallel Mediation

| Path | b | β | SE | 95% CI | p |
|---|---|---|---|---|---|
| Religious Behavior Score → Overall Frequency of Seeking Moral Advice | 0.239 | 0.367 | 0.034 | [0.172, 0.305] | < .001 |
| Religious Behavior Score → Open-Mindedness on Moral Issues | 1.008 | 0.117 | 0.465 | [0.100, 1.919] | .030 |
| Religious Behavior Score → Perceived Authority of AI Chatbots as Moral Advisors | 0.158 | 0.175 | 0.048 | [0.062, 0.250] | < .001 |
| Religious Behavior Score → Tendency to Anthropomorphize AI Chatbots | 0.120 | 0.223 | 0.030 | [0.062, 0.177] | < .001 |
| Religious Behavior Score → Deference to Authority (MFQ-2) | 1.775 | 0.535 | 0.143 | [1.496, 2.053] | < .001 |
| Overall Frequency of Seeking Moral Advice → AI Moral Frequency | 0.679 | 0.409 | 0.081 | [0.520, 0.841] | < .001 |
| Open-Mindedness on Moral Issues → AI Moral Frequency | 0.004 | 0.032 | 0.006 | [-0.007, 0.015] | .474 |
| Perceived Authority of AI Chatbots as Moral Advisors → AI Moral Frequency | 0.512 | 0.429 | 0.060 | [0.396, 0.629] | < .001 |
| Tendency to Anthropomorphize AI Chatbots → AI Moral Frequency | 0.086 | 0.043 | 0.092 | [-0.093, 0.269] | .349 |
| Deference to Authority (MFQ-2) → AI Moral Frequency | -0.004 | -0.011 | 0.014 | [-0.032, 0.025] | .798 |
| Religious Behavior Score → AI Moral Frequency (direct) | 0.106 | 0.098 | 0.050 | [0.009, 0.205] | .032 |
| Overall Frequency of Seeking Moral Advice ~~ Open-Mindedness on Moral Issues | 9.482 | 0.462 | 1.130 | [7.261, 11.710] | < .001 |
| Overall Frequency of Seeking Moral Advice ~~ Perceived Authority of AI Chatbots as Moral Advisors | 0.756 | 0.355 | 0.121 | [0.517, 0.990] | < .001 |
| Overall Frequency of Seeking Moral Advice ~~ Tendency to Anthropomorphize AI Chatbots | 0.312 | 0.249 | 0.075 | [0.167, 0.461] | < .001 |
| Overall Frequency of Seeking Moral Advice ~~ Deference to Authority (MFQ-2) | 1.292 | 0.193 | 0.368 | [0.587, 2.021] | < .001 |
| Open-Mindedness on Moral Issues ~~ Perceived Authority of AI Chatbots as Moral Advisors | 9.002 | 0.298 | 1.662 | [5.764, 12.282] | < .001 |
| Open-Mindedness on Moral Issues ~~ Tendency to Anthropomorphize AI Chatbots | 5.064 | 0.285 | 1.034 | [3.122, 7.094] | < .001 |
| Open-Mindedness on Moral Issues ~~ Deference to Authority (MFQ-2) | 12.305 | 0.129 | 5.494 | [1.433, 23.014] | .025 |
| Perceived Authority of AI Chatbots as Moral Advisors ~~ Tendency to Anthropomorphize AI Chatbots | 1.017 | 0.552 | 0.104 | [0.810, 1.216] | < .001 |
| Perceived Authority of AI Chatbots as Moral Advisors ~~ Deference to Authority (MFQ-2) | 2.058 | 0.208 | 0.564 | [0.932, 3.173] | < .001 |
| Tendency to Anthropomorphize AI Chatbots ~~ Deference to Authority (MFQ-2) | 1.398 | 0.241 | 0.345 | [0.730, 2.080] | < .001 |
| Indirect via Overall Frequency of Seeking Moral Advice | 0.162 | 0.150 | 0.031 | [0.104, 0.226] | < .001 |
| Indirect via Open-Mindedness on Moral Issues | 0.004 | 0.004 | 0.006 | [-0.008, 0.018] | .526 |
| Indirect via Perceived Authority of AI Chatbots as Moral Advisors | 0.081 | 0.075 | 0.026 | [0.031, 0.133] | .002 |
| Indirect via Tendency to Anthropomorphize AI Chatbots | 0.010 | 0.010 | 0.012 | [-0.011, 0.036] | .381 |
| Indirect via Deference to Authority (MFQ-2) | -0.006 | -0.006 | 0.025 | [-0.055, 0.044] | .798 |
| Total indirect effect | 0.251 | 0.232 | 0.052 | [0.152, 0.354] | < .001 |

Note. ~~ denotes a residual covariance between mediators (lavaan notation).

Overall, in Study 1, the parallel mediation model explained 54.6% of the variance in frequency of seeking AI moral advice (R² = .546). The total indirect effect was b = 0.290, 95% CI [0.210, 0.373], with a nonsignificant residual direct effect of b = 0.026 (p = .528). In Study 2, the model explained 59.3% of the variance (R² = .593), with a total indirect effect of b = 0.251, 95% CI [0.152, 0.354], and a significant direct effect of b = 0.106 (p = .032).
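As a consistency check on these figures (an illustration using the rounded values reported above, not additional analysis), the implied total effect in each model is the direct effect plus the total indirect effect:

```python
# Study 1: direct c' = 0.026, total indirect = 0.290 -> implied total effect
total_s1 = 0.026 + 0.290
# Study 2: direct c' = 0.106, total indirect = 0.251
total_s2 = 0.106 + 0.251
# Share of the total effect that is indirect (Study 1): ~92%
share_s1 = 0.290 / total_s1
print(round(total_s1, 3), round(total_s2, 3))  # -> 0.316 0.357
```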

The parallel mediation models revealed a consistent two-pathway pattern across both studies. The indirect effect via the general disposition to seek moral advice from non-AI, non-religious sources (cluster a) was the dominant pathway in both studies, followed by a significant indirect effect via perceived authority of AI chatbots as moral advisors (cluster d). In contrast, the indirect effect via open-mindedness on moral issues (cluster b) was not significant in either study. In Study 2, neither the indirect effect via tendency to anthropomorphize AI chatbots (cluster e) nor that via deference to authority (cluster f) reached significance.

References

Rosseel, Yves. 2012. “Lavaan: An R Package for Structural Equation Modeling.” Journal of Statistical Software 48 (2): 1–36. https://doi.org/10.18637/jss.v048.i02.