LUP Student Papers

LUND UNIVERSITY LIBRARIES

The effect of AI recommender systems on consumer trust and purchase intention under varying levels of risk

Hollmann, Erik LU and Ageman, Frida LU (2025) BUSN39 20251
Department of Business Administration
Abstract
Purpose: This thesis investigates how the source of product recommendations (specifically, whether they are delivered by a human expert or an AI system) affects consumer trust and subsequent purchase intentions during the evaluation stage of the online consumer journey. The research further examines how these effects are moderated by the level of perceived risk.

Main Research Question: How does the use of AI versus human recommendation sources influence consumer trust and purchase intention in e-commerce, and to what extent does perceived risk moderate these relationships?

Methodology: The study employs a 2 × 2 between-subjects factorial experimental design, manipulating both the source of the recommendation (human versus AI) and the perceived risk associated with the purchase decision (low versus high). A sample of 274 consumers was randomly assigned to one of four experimental conditions, where participants evaluated wireless headphone purchase scenarios based on recommendations from either a human or AI in varying risk contexts. Trust was measured multidimensionally (ability, benevolence, integrity) alongside perceived risk (manipulation check) and purchase intention, using validated 7-point Likert scales. Data were analyzed using partial least squares structural equation modeling (PLS-SEM), allowing robust examination of direct, mediating, and moderating effects.
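The 2 × 2 design and the moderation and mediation logic described above can be sketched on simulated data. Note that the thesis used PLS-SEM; the plain OLS regressions below are a simpler stand-in, and every variable, coefficient, and the random data-generating process are illustrative assumptions, not the study's data or results.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 274  # sample size reported in the abstract

# 2 x 2 between-subjects factors, dummy-coded (assumed coding)
source = rng.integers(0, 2, n).astype(float)  # 0 = human, 1 = AI
risk = rng.integers(0, 2, n).astype(float)    # 0 = low risk, 1 = high risk

# Simulated outcomes whose effect *directions* (not magnitudes) follow the
# reported pattern: an AI source and high risk lower trust, their interaction
# widens the gap, and trust carries the source effect onto purchase intention.
trust = 5.0 - 0.9 * source - 0.5 * risk - 0.4 * source * risk \
    + rng.normal(0, 1, n)
intent = 2.0 + 0.8 * trust + rng.normal(0, 1, n)

def ols(y, X):
    """Ordinary least squares coefficients via numpy's least-squares solver."""
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs

ones = np.ones(n)
# a-path: do recommendation source, risk, and their interaction predict trust?
a = ols(trust, np.column_stack([ones, source, risk, source * risk]))
# b-path / direct path: with trust in the model, the direct source effect on
# purchase intention should shrink toward zero (full mediation pattern).
b = ols(intent, np.column_stack([ones, source, risk, trust]))

print("trust model  [const, source, risk, source*risk]:", a)
print("intent model [const, source, risk, trust]:", b)
```

Run on this simulated sample, the source and risk coefficients in the trust model come out negative, the trust coefficient in the intention model comes out positive, and the direct source coefficient sits near zero, mirroring the mediation pattern the abstract reports.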

Findings / Conclusion: Results demonstrate that trust in AI recommendations is significantly and consistently lower than trust in human recommendations, with the trust gap widening in high-risk scenarios. Specifically, relabeling a recommendation from human to AI reduced trust by two-thirds of a standard deviation (β = -.673), nearly twice the impact of moving from low to high perceived risk (β = -.369). Trust fully mediated the effect of recommendation source on purchase intention, with the direct path from AI source to purchase intention rendered nonsignificant (β = -.028) when trust was included in the model. These findings reveal a structural trust deficit for AI recommenders, particularly in high-stakes scenarios, that current technical advances in personalization and efficiency do not overcome.

Theoretical and Managerial Contributions: Theoretically, this thesis extends the S-O-R framework by showing that perceived risk not only moderates the effect of recommender source on trust but also intensifies the mediation of trust on purchase intention. The integration of signaling theory and prospect theory provides new insight into why AI systems struggle to convey integrity, a critical driver of behavioral response, despite advances in technical ability. Managerially, the findings caution firms against over-reliance on AI recommenders for conversion, particularly in risk-sensitive scenarios, and suggest that a hybrid approach, leveraging AI personalization while embedding visible human validation at key stages, can help restore the trust necessary to drive purchase decisions. Addressing the integrity gap of AI systems emerges as a commercial imperative for realizing the full value of automated recommendation technologies.
author: Hollmann, Erik LU and Ageman, Frida LU
course: BUSN39 20251
year: 2025
type: H1 - Master's Degree (One Year)
keywords: AI Recommender Systems, Consumer Trust, Perceived Risk, Purchase Intention, S-O-R Framework, Signaling Theory, Prospect Theory, Experimental Survey, E-commerce
language: English
id: 9205608
date added to LUP: 2025-06-30 12:11:06
date last changed: 2025-06-30 12:11:06
@misc{9205608,
  author       = {{Hollmann, Erik and Ageman, Frida}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{The effect of AI recommender systems on consumer trust and purchase intention under varying levels of risk}},
  year         = {{2025}},
}