When Users Open Up To AI: Exploring Online Privacy Concerns In The Context Of AI-chatbots
(2025) SYSK16 20251, Department of Informatics
- Abstract
- The rapidly increasing use of AI-chatbots raises ever-evolving privacy concerns among end users, who must evaluate where the line is drawn between personalization and privacy intrusion. The conducted literature review highlights the complex landscape of privacy theories, privacy concerns and human-AI interactions. Using this as a guide, a qualitative study was conducted with 10 end users of AI-chatbots in order to further our understanding of end users’ perceptions of information privacy when interacting with AI-chatbots. The study shows that many users feel uninformed about how their personal information is collected and used by the providers of these AI services. Users largely accept that their conversations with AI-chatbots are collected and used, seeing this as the trade-off for having access to these tools. At the same time, users want more information about, and more control over, which data is collected and how it may be used. These findings suggest a growing need for transparent data practices and user-centric design approaches to ensure that personalization does not come at the cost of user trust or autonomy.
Please use this URL to cite or link to this publication:
http://lup.lub.lu.se/student-papers/record/9199724
- author
- Wollter, Josef LU; Skog, Nils LU and Stensson, David LU
- supervisor
- organization
- course
- SYSK16 20251
- year
- 2025
- type
- M2 - Bachelor Degree
- subject
- keywords
- AI-chatbots, Privacy Concerns, Information Privacy, IUIPC, Privacy Paradox
- language
- English
- id
- 9199724
- date added to LUP
- 2025-06-16 11:40:14
- date last changed
- 2025-06-16 11:40:14
@misc{9199724,
  abstract = {{The rapidly increasing use of AI-chatbots raises ever-evolving privacy concerns among end users, who must evaluate where the line is drawn between personalization and privacy intrusion. The conducted literature review highlights the complex landscape of privacy theories, privacy concerns and human-AI interactions. Using this as a guide, a qualitative study was conducted with 10 end users of AI-chatbots in order to further our understanding of end users’ perceptions of information privacy when interacting with AI-chatbots. The study shows that many users feel uninformed about how their personal information is collected and used by the providers of these AI services. Users largely accept that their conversations with AI-chatbots are collected and used, seeing this as the trade-off for having access to these tools. At the same time, users want more information about, and more control over, which data is collected and how it may be used. These findings suggest a growing need for transparent data practices and user-centric design approaches to ensure that personalization does not come at the cost of user trust or autonomy.}},
  author = {{Wollter, Josef and Skog, Nils and Stensson, David}},
  language = {{eng}},
  note = {{Student Paper}},
  title = {{When Users Open Up To AI: Exploring Online Privacy Concerns In The Context Of AI-chatbots}},
  year = {{2025}},
}