Ask ChatGPT: Caveats and Mitigations for Individual Users of AI Chatbots
- URL: http://arxiv.org/abs/2508.10272v1
- Date: Thu, 14 Aug 2025 01:40:13 GMT
- Title: Ask ChatGPT: Caveats and Mitigations for Individual Users of AI Chatbots
- Authors: Chengen Wang, Murat Kantarcioglu
- Abstract summary: ChatGPT and other Large Language Model (LLM)-based AI chatbots are becoming increasingly integrated into individuals' daily lives. What concerns and risks do these systems pose for individual users? What potential harms might they cause, and how can these be mitigated?
- Score: 10.977907906989342
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As ChatGPT and other Large Language Model (LLM)-based AI chatbots become increasingly integrated into individuals' daily lives, important research questions arise. What concerns and risks do these systems pose for individual users? What potential harms might they cause, and how can these be mitigated? In this work, we review recent literature and reports, and conduct a comprehensive investigation into these questions. We begin by explaining how LLM-based AI chatbots work, providing essential background to help readers understand chatbots' inherent limitations. We then identify a range of risks associated with individual use of these chatbots, including hallucinations, intrinsic biases, sycophantic behavior, cognitive decline from overreliance, social isolation, and privacy leakage. Finally, we propose several key mitigation strategies to address these concerns. Our goal is to raise awareness of the potential downsides of AI chatbot use, and to empower users to enhance, rather than diminish, human intelligence, to enrich, rather than compromise, daily life.
Related papers
- Exploring User Security and Privacy Attitudes and Concerns Toward the Use of General-Purpose LLM Chatbots for Mental Health [5.3052849646510225]
Large language model (LLM)-enabled conversational agents for emotional support are increasingly being used by individuals. Little empirical research measures users' privacy and security concerns, attitudes, and expectations. We identify critical misconceptions and a general lack of risk awareness. We propose recommendations to safeguard user mental health disclosures.
arXiv Detail & Related papers (2025-07-14T18:10:21Z) - Artificial Empathy: AI based Mental Health [0.0]
Many people suffer from mental health problems, but not everyone seeks professional help or has access to mental health care. AI chatbots have increasingly become a go-to for individuals who either have mental disorders or simply want someone to talk to.
arXiv Detail & Related papers (2025-05-30T02:36:56Z) - A Complete Survey on LLM-based AI Chatbots [46.18523139094807]
The past few decades have witnessed an upsurge in data, forming the foundation for data-hungry, learning-based AI technology.
Conversational agents, often referred to as AI chatbots, rely heavily on such data to train large language models (LLMs) and generate new content (knowledge) in response to user prompts.
This paper presents a complete survey of the evolution and deployment of LLM-based chatbots in various sectors.
arXiv Detail & Related papers (2024-06-17T09:39:34Z) - Quriosity: Analyzing Human Questioning Behavior and Causal Inquiry through Curiosity-Driven Queries [92.1651731484397]
We present Quriosity, a collection of 13.5K naturally occurring questions from three diverse sources. Our analysis reveals a significant presence of causal questions (up to 42%) in the dataset.
arXiv Detail & Related papers (2024-05-30T17:55:28Z) - Critical Role of Artificially Intelligent Conversational Chatbot [0.0]
We explore scenarios involving ChatGPT's ethical implications within academic contexts.
We propose architectural solutions aimed at preventing inappropriate use and promoting responsible AI interactions.
arXiv Detail & Related papers (2023-10-31T14:08:07Z) - Evaluating Chatbots to Promote Users' Trust -- Practices and Open
Problems [11.427175278545517]
This paper reviews current practices for testing chatbots.
It identifies gaps as open problems in pursuit of user trust.
It outlines a path forward to mitigate issues of trust related to service or product performance, user satisfaction and long-term unintended consequences for society.
arXiv Detail & Related papers (2023-09-09T22:40:30Z) - A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z) - A Deep Learning Approach to Integrate Human-Level Understanding in a Chatbot [0.4632366780742501]
Unlike humans, chatbots can serve multiple customers at a time, are available 24/7, and reply within a fraction of a second.
We performed sentiment analysis, emotion detection, intent classification and named-entity recognition using deep learning to develop chatbots with humanistic understanding and intelligence.
arXiv Detail & Related papers (2021-12-31T22:26:41Z) - CheerBots: Chatbots toward Empathy and Emotion using Reinforcement Learning [60.348822346249854]
This study presents a framework in which several empathetic chatbots understand users' implied feelings and reply empathetically over multiple dialogue turns.
We call these chatbots CheerBots. CheerBots can be retrieval-based or generative-based and were fine-tuned by deep reinforcement learning.
To respond empathetically, we develop a simulated agent, the Conceptual Human Model, which aids CheerBots during training by anticipating future changes in the user's emotional state in order to arouse sympathy.
arXiv Detail & Related papers (2021-10-08T07:44:47Z) - Put Chatbot into Its Interlocutor's Shoes: New Framework to Learn Chatbot Responding with Intention [55.77218465471519]
This paper proposes an innovative framework to train chatbots to possess human-like intentions.
Our framework includes a guiding robot and an interlocutor model that plays the role of a human.
We examined our framework using three experimental setups and evaluated the guiding robot with four different metrics to demonstrate its flexibility and performance advantages.
arXiv Detail & Related papers (2021-03-30T15:24:37Z) - CASS: Towards Building a Social-Support Chatbot for Online Health Community [67.45813419121603]
The CASS architecture is based on advanced neural network algorithms.
It can handle new inputs from users and generate a variety of responses to them.
A follow-up field experiment shows that CASS is useful in supporting individual members who seek emotional support.
arXiv Detail & Related papers (2021-01-04T05:52:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.