If Eleanor Rigby Had Met ChatGPT: A Study on Loneliness in a Post-LLM World
- URL: http://arxiv.org/abs/2412.01617v1
- Date: Mon, 02 Dec 2024 15:39:00 GMT
- Title: If Eleanor Rigby Had Met ChatGPT: A Study on Loneliness in a Post-LLM World
- Authors: Adrian de Wynter
- Abstract summary: Loneliness significantly impacts a person's mental and physical well-being. Previous research suggests that large language models (LLMs) may help mitigate loneliness. We argue that the use of widespread LLMs like ChatGPT is more prevalent--and riskier, as they are not designed for this purpose.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Loneliness, or the lack of fulfilling relationships, significantly impacts a person's mental and physical well-being and is prevalent worldwide. Previous research suggests that large language models (LLMs) may help mitigate loneliness. However, we argue that the use of widespread LLMs like ChatGPT is more prevalent--and riskier, as they are not designed for this purpose. To explore this, we analysed user interactions with ChatGPT, particularly those outside of its marketed use as a task-oriented assistant. In dialogues classified as lonely, users frequently (37%) sought advice or validation, and received good engagement. However, ChatGPT failed in sensitive scenarios, like responding appropriately to suicidal ideation or trauma. We also observed a 35% higher incidence of toxic content, with women being 22 times more likely to be targeted than men. Our findings underscore ethical and legal questions about this technology, and note risks like radicalisation or further isolation. We conclude with recommendations for research and industry to address loneliness.
Related papers
- Investigating Affective Use and Emotional Well-being on ChatGPT [32.797983866308755]
We investigate the extent to which interactions with ChatGPT may impact users' emotional well-being, behaviors and experiences.
We analyze over 3 million conversations for affective cues and survey over 4,000 users on their perceptions of ChatGPT.
We conduct an Institutional Review Board (IRB)-approved randomized controlled trial (RCT) on close to 1,000 participants over 28 days.
arXiv Detail & Related papers (2025-04-04T19:22:10Z)
- 10 Questions to Fall in Love with ChatGPT: An Experimental Study on Interpersonal Closeness with Large Language Models (LLMs) [0.0]
This study explores how individuals experience closeness and romantic interest in dating profiles, depending on whether they believe the profiles are human- or AI-generated.
Surprisingly, perceived source (human or AI) had no significant impact on closeness or romantic interest.
arXiv Detail & Related papers (2025-03-24T13:00:36Z)
- AI Companions Reduce Loneliness [0.5699788926464752]
We focus on AI companions applications designed to provide consumers with synthetic interaction partners.
Study 1 and 2 find suggestive evidence that consumers use AI companions to alleviate loneliness.
Study 3 finds that AI companions alleviate loneliness on par with interacting with another person.
Study 4 uses a longitudinal design and finds that an AI companion consistently reduces loneliness over the course of a week.
arXiv Detail & Related papers (2024-07-09T15:04:08Z)
- Eagle: Ethical Dataset Given from Real Interactions [74.7319697510621]
We create datasets extracted from real interactions between ChatGPT and users that exhibit social biases, toxicity, and immoral problems.
Our experiments show that Eagle captures complementary aspects, not covered by existing datasets proposed for evaluation and mitigation of such ethical challenges.
arXiv Detail & Related papers (2024-02-22T03:46:02Z)
- Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate the toxicity in ChatGPT by utilizing instruction-tuning datasets.
Prompts in creative writing tasks can be 2x more likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z)
- Can LLMs Keep a Secret? Testing Privacy Implications of Language Models via Contextual Integrity Theory [82.7042006247124]
We show that even the most capable AI models, such as GPT-4 and ChatGPT, reveal private information in contexts that humans would not, 39% and 57% of the time, respectively.
Our work underscores the immediate need to explore novel inference-time privacy-preserving approaches, based on reasoning and theory of mind.
arXiv Detail & Related papers (2023-10-27T04:15:30Z)
- Primacy Effect of ChatGPT [69.49920102917598]
We study the primacy effect of ChatGPT: the tendency of selecting the labels at earlier positions as the answer.
We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions.
arXiv Detail & Related papers (2023-10-20T00:37:28Z)
- LonXplain: Lonesomeness as a Consequence of Mental Disturbance in Reddit Posts [0.41998444721319217]
Social media is a potential source of information from which latent mental states can be inferred through Natural Language Processing (NLP).
Existing literature on psychological theories points to loneliness as the major consequence of interpersonal risk factors.
We formulate lonesomeness detection in social media posts as an explainable binary classification problem.
arXiv Detail & Related papers (2023-05-30T04:21:24Z)
- Does ChatGPT have Theory of Mind? [2.3129337924262927]
Theory of Mind (ToM) is the ability to understand human thinking and decision-making.
This paper investigates to what extent recent Large Language Models in the ChatGPT tradition possess ToM.
arXiv Detail & Related papers (2023-05-23T12:55:21Z)
- ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
arXiv Detail & Related papers (2023-05-19T02:09:52Z)
- Towards Designing a ChatGPT Conversational Companion for Elderly People [0.0]
We propose a ChatGPT-based conversational companion system for elderly people.
The system is designed to provide companionship and help reduce feelings of loneliness and social isolation.
arXiv Detail & Related papers (2023-04-18T17:24:14Z)
- ChatGPT Needs SPADE (Sustainability, PrivAcy, Digital divide, and Ethics) Evaluation: A Review [14.370728657204463]
ChatGPT is another large language model (LLM) widely available to consumers on their devices.
This study focuses on the important aspects that are mostly overlooked, i.e. sustainability, privacy, digital divide, and ethics.
arXiv Detail & Related papers (2023-04-13T16:01:28Z)
- Toxicity in ChatGPT: Analyzing Persona-assigned Language Models [23.53559226972413]
Large language models (LLMs) have shown incredible capabilities and transcended the natural language processing (NLP) community.
We systematically evaluate toxicity in over half a million generations of ChatGPT, a popular dialogue-based LLM.
We find that assigning ChatGPT a persona via its system parameter significantly increases the toxicity of its generations.
arXiv Detail & Related papers (2023-04-11T16:53:54Z)
- Safety Analysis in the Era of Large Language Models: A Case Study of STPA using ChatGPT [11.27440170845105]
Using ChatGPT without human intervention may be inadequate due to reliability related issues, but with careful design, it may outperform human experts.
No statistically significant differences are found when varying the semantic complexity or using common prompt guidelines.
arXiv Detail & Related papers (2023-04-03T16:46:49Z)
- Consistency Analysis of ChatGPT [65.268245109828]
This paper investigates the trustworthiness of ChatGPT and GPT-4 regarding logically consistent behaviour.
Our findings suggest that while both models appear to show an enhanced language understanding and reasoning ability, they still frequently fall short of generating logically consistent predictions.
arXiv Detail & Related papers (2023-03-11T01:19:01Z)
- A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z)
- How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection [8.107721810172112]
ChatGPT is able to respond effectively to a wide range of human questions.
People are starting to worry about the potential negative impacts that large language models (LLMs) like ChatGPT could have on society.
In this work, we collected tens of thousands of comparison responses from both human experts and ChatGPT.
arXiv Detail & Related papers (2023-01-18T15:23:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.