Public sentiment analysis and topic modeling regarding ChatGPT in mental
health on Reddit: Negative sentiments increase over time
- URL: http://arxiv.org/abs/2311.15800v1
- Date: Mon, 27 Nov 2023 13:23:11 GMT
- Title: Public sentiment analysis and topic modeling regarding ChatGPT in mental
health on Reddit: Negative sentiments increase over time
- Authors: Yunna Cai, Fan Wang, Haowei Wang, Qianwen Qian
- Abstract summary: Researchers used the bert-base-multilingual-uncased-sentiment model for sentiment analysis and the BERTopic model for topic modeling.
It was found that overall, negative sentiments prevail, followed by positive ones, with neutral sentiments being the least common.
- Score: 9.874529201649192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In order to uncover users' attitudes towards ChatGPT in mental health, this
study examines public opinions about ChatGPT in mental health discussions on
Reddit. Researchers used the bert-base-multilingual-uncased-sentiment model
for sentiment analysis and the BERTopic model for topic modeling. It
was found that overall, negative sentiments prevail, followed by positive ones,
with neutral sentiments being the least common. The prevalence of negative
emotions has increased over time. Negative emotions encompass discussions on
ChatGPT providing bad mental health advice, debates on machine vs. human value,
the fear of AI, and concerns about Universal Basic Income (UBI). In contrast,
positive emotions highlight ChatGPT's effectiveness in counseling, with
mentions of keywords like "time" and "wallet." Neutral discussions center
around private data concerns. These findings shed light on public attitudes
toward ChatGPT in mental health, potentially contributing to the development of
trustworthy AI in mental health from the public perspective.
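The analysis pipeline the abstract describes (five-class multilingual BERT sentiment scoring followed by BERTopic topic modeling) can be reproduced in outline with off-the-shelf libraries. The sketch below is an illustration under stated assumptions, not the authors' code: it assumes the Hugging Face checkpoint path nlptown/bert-base-multilingual-uncased-sentiment, default BERTopic settings, and a simple rule that collapses the model's 1-5 star output into negative/neutral/positive buckets.
```python
# Illustrative sketch only: the paper names the sentiment model and BERTopic,
# but the checkpoint path, star-to-polarity bucketing, and BERTopic settings
# below are assumptions, and the posts are placeholder examples.
from transformers import pipeline
from bertopic import BERTopic

# Placeholder Reddit posts/comments about ChatGPT in mental health
posts = [
    "ChatGPT gave me surprisingly decent advice when I couldn't afford therapy.",
    "I'm scared an AI chatbot will give someone in crisis dangerously bad advice.",
    "Not sure how I feel about sharing private mental health data with ChatGPT.",
]

# Sentiment analysis: this model outputs labels "1 star" ... "5 stars"
sentiment = pipeline(
    "sentiment-analysis",
    model="nlptown/bert-base-multilingual-uncased-sentiment",  # assumed Hub path
)
for post, result in zip(posts, sentiment(posts)):
    stars = int(result["label"].split()[0])
    polarity = "negative" if stars <= 2 else "neutral" if stars == 3 else "positive"
    print(polarity, round(result["score"], 3), post[:50])

# Topic modeling: BERTopic with default embeddings. A real run needs a large
# corpus; the guard below skips fitting on the tiny placeholder list.
if len(posts) >= 20:
    topic_model = BERTopic()
    topics, _ = topic_model.fit_transform(posts)
    print(topic_model.get_topic_info())
```
In a real replication, the posts list would be replaced by the scraped Reddit corpus, and the per-post polarity labels and BERTopic topic assignments would then be aggregated by posting date to track the sentiment trend over time.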
Related papers
- Primacy Effect of ChatGPT [69.49920102917598]
We study the primacy effect of ChatGPT: the tendency to select labels at earlier positions as the answer.
We hope that our experiments and analyses provide additional insights into building more reliable ChatGPT-based solutions.
arXiv Detail & Related papers (2023-10-20T00:37:28Z)
- Exploring ChatGPT's Empathic Abilities [0.138120109831448]
This study investigates the extent to which ChatGPT based on GPT-3.5 can exhibit empathetic responses and emotional expressions.
In 91.7% of the cases, ChatGPT was able to correctly identify emotions and produce appropriate answers.
In conversations, ChatGPT reacted with a parallel emotion in 70.7% of cases.
arXiv Detail & Related papers (2023-08-07T12:23:07Z)
- Public Attitudes Toward ChatGPT on Twitter: Sentiments, Topics, and Occupations [1.6466986427682635]
We investigated public attitudes towards ChatGPT by applying natural language processing techniques such as sentiment analysis.
Our sentiment analysis result indicates that the overall sentiment was largely neutral to positive, and negative sentiments were decreasing over time.
Our topic model reveals that the most popular topics discussed were Education, Bard, Search Engines, OpenAI, Marketing, and Cybersecurity.
arXiv Detail & Related papers (2023-06-22T15:10:18Z)
- Does ChatGPT have Theory of Mind? [2.3129337924262927]
Theory of Mind (ToM) is the ability to understand human thinking and decision-making.
This paper investigates to what extent recent Large Language Models in the ChatGPT tradition possess ToM.
arXiv Detail & Related papers (2023-05-23T12:55:21Z)
- ChatGPT for Us: Preserving Data Privacy in ChatGPT via Dialogue Text Ambiguation to Expand Mental Health Care Delivery [52.73936514734762]
ChatGPT has gained popularity for its ability to generate human-like dialogue.
Data-sensitive domains face challenges in using ChatGPT due to privacy and data-ownership concerns.
We propose a text ambiguation framework that preserves user privacy.
arXiv Detail & Related papers (2023-05-19T02:09:52Z)
- To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z)
- On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective [67.98821225810204]
We evaluate the robustness of ChatGPT from the adversarial and out-of-distribution perspective.
Results show consistent advantages on most adversarial and OOD classification and translation tasks.
ChatGPT shows astounding performance in understanding dialogue-related texts.
arXiv Detail & Related papers (2023-02-22T11:01:20Z)
- ChatGPT: A Meta-Analysis after 2.5 Months [16.62394237011141]
We analyze over 300,000 tweets and more than 150 scientific papers to investigate how ChatGPT is perceived and discussed.
Our findings show that ChatGPT is generally viewed as of high quality, with positive sentiment and emotions of joy dominating in social media.
arXiv Detail & Related papers (2023-02-20T15:43:22Z)
- Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT [103.57103957631067]
ChatGPT has attracted great attention, as it can generate fluent and high-quality responses to human inquiries.
We evaluate ChatGPT's understanding ability by evaluating it on the most popular GLUE benchmark, and comparing it with 4 representative fine-tuned BERT-style models.
We find that: 1) ChatGPT falls short in handling paraphrase and similarity tasks; 2) ChatGPT outperforms all BERT models on inference tasks by a large margin; 3) ChatGPT achieves performance comparable to BERT on sentiment analysis and question answering tasks.
arXiv Detail & Related papers (2023-02-19T12:29:33Z)
- A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z)
- The Effect of Moderation on Online Mental Health Conversations [17.839146423209474]
The presence of a moderator increased user engagement, encouraged users to discuss negative emotions more candidly, and dramatically reduced bad behavior among chat participants.
Our findings suggest that moderation can serve as a valuable tool to improve the efficacy and safety of online mental health conversations.
arXiv Detail & Related papers (2020-05-19T05:40:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.