ChatGPT and Vaccine Hesitancy: A Comparison of English, Spanish, and French Responses Using a Validated Scale
- URL: http://arxiv.org/abs/2407.09481v1
- Date: Mon, 6 May 2024 11:13:03 GMT
- Title: ChatGPT and Vaccine Hesitancy: A Comparison of English, Spanish, and French Responses Using a Validated Scale
- Authors: Saubhagya Joshi, Eunbin Ha, Yonaira Rivera, Vivek K. Singh
- Abstract summary: We use the Vaccine Hesitancy Scale to measure the hesitancy of ChatGPT responses in English, Spanish, and French.
Results have implications for researchers interested in evaluating and improving the quality and equity of health-related web information.
- Score: 1.5133368155322295
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: ChatGPT is a popular information system (over 1 billion visits in August 2023) that can generate natural language responses to user queries. It is important to study the quality and equity of its responses on health-related topics, such as vaccination, as they may influence public health decision-making. We use the Vaccine Hesitancy Scale (VHS) proposed by Shapiro et al. to measure the hesitancy of ChatGPT responses in English, Spanish, and French. We find that: (a) ChatGPT responses indicate less hesitancy than those reported for human respondents in past literature; (b) ChatGPT responses vary significantly across languages, with English responses being the most hesitant on average and Spanish being the least; (c) ChatGPT responses are largely consistent across different model parameters but show some variations across the scale factors (vaccine competency, risk). Results have implications for researchers interested in evaluating and improving the quality and equity of health-related web information.
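The abstract's core procedure, scoring free-text responses against a validated Likert-type hesitancy scale, can be sketched as follows. This is an illustrative sketch only: the item count, 1-5 range, and reverse-coding shown here are common conventions for such instruments, not the exact VHS items or scoring rules used in the paper.

```python
# Illustrative sketch: averaging Likert-type items into a hesitancy score.
# The specific items and coding of the VHS instrument are assumptions here.

def vhs_score(item_responses, reverse_coded=()):
    """Mean over 1-5 Likert items; reverse-coded items are flipped so that
    a higher score always indicates greater vaccine hesitancy."""
    scored = []
    for i, response in enumerate(item_responses):
        if not 1 <= response <= 5:
            raise ValueError(f"Likert responses must be 1-5, got {response}")
        # Flip reverse-coded items: 1 <-> 5, 2 <-> 4, 3 stays 3.
        scored.append(6 - response if i in reverse_coded else response)
    return sum(scored) / len(scored)

# Example: a 4-item response set where items 2 and 3 are reverse-coded.
print(vhs_score([1, 2, 5, 4], reverse_coded={2, 3}))  # (1 + 2 + 1 + 2) / 4 = 1.5
```

Per-language comparisons such as the English/Spanish/French contrast reported above would then reduce to comparing these mean scores across response sets.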
Related papers
- Large Language Models' Varying Accuracy in Recognizing Risk-Promoting and Health-Supporting Sentiments in Public Health Discourse: The Cases of HPV Vaccination and Heated Tobacco Products [2.0618817976970103]
Large Language Models (LLMs) have gained attention as a powerful technology, yet their accuracy and feasibility in capturing different opinions on health issues are largely unexplored. This research examines how accurately three prominent LLMs detect risk-promoting versus health-supporting sentiments. Specifically, models often show higher accuracy for risk-promoting sentiment on Facebook, whereas health-supporting messages on Twitter are more accurately detected.
arXiv Detail & Related papers (2025-07-06T11:57:02Z)
- Working with Large Language Models to Enhance Messaging Effectiveness for Vaccine Confidence [0.276240219662896]
Vaccine hesitancy and misinformation are significant barriers to achieving widespread vaccination coverage.
This paper explores the potential of ChatGPT-augmented messaging to promote confidence in vaccination uptake.
arXiv Detail & Related papers (2025-04-14T04:06:46Z)
- Accuracy of a Large Language Model in Distinguishing Anti- And Pro-vaccination Messages on Social Media: The Case of Human Papillomavirus Vaccination [1.8434042562191815]
This research assesses the accuracy of ChatGPT for sentiment analysis to discern different stances toward HPV vaccination.
Messages related to HPV vaccination were collected from social media platforms with different message formats: Facebook (long format) and Twitter (short format).
Accuracy was measured for each message as the level of concurrence between human and machine decisions, ranging between 0 and 1.
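The concurrence measure described above can be sketched in a few lines. This is a hypothetical reading of the metric: with one human label per message it reduces to 0/1 agreement, and with multiple coders it is the fraction of coders whose label matches the model's decision.

```python
# Illustrative sketch of per-message concurrence between human coders and a
# model's sentiment decision; the paper's exact aggregation is an assumption.

def concurrence(human_labels, machine_label):
    """Fraction of human labels matching the machine decision, in [0, 1]."""
    return sum(label == machine_label for label in human_labels) / len(human_labels)

# Three coders labelled a message; two said "anti", one said "pro".
print(concurrence(["anti", "anti", "pro"], "anti"))  # 2/3
```

Averaging this per-message value over a corpus yields an overall accuracy in the same 0-1 range.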
arXiv Detail & Related papers (2024-04-10T04:35:54Z)
- Enhancing Medical Support in the Arabic Language Through Personalized ChatGPT Assistance [1.174020933567308]
ChatGPT provides real-time, personalized medical diagnosis at no cost.
The study involved compiling a dataset of disease information and generating multiple messages for each disease.
ChatGPT's performance was assessed by measuring the similarity between its responses and the actual diseases.
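A response-to-reference similarity check like the one described above could be sketched with a simple token-overlap (Jaccard) measure. The paper does not specify its similarity metric, so this is an assumed stand-in for illustration only.

```python
# Hypothetical sketch: token-overlap (Jaccard) similarity between a model
# response and a reference disease description. The actual metric used in
# the cited study is not specified here.

def jaccard_similarity(text_a: str, text_b: str) -> float:
    """Similarity in [0, 1]: shared tokens over total distinct tokens."""
    tokens_a = set(text_a.lower().split())
    tokens_b = set(text_b.lower().split())
    union = tokens_a | tokens_b
    return len(tokens_a & tokens_b) / len(union) if union else 0.0

# Shared tokens {chronic, cough, fever} over 5 distinct tokens -> 0.6.
print(jaccard_similarity("chronic cough and fever", "fever with chronic cough"))
```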
arXiv Detail & Related papers (2024-03-21T21:28:07Z)
- Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate the toxicity in ChatGPT by utilizing instruction-tuning datasets.
Prompts in creative writing tasks can be twice as likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z)
- Performance of ChatGPT on USMLE: Unlocking the Potential of Large Language Models for AI-Assisted Medical Education [0.0]
This study determined how reliable ChatGPT can be for answering complex medical and clinical questions.
The paper evaluated the results using a two-way ANOVA and post-hoc analysis.
ChatGPT-generated answers were found to be more context-oriented than regular Google search results.
arXiv Detail & Related papers (2023-06-30T19:53:23Z)
- To ChatGPT, or not to ChatGPT: That is the question! [78.407861566006]
This study provides a comprehensive and contemporary assessment of the most recent techniques in ChatGPT detection.
We have curated a benchmark dataset consisting of prompts from ChatGPT and humans, including diverse questions from medical, open Q&A, and finance domains.
Our evaluation results demonstrate that none of the existing methods can effectively detect ChatGPT-generated content.
arXiv Detail & Related papers (2023-04-04T03:04:28Z)
- On the Robustness of ChatGPT: An Adversarial and Out-of-distribution Perspective [67.98821225810204]
We evaluate the robustness of ChatGPT from the adversarial and out-of-distribution perspective.
Results show consistent advantages on most adversarial and OOD classification and translation tasks.
ChatGPT shows astounding performance in understanding dialogue-related texts.
arXiv Detail & Related papers (2023-02-22T11:01:20Z)
- A Categorical Archive of ChatGPT Failures [47.64219291655723]
ChatGPT, developed by OpenAI, has been trained using massive amounts of data and simulates human conversation.
It has garnered significant attention due to its ability to effectively answer a broad range of human inquiries.
However, a comprehensive analysis of ChatGPT's failures is lacking, which is the focus of this study.
arXiv Detail & Related papers (2023-02-06T04:21:59Z)
- Doctors vs. Nurses: Understanding the Great Divide in Vaccine Hesitancy among Healthcare Workers [64.1526243118151]
We find that doctors are overall more positive toward the COVID-19 vaccines.
Doctors are more concerned with the effectiveness of the vaccines over newer variants.
Nurses pay more attention to the potential side effects on children.
arXiv Detail & Related papers (2022-09-11T14:22:16Z)
- Dynamics and triggers of misinformation on vaccines [0.552480439325792]
We analyze six years of the Italian vaccine debate across diverse social media platforms (Facebook, Instagram, Twitter, YouTube).
We first use the symbolic transfer entropy analysis of news production time-series to determine which category of sources, questionable or reliable, causally drives the agenda on vaccines.
We then leverage deep learning models capable of accurately classifying vaccine-related content based on the stance conveyed and the topic discussed.
arXiv Detail & Related papers (2022-07-25T15:35:48Z)
- Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both the textual content as well as contextual-information to assess the severity of the user's health state.
The diverse NLU views demonstrate effectiveness on both tasks, as well as on individual diseases, in assessing a user's health state.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)
- A Qualitative Evaluation of Language Models on Automatic Question-Answering for COVID-19 [4.676651062800037]
COVID-19 has caused more than 7.4 million cases and over 418,000 deaths.
Online communities, forums, and social media provide potential venues to search for relevant questions and answers.
We propose to apply a language model for automatically answering questions related to COVID-19 and qualitatively evaluate the generated responses.
arXiv Detail & Related papers (2020-06-19T05:13:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.