Leveraging Prompt-Based Large Language Models: Predicting Pandemic
Health Decisions and Outcomes Through Social Media Language
- URL: http://arxiv.org/abs/2403.00994v1
- Date: Fri, 1 Mar 2024 21:29:32 GMT
- Title: Leveraging Prompt-Based Large Language Models: Predicting Pandemic
Health Decisions and Outcomes Through Social Media Language
- Authors: Xiaohan Ding, Buse Carik, Uma Sushmitha Gunturi, Valerie Reyna, and
Eugenia H. Rho
- Abstract summary: We use prompt-based LLMs to examine the relationship between social media language patterns and trends in national health outcomes.
Our work is the first to empirically link social media linguistic patterns to real-world public health trends.
- Score: 6.3576870613251675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a multi-step reasoning framework using prompt-based LLMs to
examine the relationship between social media language patterns and trends in
national health outcomes. Grounded in fuzzy-trace theory, which emphasizes the
importance of gists of causal coherence in effective health communication, we
introduce Role-Based Incremental Coaching (RBIC), a prompt-based LLM framework,
to identify gists at-scale. Using RBIC, we systematically extract gists from
subreddit discussions opposing COVID-19 health measures (Study 1). We then
track how these gists evolve across key events (Study 2) and assess their
influence on online engagement (Study 3). Finally, we investigate how the
volume of gists is associated with national health trends like vaccine uptake
and hospitalizations (Study 4). Our work is the first to empirically link
social media linguistic patterns to real-world public health trends,
highlighting the potential of prompt-based LLMs in identifying critical online
discussion patterns that can form the basis of public health communication
strategies.
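The abstract describes RBIC only at a high level; the sketch below illustrates how such a multi-step, role-based prompt chain could look. Everything in it is an assumption for illustration only: the role text, the prompt wording, and the `query_llm` helper are hypothetical stand-ins, not the authors' actual prompts or implementation.

```python
from typing import Callable

# Hypothetical LLM hook: swap in any chat-completion API call here.
def query_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real LLM API call")

# Illustrative role text (step 1 of the coaching chain), not the authors' prompt.
ROLE = (
    "You are a public-health communication analyst who identifies the gist "
    "(the bottom-line causal meaning) of social media posts."
)

def extract_gist(post: str, llm: Callable[[str], str] = query_llm) -> dict:
    """Role-based, incremental prompting: each step conditions on the previous answer."""
    # Step 2: screen the post for relevance (does it oppose a health measure?).
    relevance = llm(
        f"{ROLE}\n\nPost: {post}\n"
        "Does this post express opposition to a COVID-19 health measure? Answer yes or no."
    )
    if relevance.strip().lower().startswith("no"):
        return {"opposes_measure": False, "gist": None}

    # Step 3: extract a causal gist, conditioned on the earlier judgment.
    gist = llm(
        f"{ROLE}\n\nPost: {post}\nEarlier judgment: {relevance}\n"
        "State the post's gist as one cause-and-effect sentence."
    )
    return {"opposes_measure": True, "gist": gist.strip()}
```

Study 4 would then amount to counting extracted gists per time window and relating those counts to national series such as vaccine uptake or hospitalizations (e.g., via correlation or a lagged regression); the paper's exact statistical procedure is not reproduced here.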
Related papers
- Improving and Assessing the Fidelity of Large Language Models Alignment to Online Communities [5.392300313326522]
Large language models (LLMs) have shown promise in representing individuals and communities.
This paper presents a framework for aligning LLMs with online communities via instruction-tuning.
We demonstrate the utility of our approach by applying it to online communities centered on dieting and body image.
arXiv Detail & Related papers (2024-08-18T05:41:36Z)
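As a rough illustration of the instruction-tuning alignment summarized above, community posts could be converted into instruction/response records and written out for a standard supervised fine-tuning pipeline. The record fields, prompt template, file name, and example exchange below are assumptions, not the paper's actual data format.

```python
import json

# Hypothetical community exchange; a real pipeline would pull posts and replies
# from the target community (the paper studies dieting and body-image communities).
community_posts = [
    {"community": "r/loseit", "prompt": "How do you handle late-night cravings?",
     "reply": "I keep cut vegetables ready so I snack on those instead."},
]

def to_instruction_record(post: dict) -> dict:
    """Format one community exchange as an instruction-tuning record."""
    return {
        "instruction": f"Respond as a typical member of {post['community']}.",
        "input": post["prompt"],
        "output": post["reply"],
    }

# JSONL output that a standard supervised fine-tuning script could consume.
with open("community_sft.jsonl", "w", encoding="utf-8") as f:
    for post in community_posts:
        f.write(json.dumps(to_instruction_record(post)) + "\n")
```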
- Graph-Augmented LLMs for Personalized Health Insights: A Case Study in Sleep Analysis [2.303486126296845]
Large Language Models (LLMs) have shown promise in delivering interactive health advice.
Traditional methods like Retrieval-Augmented Generation (RAG) and fine-tuning often fail to fully utilize the complex, multi-dimensional, and temporally relevant data.
This paper introduces a graph-augmented LLM framework designed to significantly enhance the personalization and clarity of health insights.
arXiv Detail & Related papers (2024-06-24T01:22:54Z)
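One way to picture the graph-augmentation idea above is to serialize a small personal-health graph into the prompt context; the node/edge schema and wording below are illustrative assumptions rather than the cited framework's actual design.

```python
import networkx as nx

# Toy personal-health graph; this schema is an assumption, not the paper's.
graph = nx.DiGraph()
graph.add_edge("late caffeine", "long sleep latency", nights=5)
graph.add_edge("long sleep latency", "reduced REM sleep", nights=4)

def graph_to_context(g: nx.DiGraph) -> str:
    """Serialize graph edges into plain-text facts an LLM can condition on."""
    return "\n".join(
        f"- '{u}' is linked to '{v}' on {d['nights']} recent nights"
        for u, v, d in g.edges(data=True)
    )

question = "Why has my sleep felt unrefreshing lately?"
prompt = (
    "Answer the user's question using only the facts below.\n"
    f"{graph_to_context(graph)}\n\nQuestion: {question}"
)
print(prompt)  # this string would be sent to an LLM chat endpoint
```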
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLMs) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- Zero-shot Causal Graph Extrapolation from Text via LLMs [50.596179963913045]
We evaluate the ability of large language models (LLMs) to infer causal relations from natural language.
LLMs show competitive performance in a benchmark of pairwise relations without needing (explicit) training samples.
We extend our approach to extrapolating causal graphs through iterated pairwise queries.
arXiv Detail & Related papers (2023-12-22T13:14:38Z)
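The iterated pairwise-query idea can be sketched as follows; the prompt wording and the `ask_llm` helper are hypothetical, and the paper's exact querying and aggregation protocol may differ.

```python
from itertools import combinations
from typing import Callable

# Hypothetical LLM hook: swap in any chat-completion API call here.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("replace with a real LLM API call")

def extrapolate_causal_graph(text: str, variables: list[str],
                             llm: Callable[[str], str] = ask_llm) -> list[tuple[str, str]]:
    """Build a causal edge list by asking one pairwise question per variable pair."""
    edges = []
    for a, b in combinations(variables, 2):
        answer = llm(
            f"Text: {text}\n"
            f"Based only on the text, does '{a}' cause '{b}', does '{b}' cause '{a}', "
            "or is there no direct causal relation? Answer 'a->b', 'b->a', or 'none'."
        ).strip().lower()
        if answer.startswith("a->b"):
            edges.append((a, b))
        elif answer.startswith("b->a"):
            edges.append((b, a))
    return edges
```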
- KNSE: A Knowledge-aware Natural Language Inference Framework for Dialogue Symptom Status Recognition [69.78432481474572]
We propose a novel framework called KNSE for symptom status recognition (SSR).
For each symptom mentioned in a dialogue window, we first generate knowledge about the symptom and a hypothesis about its status, forming a (premise, knowledge, hypothesis) triplet.
A BERT model then encodes the triplet, which is further processed by utterance aggregation, self-attention, cross-attention, and GRU modules to predict the symptom status.
arXiv Detail & Related papers (2023-05-26T11:23:26Z)
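A condensed sketch of the KNSE-style triplet pipeline described above, using a Hugging Face BERT encoder and a GRU head. Packing the three texts into a single [SEP]-joined sequence and the layer sizes are simplifying assumptions; the paper's utterance-aggregation and cross-attention modules are not reproduced here.

```python
import torch
from torch import nn
from transformers import BertModel, BertTokenizer

class TripletStatusClassifier(nn.Module):
    """Encode a (premise, knowledge, hypothesis) triplet with BERT, pool token
    states with a GRU, and predict a symptom status (e.g., present/absent/unknown)."""

    def __init__(self, n_statuses: int = 3):
        super().__init__()
        self.tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.gru = nn.GRU(input_size=768, hidden_size=256, batch_first=True)
        self.head = nn.Linear(256, n_statuses)

    def forward(self, premise: str, knowledge: str, hypothesis: str) -> torch.Tensor:
        # Simplification: the triplet is packed into one [SEP]-joined sequence.
        enc = self.tokenizer(premise, f"{knowledge} [SEP] {hypothesis}",
                             return_tensors="pt", truncation=True)
        hidden = self.bert(**enc).last_hidden_state   # (1, seq_len, 768)
        _, final_state = self.gru(hidden)             # final_state: (1, 1, 256)
        return self.head(final_state.squeeze(0))      # logits over statuses
```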
- Cross-Modal Causal Intervention for Medical Report Generation [109.83549148448469]
Medical report generation (MRG) is essential for computer-aided diagnosis and medication guidance.
Due to the spurious correlations within image-text data induced by visual and linguistic biases, it is challenging to generate accurate reports reliably describing lesion areas.
We propose a novel Visual-Linguistic Causal Intervention (VLCI) framework for MRG, which consists of a visual deconfounding module (VDM) and a linguistic deconfounding module (LDM).
arXiv Detail & Related papers (2023-03-16T07:23:55Z)
- NLP as a Lens for Causal Analysis and Perception Mining to Infer Mental Health on Social Media [10.342474142256842]
We argue that more consequential and explainable research is required for optimal impact on clinical psychology practice and personalized mental healthcare.
Within the scope of Natural Language Processing (NLP), we explore critical areas of inquiry associated with Causal analysis and Perception mining.
We advocate for a more explainable approach toward modeling computational psychology problems through the lens of language.
arXiv Detail & Related papers (2023-01-26T09:26:01Z)
- Adversarial Learning-based Stance Classifier for COVID-19-related Health Policies [14.558584240713154]
We propose an adversarial learning-based stance classifier to automatically identify the public's attitudes toward COVID-19-related health policies.
To deepen the model's understanding, we incorporate policy descriptions into the model as external knowledge.
We evaluate the performance of a broad range of baselines on the stance detection task for COVID-19-related health policies.
arXiv Detail & Related papers (2022-09-10T10:27:21Z)
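A skeletal stance model that injects the policy description as a second input segment. The gradient-reversal branch shown here is one common way to add an adversarial objective and is an assumption about the cited classifier, not its published architecture.

```python
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class PolicyStanceClassifier(nn.Module):
    def __init__(self, n_stances: int = 3, n_policies: int = 5):
        super().__init__()
        self.tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
        self.encoder = AutoModel.from_pretrained("bert-base-uncased")
        self.stance_head = nn.Linear(768, n_stances)    # e.g., favor / against / neutral
        self.policy_head = nn.Linear(768, n_policies)   # adversarial branch

    def forward(self, tweet: str, policy_description: str, lam: float = 0.1):
        # The policy description enters as a second segment (external knowledge).
        enc = self.tokenizer(tweet, policy_description,
                             return_tensors="pt", truncation=True)
        pooled = self.encoder(**enc).last_hidden_state[:, 0]   # [CLS] vector
        stance_logits = self.stance_head(pooled)
        # Gradient reversal pushes the encoder toward policy-invariant features.
        policy_logits = self.policy_head(GradReverse.apply(pooled, lam))
        return stance_logits, policy_logits
```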
- MET: Multimodal Perception of Engagement for Telehealth [52.54282887530756]
We present MET, a learning-based algorithm for perceiving a human's level of engagement from videos.
We release a new dataset, MEDICA, for mental health patient engagement detection.
arXiv Detail & Related papers (2020-11-17T15:18:38Z)
- Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both textual content and contextual information to assess the severity of a user's health state.
The diverse NLU views demonstrate its effectiveness on both tasks, as well as on individual diseases, in assessing a user's health.
arXiv Detail & Related papers (2020-09-21T03:45:14Z)
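A minimal two-view sketch of the severity-assessment idea: one view encodes the post text, another encodes simple contextual features, and the views are fused by concatenation. The features, labels, and fusion choice are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy examples; real work would use labeled health-related posts and richer context.
posts = ["chest pain got worse overnight", "mild headache after a long screen session"]
context = np.array([[3, 1],    # hypothetical contextual view: recent post count,
                    [1, 0]])   # prior mention of an emergency visit
severity = [1, 0]              # 1 = severe health state, 0 = not severe

# View 1: textual content. View 2: contextual information. Fuse by concatenation.
text_view = TfidfVectorizer().fit_transform(posts).toarray()
features = np.hstack([text_view, context])

clf = LogisticRegression().fit(features, severity)
print(clf.predict(features))
```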
- Characterizing Sociolinguistic Variation in the Competing Vaccination Communities [9.72602429875255]
"Framing" and "personalization" of the message are among the key features for devising a persuasive messaging strategy.
In the context of health-related misinformation, vaccination remains the most prevalent topic of discord.
We conduct a sociolinguistic analysis of the two competing vaccination communities on Twitter.
arXiv Detail & Related papers (2020-06-08T03:05:28Z)