Disentangled Learning of Stance and Aspect Topics for Vaccine Attitude
Detection in Social Media
- URL: http://arxiv.org/abs/2205.03296v1
- Date: Fri, 6 May 2022 15:24:33 GMT
- Title: Disentangled Learning of Stance and Aspect Topics for Vaccine Attitude
Detection in Social Media
- Authors: Lixing Zhu and Zheng Fang and Gabriele Pergola and Rob Procter and
Yulan He
- Abstract summary: We propose a novel semi-supervised approach for vaccine attitude detection, called VADet.
VADet is able to learn disentangled stance and aspect topics, and outperforms existing aspect-based sentiment analysis models on both stance detection and tweet clustering.
- Score: 40.61499595293957
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Building models to detect vaccine attitudes on social media is challenging
because of the composite, often intricate aspects involved, and the limited
availability of annotated data. Existing approaches have relied heavily on
supervised training that requires abundant annotations and pre-defined aspect
categories. Instead, with the aim of leveraging the large amount of unannotated
data now available on vaccination, we propose a novel semi-supervised approach
for vaccine attitude detection, called VADet. A variational autoencoding
architecture based on language models is employed to learn from unlabelled data
the topical information of the domain. Then, the model is fine-tuned with a few
manually annotated examples of user attitudes. We validate the effectiveness of
VADet on our annotated data and also on an existing vaccination corpus
annotated with opinions on vaccines. Our results show that VADet is able to
learn disentangled stance and aspect topics, and outperforms existing
aspect-based sentiment analysis models on both stance detection and tweet
clustering.
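As a rough illustration of the approach described in the abstract (not the authors' implementation), the sketch below shows a variational autoencoder whose latent space is split into separate stance and aspect components and which exposes a small stance classifier for the supervised fine-tuning step. In VADet the encoder is a pretrained language model; here a small embedding-bag encoder stands in, and the layer sizes, the three stance labels, and the toy vocabulary size are placeholder assumptions chosen only to keep the example self-contained and runnable.

```python
# Minimal sketch of a VAE with disentangled stance and aspect latents.
# Not the VADet code: the LM encoder is replaced by an EmbeddingBag and
# all dimensions/labels below are illustrative assumptions.
import torch
import torch.nn as nn

class DisentangledVAE(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, stance_dim=8, aspect_dim=32):
        super().__init__()
        self.embed = nn.EmbeddingBag(vocab_size, emb_dim)    # stand-in for a pretrained LM encoder
        self.to_stance = nn.Linear(emb_dim, 2 * stance_dim)  # predicts mean and log-variance
        self.to_aspect = nn.Linear(emb_dim, 2 * aspect_dim)
        self.decoder = nn.Linear(stance_dim + aspect_dim, vocab_size)
        self.stance_clf = nn.Linear(stance_dim, 3)           # e.g. pro / anti / neutral (assumed labels)

    @staticmethod
    def reparameterise(mu, logvar):
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, token_ids):
        h = self.embed(token_ids)                            # (batch, emb_dim)
        s_mu, s_logvar = self.to_stance(h).chunk(2, dim=-1)
        a_mu, a_logvar = self.to_aspect(h).chunk(2, dim=-1)
        z_stance = self.reparameterise(s_mu, s_logvar)
        z_aspect = self.reparameterise(a_mu, a_logvar)
        recon_logits = self.decoder(torch.cat([z_stance, z_aspect], dim=-1))
        stance_logits = self.stance_clf(z_stance)            # used only when stance labels exist
        kl = (-0.5 * torch.sum(1 + s_logvar - s_mu.pow(2) - s_logvar.exp())
              - 0.5 * torch.sum(1 + a_logvar - a_mu.pow(2) - a_logvar.exp()))
        return recon_logits, stance_logits, kl

# Unsupervised pre-training would minimise reconstruction + KL on unlabelled tweets;
# fine-tuning would add a cross-entropy term on stance_logits for the few annotated examples.
model = DisentangledVAE(vocab_size=30522)
batch = torch.randint(0, 30522, (4, 20))                     # toy batch of token ids
recon, stance, kl = model(batch)
```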
Related papers
- Optimizing Social Media Annotation of HPV Vaccine Skepticism and Misinformation Using Large Language Models: An Experimental Evaluation of In-Context Learning and Fine-Tuning Stance Detection Across Multiple Models [10.2201516537852]
We experimentally determine optimal strategies for scaling up social media content annotation for stance detection on HPV vaccine-related tweets.
In general, in-context learning outperforms fine-tuning in stance detection for HPV vaccine social media content.
arXiv Detail & Related papers (2024-11-22T04:19:32Z) - Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z) - Contrastive Learning with Counterfactual Explanations for Radiology Report Generation [83.30609465252441]
We propose a CounterFactual Explanations-based framework (CoFE) for radiology report generation.
Counterfactual explanations serve as a potent tool for understanding how decisions made by algorithms can be changed by asking "what if" scenarios.
Experiments on two benchmarks demonstrate that leveraging the counterfactual explanations enables CoFE to generate semantically coherent and factually complete reports.
arXiv Detail & Related papers (2024-07-19T17:24:25Z) - Hierarchical Multi-Label Classification of Online Vaccine Concerns [8.271202196208]
Vaccine concerns are an ever-evolving target, and can shift quickly as seen during the COVID-19 pandemic.
We explore the task of detecting vaccine concerns in online discourse using large language models (LLMs) in a zero-shot setting without the need for expensive training datasets.
arXiv Detail & Related papers (2024-02-01T20:56:07Z) - Dense Feature Memory Augmented Transformers for COVID-19 Vaccination
Search Classification [60.49594822215981]
This paper presents a classification model for detecting COVID-19 vaccination-related search queries.
We propose a novel approach of considering dense features as memory tokens that the model can attend to.
We show that this new modeling approach enables a significant improvement to the Vaccine Search Insights (VSI) task.
arXiv Detail & Related papers (2022-12-16T13:57:41Z) - "Double vaccinated, 5G boosted!": Learning Attitudes towards COVID-19
Vaccination from Social Media [4.178929174617172]
We leverage the textual posts on social media to extract and track users' vaccination stances in near real time.
We integrate the recent posts of a user's social network neighbours to help detect the user's genuine attitude.
Based on our annotated dataset from Twitter, the models instantiated from our framework can increase the performance of attitude extraction by up to 23%.
arXiv Detail & Related papers (2022-06-27T17:04:56Z) - Insta-VAX: A Multimodal Benchmark for Anti-Vaccine and Misinformation
Posts Detection on Social Media [32.252687203366605]
Anti-vaccine posts on social media have been shown to create confusion and reduce the public's confidence in vaccines.
Insta-VAX is a new multi-modal dataset consisting of a sample of 64,957 Instagram posts related to human vaccines.
arXiv Detail & Related papers (2021-12-15T20:34:57Z) - Classifying vaccine sentiment tweets by modelling domain-specific
representation and commonsense knowledge into context-aware attentive GRU [9.8215089151757]
Vaccine hesitancy and refusal can create clusters of low vaccine coverage and reduce the effectiveness of vaccination programs.
Social media provides an opportunity to estimate emerging risks to vaccine acceptance by including geographical location and detailing vaccine-related concerns.
Methods for classifying social media posts, such as vaccine-related tweets, use language models (LMs) trained on general domain text.
We present a novel end-to-end framework consisting of interconnected components that use a domain-specific LM trained on vaccine-related tweets and model commonsense knowledge in a bidirectional gated recurrent network (CK-BiGRU) with context-aware attention.
arXiv Detail & Related papers (2021-06-17T15:16:08Z) - Proactive Pseudo-Intervention: Causally Informed Contrastive Learning
For Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z) - Text Mining to Identify and Extract Novel Disease Treatments From
Unstructured Datasets [56.38623317907416]
We use Google Cloud to transcribe podcast episodes of an NPR radio show.
We then build a pipeline for systematically pre-processing the text.
Our model successfully identified that Omeprazole can help treat heartburn.
arXiv Detail & Related papers (2020-10-22T19:52:49Z)