Improved Target-specific Stance Detection on Social Media Platforms by
Delving into Conversation Threads
- URL: http://arxiv.org/abs/2211.03061v1
- Date: Sun, 6 Nov 2022 08:40:48 GMT
- Title: Improved Target-specific Stance Detection on Social Media Platforms by
Delving into Conversation Threads
- Authors: Yupeng Li, Haorui He, Shaonan Wang, Francis C.M. Lau, and Yunya Song
- Abstract summary: We propose a new task called conversational stance detection.
It infers the stance towards a given target (e.g., COVID-19 vaccination) when given a data instance and its corresponding conversation thread.
To infer the desired stances from both data instances and conversation threads, we propose a model called Branch-BERT that incorporates contextual information in conversation threads.
- Score: 12.007570049217398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Target-specific stance detection on social media, which aims to
classify a textual data instance such as a post or a comment into a stance
class for a target issue, has become an important emerging opinion-mining
paradigm. An
example application would be to overcome vaccine hesitancy in combating the
coronavirus pandemic. However, existing stance detection strategies rely
merely on individual instances, which cannot always capture the stance
expressed towards a given target. In response, we address a new task called
conversational stance detection, which infers the stance towards a given
target (e.g., COVID-19
vaccination) when given a data instance and its corresponding conversation
thread. To tackle the task, we first propose a benchmark conversational
stance detection (CSD) dataset with annotations of stances and the structures
of conversation threads among the instances based on six major social media
platforms in Hong Kong. To infer the desired stances from both data instances
and conversation threads, we propose a model called Branch-BERT that
incorporates contextual information in conversation threads. Extensive
experiments on our CSD dataset show that our proposed model outperforms all the
baseline models that do not make use of contextual information. Specifically,
it improves the F1 score by 10.3% compared with the state-of-the-art method in
the SemEval-2016 Task 6 competition. This shows the potential of incorporating
rich contextual information in detecting target-specific stances on social
media platforms and suggests a more practical way to construct future stance
detection tasks.
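The core idea, classifying a comment with the conversation branch that leads to it as context, can be illustrated with a minimal sketch. This is a hypothetical illustration of the task setup only, not the authors' Branch-BERT pipeline; the helpers `extract_branch` and `build_input` and the separator convention are invented for this example.

```python
# Sketch of preparing a conversational stance-detection input: the branch of
# ancestor posts provides context for classifying the leaf comment.
# Illustrative only; NOT the authors' Branch-BERT code.

def extract_branch(posts, leaf_id):
    """Walk parent links from the leaf comment up to the root, then reverse
    so the branch reads in chronological order."""
    branch = []
    node = leaf_id
    while node is not None:
        branch.append(posts[node]["text"])
        node = posts[node]["parent"]
    return list(reversed(branch))

def build_input(posts, leaf_id, target, sep=" [SEP] "):
    """Concatenate the target and the conversation branch into a single
    input string for a BERT-style sequence classifier."""
    branch = extract_branch(posts, leaf_id)
    return f"target: {target}{sep}" + sep.join(branch)

# A toy three-post thread stored as parent pointers.
thread = {
    "p1": {"parent": None, "text": "Vaccination centres open next week."},
    "c1": {"parent": "p1", "text": "Finally! Everyone should get the shot."},
    "c2": {"parent": "c1", "text": "Totally agree."},
}

print(build_input(thread, "c2", "COVID-19 vaccination"))
```

Without the branch, a classifier sees only "Totally agree." and cannot recover the stance towards the target; with it, the supportive context is explicit.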
Related papers
- Multimodal Multi-turn Conversation Stance Detection: A Challenge Dataset and Effective Model [9.413870182630362]
We introduce a new multimodal multi-turn conversational stance detection dataset (called MmMtCSD).
We propose a novel multimodal large language model stance detection framework (MLLM-SD) that learns joint stance representations from textual and visual modalities.
Experiments on MmMtCSD show state-of-the-art performance of our proposed MLLM-SD approach for multimodal stance detection.
arXiv Detail & Related papers (2024-09-01T03:16:30Z)
- Stance Reasoner: Zero-Shot Stance Detection on Social Media with Explicit Reasoning [10.822701164802307]
We present Stance Reasoner, an approach to zero-shot stance detection on social media.
We use a pre-trained language model as a source of world knowledge, with the chain-of-thought in-context learning approach to generate intermediate reasoning steps.
Stance Reasoner outperforms the current state-of-the-art models on 3 Twitter datasets.
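The chain-of-thought prompting idea behind this approach can be sketched as a prompt template: a worked demonstration with explicit reasoning, followed by the query left open for the model to complete. The wording, label set, and demonstration below are hypothetical, not taken from the Stance Reasoner paper.

```python
# Hypothetical chain-of-thought prompt for zero-shot stance detection.
# The demonstration and labels are illustrative, not from Stance Reasoner.

def build_prompt(tweet, target, labels=("favor", "against", "none")):
    header = f"Classify the stance as one of: {', '.join(labels)}.\n\n"
    # One worked demonstration with an explicit reasoning step.
    demo = (
        "Tweet: Masks saved countless lives during the pandemic.\n"
        "Target: mask mandates\n"
        "Reasoning: The tweet credits masks with saving lives, which supports mandates.\n"
        "Stance: favor\n\n"
    )
    # The query ends at "Reasoning:" so the model generates the intermediate
    # reasoning steps before emitting a stance label.
    query = f"Tweet: {tweet}\nTarget: {target}\nReasoning:"
    return header + demo + query

print(build_prompt("Vaccines work.", "COVID-19 vaccination"))
```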
arXiv Detail & Related papers (2024-03-22T00:58:28Z)
- A Challenge Dataset and Effective Models for Conversational Stance Detection [26.208989232347058]
We introduce a new multi-turn conversation stance detection dataset (called MT-CSD).
We propose a global-local attention network (GLAN) to address both long- and short-range dependencies inherent in conversational data.
Our dataset serves as a valuable resource to catalyze advancements in cross-domain stance detection.
arXiv Detail & Related papers (2024-03-17T08:51:01Z)
- SocialPET: Socially Informed Pattern Exploiting Training for Few-Shot Stance Detection in Social Media [8.556183465416156]
Stance detection is the task of determining the viewpoint of a social media post towards a target as 'favor' or 'against'.
SocialPET is a socially informed approach to leveraging language models for the task.
We demonstrate the effectiveness of SocialPET on two stance datasets, Multi-target and P-Stance.
arXiv Detail & Related papers (2024-03-08T11:00:09Z)
- Multi-modal Stance Detection: New Datasets and Model [56.97470987479277]
We study multi-modal stance detection for tweets consisting of texts and images.
We propose a simple yet effective Targeted Multi-modal Prompt Tuning framework (TMPT).
TMPT achieves state-of-the-art performance in multi-modal stance detection.
arXiv Detail & Related papers (2024-02-22T05:24:19Z)
- Integrating Self-supervised Speech Model with Pseudo Word-level Targets from Visually-grounded Speech Model [57.78191634042409]
We propose Pseudo-Word HuBERT (PW-HuBERT), a framework that integrates pseudo word-level targets into the training process.
Our experimental results on four spoken language understanding (SLU) benchmarks suggest the superiority of our model in capturing semantic information.
arXiv Detail & Related papers (2024-02-08T16:55:21Z)
- Few-shot Learning for Cross-Target Stance Detection by Aggregating Multimodal Embeddings [16.39344929765961]
We introduce CT-TN, a novel model that aggregates multimodal embeddings from both textual and network features of the data.
We conduct experiments in a few-shot cross-target scenario on six different combinations of source-destination target pairs.
Experiments with different numbers of shots show that CT-TN can outperform other models after seeing 300 instances of the destination target.
arXiv Detail & Related papers (2023-01-11T15:52:55Z)
- Contextual information integration for stance detection via cross-attention [59.662413798388485]
Stance detection deals with identifying an author's stance towards a target.
Most existing stance detection models are limited because they do not consider relevant contextual information.
We propose an approach to integrate contextual information as text.
arXiv Detail & Related papers (2022-11-03T15:04:29Z)
- Few-Shot Stance Detection via Target-Aware Prompt Distillation [48.40269795901453]
This paper is inspired by the potential capability of pre-trained language models (PLMs) serving as knowledge bases and few-shot learners.
PLMs can provide essential contextual information for the targets and enable few-shot learning via prompts.
Considering the crucial role of the target in stance detection task, we design target-aware prompts and propose a novel verbalizer.
arXiv Detail & Related papers (2022-06-27T12:04:14Z)
- Unpaired Referring Expression Grounding via Bidirectional Cross-Modal Matching [53.27673119360868]
Referring expression grounding is an important and challenging task in computer vision.
We propose a novel bidirectional cross-modal matching (BiCM) framework to address these challenges.
Our framework outperforms previous works by 6.55% and 9.94% on two popular grounding datasets.
arXiv Detail & Related papers (2022-01-18T01:13:19Z)
- Exploiting Unsupervised Data for Emotion Recognition in Conversations [76.01690906995286]
Emotion Recognition in Conversations (ERC) aims to predict the emotional state of speakers in conversations.
The available supervised data for the ERC task is limited.
We propose a novel approach to leverage unsupervised conversation data.
arXiv Detail & Related papers (2020-10-02T13:28:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided (including all listed content) and is not responsible for any consequences arising from its use.