Adversarial Learning for Zero-Shot Stance Detection on Social Media
- URL: http://arxiv.org/abs/2105.06603v1
- Date: Fri, 14 May 2021 01:08:48 GMT
- Title: Adversarial Learning for Zero-Shot Stance Detection on Social Media
- Authors: Emily Allaway, Malavika Srikanth, and Kathleen McKeown
- Abstract summary: We propose a new model for zero-shot stance detection on Twitter that uses adversarial learning to generalize across topics.
Our model achieves state-of-the-art performance on a number of unseen test topics with minimal computational costs.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stance detection on social media can help to identify and understand slanted
news or commentary in everyday life. In this work, we propose a new model for
zero-shot stance detection on Twitter that uses adversarial learning to
generalize across topics. Our model achieves state-of-the-art performance on a
number of unseen test topics with minimal computational costs. In addition, we
extend zero-shot stance detection to new topics, highlighting future directions
for zero-shot transfer.
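The abstract does not give implementation details, but adversarial topic generalization is commonly realized as a min-max objective in which a topic discriminator's loss is subtracted from the task loss, pushing the features to predict stance while revealing nothing about the topic. A minimal NumPy sketch of such an objective, as an illustrative assumption rather than the authors' method (the linear heads and the name `adversarial_loss` are hypothetical):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Mean negative log-likelihood of the true labels.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12).mean()

def adversarial_loss(features, W_stance, W_topic, stance_y, topic_y, lam=0.1):
    """Stance loss minus lam * topic-discriminator loss.

    Minimizing this over the feature extractor encourages features
    that predict stance but NOT the topic, i.e. topic-invariant
    representations (the usual effect of a gradient-reversal layer).
    """
    stance_loss = cross_entropy(softmax(features @ W_stance), stance_y)
    topic_loss = cross_entropy(softmax(features @ W_topic), topic_y)
    return stance_loss - lam * topic_loss
```

In full training this term would alternate with (or be opposed by) updates that minimize the topic loss for the discriminator itself; the sketch shows only the feature-extractor side of the game.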
Related papers
- A Comprehensive Review of Few-shot Action Recognition [64.47305887411275]
Few-shot action recognition aims to address the high cost and impracticality of manually labeling complex and variable video data.
It requires accurately classifying human actions in videos using only a few labeled examples per class.
arXiv Detail & Related papers (2024-07-20T03:53:32Z)
- Stance Reasoner: Zero-Shot Stance Detection on Social Media with Explicit Reasoning [10.822701164802307]
We present Stance Reasoner, an approach to zero-shot stance detection on social media.
We use a pre-trained language model as a source of world knowledge, with the chain-of-thought in-context learning approach to generate intermediate reasoning steps.
Stance Reasoner outperforms the current state-of-the-art models on 3 Twitter datasets.
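A chain-of-thought zero-shot prompt of the kind described can be assembled as below; the template wording is a hypothetical illustration, not the paper's actual prompt:

```python
def build_stance_prompt(tweet: str, target: str) -> str:
    """Build a chain-of-thought stance-detection prompt.

    The instruction text and label set (FAVOR / AGAINST / NONE) are
    illustrative assumptions about the in-context setup, not the
    paper's exact template.
    """
    return (
        "Determine the stance of the tweet toward the target.\n"
        "Think step by step, then answer FAVOR, AGAINST, or NONE.\n\n"
        f"Tweet: {tweet}\n"
        f"Target: {target}\n"
        "Reasoning:"
    )
```

The prompt ends at "Reasoning:" so the language model generates the intermediate reasoning steps itself before committing to a label.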
arXiv Detail & Related papers (2024-03-22T00:58:28Z)
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look into existing self-supervised methods of speech from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
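As a toy illustration of the quantity being estimated (the paper probes continuous speech representations with learned probes; this plug-in estimator assumes discrete variables and is only a stand-in):

```python
import numpy as np

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in nats from paired discrete samples.

    Builds the empirical joint distribution and applies the definition
    I(X;Y) = sum p(x,y) log(p(x,y) / (p(x) p(y))). A toy stand-in for
    the probe-based estimators used on continuous representations.
    """
    x, y = np.asarray(x), np.asarray(y)
    _, xi = np.unique(x, return_inverse=True)
    _, yi = np.unique(y, return_inverse=True)
    joint = np.zeros((xi.max() + 1, yi.max() + 1))
    np.add.at(joint, (xi, yi), 1.0)       # count co-occurrences
    joint /= joint.sum()                  # normalize to probabilities
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    mask = joint > 0                      # 0 * log 0 contributes nothing
    return float((joint[mask] * np.log(joint[mask] / (px @ py)[mask])).sum())
```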
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- Improved Target-specific Stance Detection on Social Media Platforms by Delving into Conversation Threads [12.007570049217398]
We propose a new task called conversational stance detection.
It infers the stance towards a given target (e.g., COVID-19 vaccination) when given a data instance and its corresponding conversation thread.
To infer the desired stances from both data instances and conversation threads, we propose a model called Branch-BERT that incorporates contextual information in conversation threads.
arXiv Detail & Related papers (2022-11-06T08:40:48Z)
- Exploiting Sentiment and Common Sense for Zero-shot Stance Detection [20.620244248582086]
We propose to boost the transferability of the stance detection model by using sentiment and commonsense knowledge.
Our model includes a graph autoencoder module to obtain commonsense knowledge and a stance detection module with sentiment and commonsense.
arXiv Detail & Related papers (2022-08-18T12:27:24Z)
- Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning [60.64535309016623]
We propose the Incremental-DETR that does incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector.
To alleviate severe over-fitting with few novel class data, we first fine-tune the class-specific components of DETR with self-supervision.
We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without catastrophic forgetting.
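A generic knowledge-distillation term of the kind mentioned, a KL divergence between temperature-softened teacher and student logits, can be sketched as follows (the temperature scaling and averaging are assumptions in the standard Hinton-style formulation, not the paper's exact objective):

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened class logits.

    Keeping the student close to the frozen teacher on old classes is
    the usual mechanism for limiting catastrophic forgetting; the T*T
    factor restores gradient magnitude after temperature scaling.
    """
    def soft(z):
        e = np.exp(z / T - (z / T).max(axis=-1, keepdims=True))
        return e / e.sum(axis=-1, keepdims=True)

    p_t, p_s = soft(np.asarray(teacher_logits)), soft(np.asarray(student_logits))
    return float((p_t * np.log(p_t / p_s)).sum(axis=-1).mean() * T * T)
```

The loss is zero when student and teacher agree exactly and grows as the student's class distribution drifts away from the teacher's.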
arXiv Detail & Related papers (2022-05-09T05:08:08Z)
- Entity-Conditioned Question Generation for Robust Attention Distribution in Neural Information Retrieval [51.53892300802014]
We show that supervised neural information retrieval models are prone to learning sparse attention patterns over passage tokens.
Using a novel targeted synthetic data generation method, we teach neural IR to attend more uniformly and robustly to all entities in a given passage.
arXiv Detail & Related papers (2022-04-24T22:36:48Z)
- Robust Region Feature Synthesizer for Zero-Shot Object Detection [87.79902339984142]
We build a novel zero-shot object detection framework that contains an Intra-class Semantic Diverging component and an Inter-class Structure Preserving component.
It is the first study to carry out zero-shot object detection in remote sensing imagery.
arXiv Detail & Related papers (2022-01-01T03:09:15Z)
- Zero-Shot Stance Detection: A Dataset and Model using Generalized Topic Representations [13.153001795077227]
We present a new dataset for zero-shot stance detection that captures a wider range of topics and lexical variation than in previous datasets.
We also propose a new model for stance detection that implicitly captures relationships between topics using generalized topic representations.
arXiv Detail & Related papers (2020-10-07T20:27:12Z)
- Any-Shot Object Detection [81.88153407655334]
'Any-shot detection' is the setting in which entirely unseen and few-shot categories can co-occur during inference.
We propose a unified any-shot detection model that can concurrently learn to detect both zero-shot and few-shot object classes.
Our framework can also be used solely for Zero-shot detection and Few-shot detection tasks.
arXiv Detail & Related papers (2020-03-16T03:43:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.