Zero-Shot Stance Detection: A Dataset and Model using Generalized Topic
Representations
- URL: http://arxiv.org/abs/2010.03640v1
- Date: Wed, 7 Oct 2020 20:27:12 GMT
- Title: Zero-Shot Stance Detection: A Dataset and Model using Generalized Topic
Representations
- Authors: Emily Allaway and Kathleen McKeown
- Abstract summary: We present a new dataset for zero-shot stance detection that captures a wider range of topics and lexical variation than in previous datasets.
We also propose a new model for stance detection that implicitly captures relationships between topics using generalized topic representations.
- Score: 13.153001795077227
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stance detection is an important component of understanding hidden influences
in everyday life. Since there are thousands of potential topics to take a
stance on, most with little to no training data, we focus on zero-shot stance
detection: classifying stance from no training examples. In this paper, we
present a new dataset for zero-shot stance detection that captures a wider
range of topics and lexical variation than in previous datasets. Additionally,
we propose a new model for stance detection that implicitly captures
relationships between topics using generalized topic representations and show
that this model improves performance on a number of challenging linguistic
phenomena.
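No reference implementation is included in this listing. As a rough illustration of the zero-shot stance detection framing only (classifying stance toward a topic with no topic-specific training examples), the sketch below uses an off-the-shelf NLI-based zero-shot classifier from the `transformers` library. The model name, hypothesis templates, and `predict_stance` helper are illustrative assumptions and are not the paper's generalized-topic-representation model.

```python
# Minimal sketch of the zero-shot stance detection *task* framing (not the
# paper's model): an off-the-shelf NLI model scores a text against stance
# hypotheses for a topic never seen in training.
# Assumes the `transformers` library; model name and label templates are
# illustrative choices only.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

def predict_stance(text: str, topic: str) -> str:
    """Return 'favor', 'against', or 'neutral' toward an unseen topic."""
    labels = [f"in favor of {topic}", f"against {topic}", f"neutral about {topic}"]
    result = classifier(text, candidate_labels=labels)
    top = result["labels"][0]  # labels come back sorted by score
    if top.startswith("in favor"):
        return "favor"
    if top.startswith("against"):
        return "against"
    return "neutral"

print(predict_stance("Everyone deserves affordable healthcare.", "universal healthcare"))
```

A proper evaluation on the paper's dataset would compare a baseline of this kind against the proposed model on topics held out from training.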
Related papers
- Zero-shot Degree of Ill-posedness Estimation for Active Small Object Change Detection [8.977792536037956]
In everyday indoor navigation, robots often need to detect non-distinctive small-change objects.
Existing techniques rely on high-quality class-specific object priors to regularize a change detector model.
In this study, we explore the concept of degree of ill-posedness (DoI) to improve both passive and active vision.
arXiv Detail & Related papers (2024-05-10T01:56:39Z) - TATA: Stance Detection via Topic-Agnostic and Topic-Aware Embeddings [6.0971418973431]
We train topic-agnostic (TAG) and topic-aware (TAW) embeddings for use in downstream stance detection.
We achieve state-of-the-art performance across several public stance detection datasets.
arXiv Detail & Related papers (2023-10-22T23:23:44Z) - A Comparative Review of Recent Few-Shot Object Detection Algorithms [0.0]
Few-shot object detection, which adapts to novel classes from only a few labeled examples, is an important and long-standing problem.
Recent studies have explored how to use implicit cues from extra datasets, without target-domain supervision, to help few-shot detectors build a more robust notion of the task.
arXiv Detail & Related papers (2021-10-30T07:57:11Z) - Perceptual Score: What Data Modalities Does Your Model Perceive? [73.75255606437808]
We introduce the perceptual score, a metric that assesses the degree to which a model relies on the different subsets of the input features.
We find that recent, more accurate multi-modal models for visual question-answering tend to perceive the visual data less than their predecessors.
Using the perceptual score also helps to analyze model biases by decomposing the score into data-subset contributions (a minimal permutation-based sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-10-27T12:19:56Z) - Adversarial Learning for Zero-Shot Stance Detection on Social Media [1.7702142798241087]
We propose a new model for zero-shot stance detection on Twitter that uses adversarial learning to generalize across topics.
Our model achieves state-of-the-art performance on a number of unseen test topics with minimal computational costs.
arXiv Detail & Related papers (2021-05-14T01:08:48Z) - Semantic Relation Reasoning for Shot-Stable Few-Shot Object Detection [33.25064323136447]
Few-shot object detection is an important and long-standing problem due to the inherent long-tail distribution of real-world data.
This work introduces explicit semantic relation reasoning into the learning of novel object detection (the resulting detector is called SRR-FSD).
Experiments show that SRR-FSD achieves competitive results at higher shot counts and, more importantly, significantly better performance when both explicit and implicit shots are limited.
arXiv Detail & Related papers (2021-03-02T18:04:38Z) - Closing the Generalization Gap in One-Shot Object Detection [92.82028853413516]
We show that the key to strong few-shot detection models may not lie in sophisticated metric learning approaches, but instead in scaling the number of categories.
Future data annotation efforts should therefore focus on wider datasets and annotate a larger number of categories.
arXiv Detail & Related papers (2020-11-09T09:31:17Z) - Synthesizing the Unseen for Zero-shot Object Detection [72.38031440014463]
We propose to synthesize visual features for unseen classes, so that the model learns both seen and unseen objects in the visual domain.
We use a novel generative model that leverages class semantics not only to generate the features but also to discriminatively separate them.
arXiv Detail & Related papers (2020-10-19T12:36:11Z) - Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets (see the gradient-supervision sketch after this list).
arXiv Detail & Related papers (2020-04-20T02:47:49Z) - Any-Shot Object Detection [81.88153407655334]
'Any-shot detection' is the setting in which entirely unseen and few-shot categories co-occur during inference.
We propose a unified any-shot detection model that can concurrently learn to detect both zero-shot and few-shot object classes.
Our framework can also be used solely for zero-shot or few-shot detection tasks.
arXiv Detail & Related papers (2020-03-16T03:43:15Z) - Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)
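For the "Perceptual Score" entry above, the following is a minimal, hedged sketch of a permutation-style reliance measure in the same spirit: accuracy on intact inputs is compared with accuracy after one modality is shuffled across examples. The `model.predict` interface, the modality names, and the normalization are assumptions for illustration, not the paper's exact definition.

```python
# Hedged sketch of a permutation-based "modality reliance" measure in the
# spirit of the perceptual score: compare accuracy on intact inputs with
# accuracy after randomly permuting one modality across the dataset.
import numpy as np

def modality_reliance(model, images, questions, labels, shuffle="image", seed=0):
    rng = np.random.default_rng(seed)

    def accuracy(imgs, qs):
        preds = model.predict(imgs, qs)            # hypothetical model interface
        return float(np.mean(np.asarray(preds) == np.asarray(labels)))

    acc_full = accuracy(images, questions)

    perm = rng.permutation(len(labels))            # break input-label alignment
    if shuffle == "image":
        acc_perm = accuracy([images[i] for i in perm], questions)
    else:
        acc_perm = accuracy(images, [questions[i] for i in perm])

    # A larger drop means the model relies more on the shuffled modality.
    return (acc_full - acc_perm) / max(acc_full, 1e-8)
```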
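For the counterfactual/gradient-supervision entry, below is a toy PyTorch sketch of one way to turn pairs of minimally-different examples into an auxiliary loss: align the input-gradient of the model's score with the direction separating an example from its counterfactual. This is an illustrative reading, not the paper's exact objective; `model`, the feature shapes, and the loss form are assumptions.

```python
# Toy sketch: counterfactual pairs as an auxiliary gradient signal.
# Encourage the input-gradient of the model's score to point along the
# vector separating an example from its minimally-different counterfactual.
import torch
import torch.nn.functional as F

def gradient_supervision_loss(model, x, x_cf):
    """x, x_cf: (batch, dim) feature tensors for counterfactual pairs."""
    x = x.clone().requires_grad_(True)
    score = model(x).sum()                           # scalar output for autograd
    grad = torch.autograd.grad(score, x, create_graph=True)[0]
    direction = x_cf - x                             # where the label flips
    return (1.0 - F.cosine_similarity(grad, direction, dim=-1)).mean()

# Hypothetical usage: total_loss = task_loss + 0.1 * gradient_supervision_loss(net, feats, feats_cf)
```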