Exploiting Sentiment and Common Sense for Zero-shot Stance Detection
- URL: http://arxiv.org/abs/2208.08797v1
- Date: Thu, 18 Aug 2022 12:27:24 GMT
- Title: Exploiting Sentiment and Common Sense for Zero-shot Stance Detection
- Authors: Yun Luo, Zihan Liu, Yuefeng Shi, Yue Zhang
- Abstract summary: We propose to boost the transferability of the stance detection model by using sentiment and commonsense knowledge.
Our model includes a graph autoencoder module to obtain commonsense knowledge and a stance detection module that incorporates sentiment and commonsense features.
- Score: 20.620244248582086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The stance detection task aims to classify the stance toward given documents
and topics. Since the topics can be implicit in documents and unseen in
training data for zero-shot settings, we propose to boost the transferability
of the stance detection model by using sentiment and commonsense knowledge,
which are seldom considered in previous studies. Our model includes a graph
autoencoder module to obtain commonsense knowledge and a stance detection
module that incorporates sentiment and commonsense features. Experimental
results show that our model outperforms state-of-the-art methods on the
zero-shot and few-shot benchmark dataset VAST, and ablation studies confirm
the contribution of each module. Analysis of the relations among sentiment,
common sense, and stance further demonstrates the effectiveness of both kinds
of knowledge.
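As a rough illustration of the architecture the abstract describes, the sketch below combines a graph-autoencoder-style commonsense encoder with a classifier that fuses text, sentiment, and commonsense features. It is a minimal sketch, not the paper's implementation: the graph construction, feature extractors, dimensions, and module names are all assumptions.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphAutoencoder(nn.Module):
    """Encodes a commonsense graph; trained to reconstruct its adjacency."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        # One-hop GCN-style propagation, then inner-product decoding.
        h = torch.relu(adj @ self.enc(x))         # node embeddings
        recon = torch.sigmoid(h @ h.t())          # reconstructed adjacency
        return h, recon

class StanceClassifier(nn.Module):
    """Fuses text, sentiment, and commonsense features for 3-way stance."""
    def __init__(self, text_dim, sent_dim, graph_dim, n_classes=3):
        super().__init__()
        self.out = nn.Linear(text_dim + sent_dim + graph_dim, n_classes)

    def forward(self, text_feat, sent_feat, graph_feat):
        fused = torch.cat([text_feat, sent_feat, graph_feat], dim=-1)
        return self.out(fused)

# Toy usage with random stand-ins for real node/text/sentiment features.
nodes = torch.randn(10, 32)
adj = (torch.rand(10, 10) > 0.7).float()
gae = GraphAutoencoder(32, 16)
emb, recon = gae(nodes, adj)
recon_loss = F.binary_cross_entropy(recon, adj)   # GAE training signal

clf = StanceClassifier(text_dim=64, sent_dim=8, graph_dim=16)
logits = clf(torch.randn(4, 64), torch.randn(4, 8), emb.mean(0).expand(4, 16))
print(logits.shape)   # torch.Size([4, 3])
```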
Related papers
- Human-Object Interaction Detection Collaborated with Large Relation-driven Diffusion Models [65.82564074712836]
We introduce DIFfusionHOI, a new HOI detector that leverages text-to-image diffusion models.
We first devise an inversion-based strategy to learn the expression of relation patterns between humans and objects in embedding space.
These learned relation embeddings then serve as textual prompts to steer diffusion models to generate images that depict specific interactions.
arXiv Detail & Related papers (2024-10-26T12:00:33Z)
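A generic illustration of the inversion-based strategy described in the entry above: keep a pretrained encoder frozen and optimize only a new relation embedding so that its encoding aligns with image features of the target interaction. The frozen encoder, feature dimensions, and loss below are stand-ins, not DIFfusionHOI's actual components.
```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Placeholder for a frozen pretrained text encoder (e.g., a CLIP-style model).
frozen_encoder = torch.nn.Linear(512, 512)
for p in frozen_encoder.parameters():
    p.requires_grad_(False)

# Learnable "relation token" embedding: the only trainable parameter.
relation_token = torch.nn.Parameter(torch.randn(512) * 0.02)
optimizer = torch.optim.Adam([relation_token], lr=1e-3)

# Stand-in for image features of examples depicting the target interaction.
target_image_feats = torch.randn(16, 512)

for step in range(100):
    optimizer.zero_grad()
    text_feat = frozen_encoder(relation_token)    # encode the learned token
    # Pull the token's encoding toward the interaction's image features.
    loss = 1 - F.cosine_similarity(
        text_feat.unsqueeze(0), target_image_feats).mean()
    loss.backward()
    optimizer.step()
# relation_token can now serve as a prompt conditioning image generation.
```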
- Scene-Graph ViT: End-to-End Open-Vocabulary Visual Relationship Detection [14.22646492640906]
We propose a simple and highly efficient decoder-free architecture for open-vocabulary visual relationship detection.
Our model consists of a Transformer-based image encoder that represents objects as tokens and models their relationships implicitly.
Our approach achieves state-of-the-art relationship detection performance on Visual Genome and on the large-vocabulary GQA benchmark at real-time inference speeds.
arXiv Detail & Related papers (2024-03-21T10:15:57Z)
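The decoder-free design in the entry above can be sketched as scoring every ordered pair of object tokens directly, so relationships emerge from pairwise token interactions rather than a dedicated relation decoder. The dimensions, bilinear scoring head, and predicate count below are illustrative assumptions, not the model's actual components.
```python
import torch
import torch.nn as nn

class PairwiseRelationHead(nn.Module):
    """Scores relationships directly from pairs of object tokens,
    with no separate relation decoder (dimensions are illustrative)."""
    def __init__(self, dim=256, n_predicates=50):
        super().__init__()
        self.subj = nn.Linear(dim, dim)
        self.obj = nn.Linear(dim, dim)
        self.predicate = nn.Bilinear(dim, dim, n_predicates)

    def forward(self, tokens):                      # tokens: (N, dim)
        n = tokens.size(0)
        s = self.subj(tokens)                       # subject projections
        o = self.obj(tokens)                        # object projections
        # Score every ordered (subject, object) pair of tokens.
        si = s.unsqueeze(1).expand(n, n, -1).reshape(n * n, -1)
        oj = o.unsqueeze(0).expand(n, n, -1).reshape(n * n, -1)
        return self.predicate(si, oj).view(n, n, -1)  # (N, N, predicates)

tokens = torch.randn(5, 256)        # object tokens from an image encoder
logits = PairwiseRelationHead()(tokens)
print(logits.shape)                 # torch.Size([5, 5, 50])
```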
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised speech methods from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
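A minimal sketch of the linear-probe estimate described in the entry above: train a linear classifier on frozen representations and read off a variational lower bound on mutual information, I(Z; Y) >= H(Y) - CE, where CE is the probe's cross-entropy in nats. The data here are random stand-ins, and in practice the bound is computed on held-out examples.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Frozen representations Z and discrete targets Y (random stand-ins here).
n, dim, n_classes = 2000, 64, 10
Z = torch.randn(n, dim)
Y = torch.randint(0, n_classes, (n,))

probe = nn.Linear(dim, n_classes)               # the linear probe
opt = torch.optim.Adam(probe.parameters(), lr=1e-2)
for _ in range(200):                            # fit the probe on frozen Z
    opt.zero_grad()
    F.cross_entropy(probe(Z), Y).backward()
    opt.step()

with torch.no_grad():
    ce = F.cross_entropy(probe(Z), Y).item()    # probe cross-entropy, in nats

# Variational bound: I(Z; Y) >= H(Y) - CE (use held-out data in practice).
p = torch.bincount(Y, minlength=n_classes).float() / n
h_y = -(p * p.clamp_min(1e-12).log()).sum().item()
print(f"estimated MI lower bound: {max(0.0, h_y - ce):.3f} nats")
```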
- Unified Visual Relationship Detection with Vision and Language Models [89.77838890788638]
This work focuses on training a single visual relationship detector predicting over the union of label spaces from multiple datasets.
We propose UniVRD, a novel bottom-up method for Unified Visual Relationship Detection that leverages vision and language models.
Empirical results on both human-object interaction detection and scene-graph generation demonstrate the competitive performance of our model.
arXiv Detail & Related papers (2023-03-16T00:06:28Z)
- DisARM: Displacement Aware Relation Module for 3D Detection [38.4380420322491]
Displacement Aware Relation Module (DisARM) is a novel neural network module for enhancing the performance of 3D object detection in point cloud scenes.
To find the anchors, we first apply a preliminary relation anchor module with an objectness-aware sampling approach.
This lightweight relation module yields significantly higher object instance detection accuracy when plugged into state-of-the-art detectors.
arXiv Detail & Related papers (2022-03-02T14:49:55Z)
- A Multi-Level Attention Model for Evidence-Based Fact Checking [58.95413968110558]
We present a simple model that can be trained on sequence structures.
Results on a large-scale dataset for Fact Extraction and VERification show that our model outperforms the graph-based approaches.
arXiv Detail & Related papers (2021-06-02T05:40:12Z)
- Semantic Relation Reasoning for Shot-Stable Few-Shot Object Detection [33.25064323136447]
Few-shot object detection is a pressing and long-standing problem due to the inherent long-tail distribution of real-world data.
This work introduces explicit relation reasoning into the learning of novel object detection.
Experiments show that SRR-FSD achieves competitive results at higher shots and, more importantly, significantly better performance when both explicit and implicit shots are scarce.
arXiv Detail & Related papers (2021-03-02T18:04:38Z)
- Generalized Zero-shot Intent Detection via Commonsense Knowledge [5.398580049917152]
We propose RIDE: an intent detection model that leverages commonsense knowledge in an unsupervised fashion to overcome the issue of training data scarcity.
RIDE computes robust and generalizable relationship meta-features that capture deep semantic relationships between utterances and intent labels.
Our extensive experimental analysis on three widely-used intent detection benchmarks shows that relationship meta-features significantly increase the accuracy of detecting both seen and unseen intents.
arXiv Detail & Related papers (2021-02-04T23:36:41Z)
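A toy illustration of relationship meta-features in the spirit of the entry above: count how utterance words connect to an intent label's words through a commonsense graph, per relation type. The miniature graph and scoring rule are invented for illustration; RIDE's actual meta-features are learned and far richer.
```python
# Toy illustration: score how strongly utterance words relate to an intent
# label through a commonsense graph. The mini-graph below is invented.
COMMONSENSE = {
    ("book", "RelatedTo", "reservation"),
    ("flight", "RelatedTo", "travel"),
    ("reservation", "UsedFor", "travel"),
}

def relation_meta_features(utterance_words, label_words):
    """Return counts of commonsense links per relation type (a crude
    stand-in for learned relationship meta-features)."""
    feats = {}
    for head, rel, tail in COMMONSENSE:
        if head in utterance_words and tail in label_words:
            feats[rel] = feats.get(rel, 0) + 1
    return feats

print(relation_meta_features({"book", "a", "flight"}, {"travel", "reservation"}))
# {'RelatedTo': 2} -- "book" and "flight" both link into the label's words
```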
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Detecting weapons and aggressive behavior in live video can enable rapid intervention and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model with a previously proposed model based on an ensemble of simpler neural networks that detect firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Zero-Shot Stance Detection: A Dataset and Model using Generalized Topic Representations [13.153001795077227]
We present a new dataset for zero-shot stance detection that captures a wider range of topics and lexical variation than in previous datasets.
We also propose a new model for stance detection that implicitly captures relationships between topics using generalized topic representations.
arXiv Detail & Related papers (2020-10-07T20:27:12Z)
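One way to realize the generalized topic representations mentioned in the entry above is to cluster embeddings of training topics and represent any topic, seen or unseen, by its nearest cluster centroid. A hedged sketch under that assumption; the embeddings below are random stand-ins for real topic vectors.
```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Stand-ins for embeddings of training topics (e.g., averaged BERT vectors).
train_topic_embs = rng.normal(size=(200, 32)).astype(np.float32)

# Cluster seen topics; centroids act as generalized topic representations.
kmeans = KMeans(n_clusters=10, n_init=10, random_state=0).fit(train_topic_embs)

def generalized_topic_rep(topic_emb):
    """Map any topic embedding (even an unseen topic's) to its centroid."""
    cluster = kmeans.predict(topic_emb[None, :])[0]
    return kmeans.cluster_centers_[cluster]

unseen = rng.normal(size=32).astype(np.float32)   # an unseen topic's embedding
rep = generalized_topic_rep(unseen)               # usable by the stance model
print(rep.shape)                                  # (32,)
```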
- Visual Relationship Detection with Visual-Linguistic Knowledge from Multimodal Representations [103.00383924074585]
Visual relationship detection aims to reason over relationships among salient objects in images.
We propose a novel approach named Visual-Linguistic Representations from Transformers (RVL-BERT).
RVL-BERT performs spatial reasoning with both visual and language commonsense knowledge learned via self-supervised pre-training.
arXiv Detail & Related papers (2020-09-10T16:15:09Z)