Cross-target Stance Detection by Exploiting Target Analytical
Perspectives
- URL: http://arxiv.org/abs/2401.01761v2
- Date: Thu, 4 Jan 2024 01:30:36 GMT
- Title: Cross-target Stance Detection by Exploiting Target Analytical
Perspectives
- Authors: Daijun Ding, Rong Chen, Liwen Jing, Bowen Zhang, Xu Huang, Li Dong,
Xiaowen Zhao, Ge Song
- Abstract summary: Cross-target stance detection (CTSD) is an important task that infers the attitude toward the destination target by utilizing annotated data from the source target.
One important approach in CTSD is to extract domain-invariant features to bridge the knowledge gap between multiple targets.
We propose a Multi-Perspective Prompt-Tuning (MPPT) model for CTSD that uses the analysis perspective as a bridge to transfer knowledge.
- Score: 22.320628580895164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-target stance detection (CTSD) is an important task that infers the
attitude toward the destination target by utilizing annotated data derived from the
source target. One important approach in CTSD is to extract domain-invariant
features to bridge the knowledge gap between multiple targets. However, the
informal and short text structure, together with implicit expressions,
complicates the extraction of domain-invariant knowledge. In this paper, we
propose a Multi-Perspective Prompt-Tuning (MPPT) model for CTSD that uses the
analysis perspective as a bridge to transfer knowledge. First, we develop a
two-stage instruct-based chain-of-thought method (TsCoT) to elicit target
analysis perspectives and provide natural language explanations (NLEs) from
multiple viewpoints by formulating instructions based on a large language model
(LLM). Second, we propose a multi-perspective prompt-tuning framework
(MultiPLN) to fuse the NLEs into the stance predictor. Extensive experimental
results demonstrate the superiority of MPPT over state-of-the-art baseline
methods.
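The following is a minimal Python sketch of the two ideas named in the abstract: a two-stage, instruction-based chain-of-thought prompt in the spirit of TsCoT that elicits natural language explanations (NLEs) from an LLM for several analysis perspectives, and a simple fusion of those NLEs into the input of a prompt-tuned stance predictor. The perspective names, prompt wording, and the complete() callable are illustrative assumptions, not the paper's actual TsCoT instructions or MultiPLN architecture.

from typing import Callable, List

# Sketch under stated assumptions: `complete` is any text-in/text-out LLM call
# supplied by the caller; the perspectives and prompt wording are illustrative,
# not the paper's actual instructions.

PERSPECTIVES = ["sentiment", "background knowledge", "argumentation"]  # assumed viewpoints

def tscot_explanations(complete: Callable[[str], str],
                       text: str, target: str) -> List[str]:
    """Stage 1 elicits a step-by-step analysis from one perspective;
    stage 2 turns that analysis into a short natural language explanation (NLE)."""
    nles = []
    for view in PERSPECTIVES:
        stage1 = (f'Text: "{text}"\n'
                  f'From the {view} perspective, analyze step by step which '
                  f'aspects of the text relate to the target "{target}".')
        analysis = complete(stage1)
        stage2 = (f'Analysis:\n{analysis}\n'
                  f'Based on this analysis, explain in one or two sentences the '
                  f'likely stance of the author toward "{target}".')
        nles.append(complete(stage2))
    return nles

def build_stance_input(text: str, target: str, nles: List[str]) -> str:
    """Fuse the multi-perspective NLEs with the original text for a prompt-tuned
    stance predictor; plain concatenation stands in for MultiPLN's learned fusion."""
    joined = " ".join(nles)
    return f"Text: {text} Target: {target} Explanations: {joined} Stance: [MASK]"

At inference time, a predictor prompt-tuned on the source target would fill the stance slot (e.g. favor, against, or none) for the unseen destination target; in MPPT the plain concatenation above is replaced by the learned multi-perspective fusion described in the paper.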
Related papers
- Localization and Expansion: A Decoupled Framework for Point Cloud Few-shot Semantic Segmentation [39.7657197805346]
Point cloud few-shot semantic segmentation (PC-FSS) aims to segment targets of novel categories in a given query point cloud with only a few annotated support samples.
We propose a simple yet effective framework in the spirit of Decoupled Localization and Expansion (DLE).
DLE, including a structural localization module (SLM) and a self-expansion module (SEM), enjoys several merits.
arXiv Detail & Related papers (2024-08-25T07:34:32Z)
- How to Understand "Support"? An Implicit-enhanced Causal Inference Approach for Weakly-supervised Phrase Grounding [18.97081348819219]
Weakly-supervised Phrase Grounding (WPG) is an emerging task of inferring the fine-grained phrase-region matching.
This paper proposes an Implicit-Enhanced Causal Inference approach to address the challenges of modeling the implicit relations.
arXiv Detail & Related papers (2024-02-29T12:49:48Z)
- A Novel Energy based Model Mechanism for Multi-modal Aspect-Based Sentiment Analysis [85.77557381023617]
We propose a novel framework called DQPSA for multi-modal sentiment analysis.
The PDQ module uses the prompt as both a visual query and a language query to extract prompt-aware visual information.
The EPE module models the boundary pairing of the analysis target from the perspective of an Energy-based Model.
arXiv Detail & Related papers (2023-12-13T12:00:46Z)
- Mutual Information Regularization for Weakly-supervised RGB-D Salient Object Detection [33.210575826086654]
We present a weakly-supervised RGB-D salient object detection model.
We focus on effective multimodal representation learning via inter-modal mutual information regularization.
arXiv Detail & Related papers (2023-06-06T12:36:57Z)
- Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition [57.08108545219043]
Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision.
Existing literature addresses this challenge by employing local-based representation approaches.
This article proposes a novel model, Robust Saliency-aware Distillation (RSaD), for few-shot fine-grained visual recognition.
arXiv Detail & Related papers (2023-05-12T00:13:17Z)
- CLIP the Gap: A Single Domain Generalization Approach for Object Detection [60.20931827772482]
Single Domain Generalization tackles the problem of training a model on a single source domain so that it generalizes to any unseen target domain.
We propose to leverage a pre-trained vision-language model to introduce semantic domain concepts via textual prompts.
We achieve this via a semantic augmentation strategy acting on the features extracted by the detector backbone, as well as a text-based classification loss.
arXiv Detail & Related papers (2023-01-13T12:01:18Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under rigorous theoretical guarantees, our approach enables IB to grasp the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Recent Advances in Embedding Methods for Multi-Object Tracking: A Survey [71.10448142010422]
Multi-object tracking (MOT) aims to associate target objects across video frames in order to obtain entire moving trajectories.
Embedding methods play an essential role in object location estimation and temporal identity association in MOT.
We first conduct a comprehensive overview with in-depth analysis of embedding methods in MOT from seven different perspectives.
arXiv Detail & Related papers (2022-05-22T06:54:33Z)
- Detecting Human-Object Interactions with Object-Guided Cross-Modal Calibrated Semantics [6.678312249123534]
We aim to boost end-to-end models with object-guided statistical priors.
We propose to utilize a Verb Semantic Model (VSM) and use semantic aggregation to profit from this object-guided hierarchy.
The above modules combine to form the Object-guided Cross-modal Network (OCN).
arXiv Detail & Related papers (2022-02-01T07:39:04Z)
- SOSD-Net: Joint Semantic Object Segmentation and Depth Estimation from Monocular Images [94.36401543589523]
We introduce the concept of semantic objectness to exploit the geometric relationship of these two tasks.
We then propose a Semantic Object and Depth Estimation Network (SOSD-Net) based on the objectness assumption.
To the best of our knowledge, SOSD-Net is the first network that exploits the geometry constraint for simultaneous monocular depth estimation and semantic segmentation.
arXiv Detail & Related papers (2021-01-19T02:41:03Z)