Few-shot Learning for Cross-Target Stance Detection by Aggregating
Multimodal Embeddings
- URL: http://arxiv.org/abs/2301.04535v2
- Date: Fri, 31 Mar 2023 12:39:09 GMT
- Title: Few-shot Learning for Cross-Target Stance Detection by Aggregating
Multimodal Embeddings
- Authors: Parisa Jamadi Khiabani, Arkaitz Zubiaga
- Abstract summary: We introduce CT-TN, a novel model that aggregates multimodal embeddings from both textual and network features of the data.
We conduct experiments in a few-shot cross-target scenario on six different combinations of source-destination target pairs.
Experiments with different numbers of shots show that CT-TN can outperform other models after seeing 300 instances of the destination target.
- Score: 16.39344929765961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the increasing popularity of the stance detection task, existing
approaches are predominantly limited to using the textual content of social
media posts for the classification, overlooking the social nature of the task.
The stance detection task becomes particularly challenging in cross-target
classification scenarios, where even in few-shot training settings the model
needs to predict the stance towards new targets for which it has only seen a
few relevant samples during training. To address cross-target stance detection
in social media by leveraging the social nature of the task, we
introduce CT-TN, a novel model that aggregates multimodal embeddings derived
from both textual and network features of the data. We conduct experiments in a
few-shot cross-target scenario on six different combinations of
source-destination target pairs. By comparing CT-TN with state-of-the-art
cross-target stance detection models, we demonstrate the effectiveness of our
model by achieving average performance improvements ranging from 11% to 21%
across different baseline models. Experiments with different numbers of shots
show that CT-TN can outperform other models after seeing 300 instances of the
destination target. Further, ablation experiments demonstrate the positive
contribution of each of the components of CT-TN towards the final performance.
We further analyse the network interactions between social media users, which
reveal the potential of using social features for cross-target stance
detection.
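To make the aggregation idea concrete, here is a minimal sketch of combining per-modality stance predictions from a text embedding and a user-network embedding. It is not the authors' implementation: the encoder dimensions, the soft-voting aggregation, and all identifiers (MultimodalStanceClassifier, text_emb, net_emb) are assumptions for illustration.

```python
# Hypothetical sketch of multimodal aggregation for stance detection.
# The abstract only states that CT-TN aggregates embeddings from textual and
# network features; the heads and soft-voting scheme below are assumptions.
import torch
import torch.nn as nn

class MultimodalStanceClassifier(nn.Module):
    def __init__(self, text_dim=768, net_dim=128, hidden=256, num_classes=3):
        super().__init__()
        # One classification head per modality; a real system would plug in a
        # pretrained text encoder and graph-based user embeddings upstream.
        self.text_head = nn.Sequential(
            nn.Linear(text_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_classes))
        self.net_head = nn.Sequential(
            nn.Linear(net_dim, hidden), nn.ReLU(), nn.Linear(hidden, num_classes))

    def forward(self, text_emb, net_emb):
        # Aggregate by averaging per-modality class probabilities (soft voting);
        # hard majority voting over per-modality predictions is another option.
        p_text = self.text_head(text_emb).softmax(dim=-1)
        p_net = self.net_head(net_emb).softmax(dim=-1)
        return (p_text + p_net) / 2

model = MultimodalStanceClassifier()
text_emb = torch.randn(4, 768)  # e.g. [CLS] embeddings of 4 posts
net_emb = torch.randn(4, 128)   # e.g. node2vec-style embeddings of the authors
stance_probs = model(text_emb, net_emb)  # shape (4, num_classes)
```

Soft voting keeps the sketch differentiable end to end; either voting variant is consistent with "aggregating" as used in the abstract.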
Related papers
- Cross-Target Stance Detection: A Survey of Techniques, Datasets, and Challenges [7.242609314791262]
Cross-target stance detection is the task of determining the viewpoint expressed in a text towards a given target that the model has not seen during training.
With the increasing need to analyse and mine viewpoints and opinions online, the task has recently seen a significant surge in interest.
This review paper examines the advancements in cross-target stance detection over the last decade.
arXiv Detail & Related papers (2024-09-20T15:49:14Z)
- Investigating the Robustness of Modelling Decisions for Few-Shot Cross-Topic Stance Detection: A Preregistered Study [3.9394231697721023]
In this paper, we investigate the robustness of operationalization choices for few-shot stance detection.
We compare stance task definitions (Pro/Con versus Same Side Stance), two LLM architectures (bi-encoding versus cross-encoding; sketched after this entry), and the addition of Natural Language Inference knowledge.
Some of our hypotheses and claims from earlier work are confirmed, while others yield more inconsistent results.
arXiv Detail & Related papers (2024-04-05T09:48:00Z)
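As a rough illustration of the two architectures compared in the entry above: a bi-encoder embeds text and target independently and combines the vectors afterwards, while a cross-encoder encodes the concatenated pair so attention can flow between text and target. The toy encode pooling function and all dimensions below are placeholders, not details from the paper.

```python
# Hypothetical bi-encoding vs. cross-encoding sketch for stance detection.
import torch
import torch.nn as nn

dim, num_classes = 256, 2  # e.g. Pro/Con

def encode(tokens: torch.Tensor) -> torch.Tensor:
    """Placeholder for a pretrained encoder returning one vector per input."""
    return tokens.mean(dim=1)  # toy pooling over a (batch, seq, dim) tensor

# Bi-encoding: text and target are embedded separately, then compared.
bi_head = nn.Linear(2 * dim, num_classes)
def bi_score(text_tokens, target_tokens):
    return bi_head(torch.cat([encode(text_tokens), encode(target_tokens)], dim=-1))

# Cross-encoding: text and target are concatenated *before* encoding,
# so the encoder can attend across the pair.
cross_head = nn.Linear(dim, num_classes)
def cross_score(text_tokens, target_tokens):
    pair = torch.cat([text_tokens, target_tokens], dim=1)  # concat along sequence
    return cross_head(encode(pair))

text = torch.randn(4, 32, dim)   # 4 posts, 32 "token" vectors each
target = torch.randn(4, 8, dim)  # 4 targets, 8 "token" vectors each
print(bi_score(text, target).shape, cross_score(text, target).shape)
```

Bi-encoding allows target embeddings to be precomputed and cached, while cross-encoding typically trades that efficiency for richer text-target interaction.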
- MTP: Advancing Remote Sensing Foundation Model via Multi-Task Pretraining [73.81862342673894]
Foundation models have reshaped the landscape of Remote Sensing (RS) by enhancing various image interpretation tasks.
Transferring the pretrained models to downstream tasks may suffer from task discrepancy, because pretraining is formulated as an image classification or object discrimination task.
We conduct multi-task supervised pretraining on the SAMRS dataset, encompassing semantic segmentation, instance segmentation, and rotated object detection.
Our models are finetuned on various RS downstream tasks, such as scene classification, horizontal and rotated object detection, semantic segmentation, and change detection.
arXiv Detail & Related papers (2024-03-20T09:17:22Z)
- Domain-adaptive Person Re-identification without Cross-camera Paired Samples [12.041823465553875]
Cross-camera pedestrian data collected from long-distance scenes often lacks positive sample pairs.
It is extremely challenging to achieve cross-region pedestrian identity matching using only cross-camera negative samples.
A novel domain-adaptive person re-ID method that focuses on cross-camera consistent discriminative feature learning is proposed.
arXiv Detail & Related papers (2023-07-13T02:42:28Z)
- Unified Visual Relationship Detection with Vision and Language Models [89.77838890788638]
This work focuses on training a single visual relationship detector predicting over the union of label spaces from multiple datasets.
We propose UniVRD, a novel bottom-up method for Unified Visual Relationship Detection by leveraging vision and language models.
Empirical results on both human-object interaction detection and scene-graph generation demonstrate the competitive performance of our model.
arXiv Detail & Related papers (2023-03-16T00:06:28Z)
- Instance-Level Relative Saliency Ranking with Graph Reasoning [126.09138829920627]
We present a novel unified model to segment salient instances and infer relative saliency rank order.
A novel loss function is also proposed to effectively train the saliency ranking branch.
Experimental results demonstrate that our proposed model is more effective than previous methods.
arXiv Detail & Related papers (2021-07-08T13:10:42Z)
- CDN-MEDAL: Two-stage Density and Difference Approximation Framework for Motion Analysis [3.337126420148156]
We propose a novel, two-stage method of change detection with two convolutional neural networks.
Our two-stage framework contains only approximately 3.5K parameters in total, yet still converges rapidly on intricate motion patterns.
arXiv Detail & Related papers (2021-06-07T16:39:42Z)
- TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
- End-to-End 3D Multi-Object Tracking and Trajectory Forecasting [34.68114553744956]
We propose a unified solution for 3D MOT and trajectory forecasting.
We employ a feature interaction technique by introducing Graph Neural Networks.
We also use a diversity sampling function to improve the quality and diversity of our forecasted trajectories.
arXiv Detail & Related papers (2020-08-25T16:54:46Z)
- CSI: Novelty Detection via Contrastive Learning on Distributionally Shifted Instances [77.28192419848901]
We propose a simple yet effective method named contrasting shifted instances (CSI); a minimal sketch of the contrastive idea follows this entry.
In addition to contrasting a given sample with other instances as in conventional contrastive learning methods, our training scheme contrasts the sample with distributionally-shifted augmentations of itself.
Our experiments demonstrate the superiority of our method under various novelty detection scenarios.
arXiv Detail & Related papers (2020-07-16T08:32:56Z)
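As referenced in the CSI entry above, here is a minimal sketch of contrasting distributionally-shifted instances, under two assumptions the summary does not pin down: 90-degree rotation as the shifting transform, and an NT-Xent-style loss that treats each rotated copy as a distinct instance to be pushed apart. None of this code is from the paper.

```python
# Hypothetical CSI-style training step: rotations of an image are treated as
# *distinct* instances, so the contrastive loss separates them rather than
# pulling them together as conventional augmentations would.
import torch
import torch.nn.functional as F

def shifted_views(x: torch.Tensor) -> torch.Tensor:
    """Stack the four 90-degree rotations of each image as separate instances."""
    return torch.cat([torch.rot90(x, k, dims=(2, 3)) for k in range(4)], dim=0)

def nt_xent(z: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent over 2N views: rows [0, N) and [N, 2N) are paired positives."""
    z = F.normalize(z, dim=1)
    n = z.shape[0] // 2
    sim = z @ z.t() / temperature
    sim.fill_diagonal_(float("-inf"))           # exclude self-similarity
    targets = torch.arange(z.shape[0]).roll(n)  # positive = the other view
    return F.cross_entropy(sim, targets)

# Usage with a toy encoder: two standard (noise) augmentations per instance,
# applied after the distribution-shifting rotations.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
batch = torch.randn(8, 3, 32, 32)
x = shifted_views(batch)  # 32 instances after 4 rotations
z1 = encoder(x + 0.1 * torch.randn_like(x))
z2 = encoder(x + 0.1 * torch.randn_like(x))
loss = nt_xent(torch.cat([z1, z2], dim=0))
```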
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first-stage Matching-FCOS network and a second-stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)