Guiding Computational Stance Detection with Expanded Stance Triangle
Framework
- URL: http://arxiv.org/abs/2305.19845v1
- Date: Wed, 31 May 2023 13:33:29 GMT
- Title: Guiding Computational Stance Detection with Expanded Stance Triangle
Framework
- Authors: Zhengyuan Liu, Yong Keong Yap, Hai Leong Chieu, Nancy F. Chen
- Abstract summary: Stance detection determines whether the author of a piece of text is in favor of, against, or neutral towards a specified target.
We decompose the stance detection task from a linguistic perspective, and investigate key components and inference paths in this task.
- Score: 25.2980607215715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stance detection determines whether the author of a piece of text is in favor
of, against, or neutral towards a specified target, and can be used to gain
valuable insights into social media. The ubiquitous indirect referral of
targets makes this task challenging, as it requires computational solutions to
model semantic features and infer the corresponding implications from a literal
statement. Moreover, the limited amount of available training data leads to
subpar performance in out-of-domain and cross-target scenarios, as data-driven
approaches are prone to rely on superficial and domain-specific features. In
this work, we decompose the stance detection task from a linguistic
perspective, and investigate key components and inference paths in this task.
The stance triangle is a generic linguistic framework previously proposed to
describe the fundamental ways people express their stance. We further expand it
by characterizing the relationship between explicit and implicit objects. We
then use the framework to extend one single training corpus with additional
annotation. Experimental results show that strategically-enriched data can
significantly improve the performance on out-of-domain and cross-target
evaluation.
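As a concrete illustration of the task framing described in the abstract (not the authors' expanded stance triangle method), stance detection is commonly cast as three-way classification over a (statement, target) pair. The sketch below assumes a Hugging Face encoder with a freshly initialized classification head; the model name and label order are placeholder choices, and the head would need fine-tuning on labeled stance data before its predictions are meaningful.

```python
# Minimal stance-detection baseline sketch (illustrative only; not the
# method from this paper). Model name and label order are assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-uncased"          # placeholder encoder choice
LABELS = ["against", "favor", "neutral"]  # assumed label order

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

def predict_stance(text: str, target: str) -> str:
    # Encode the statement and the stance target as a sentence pair so the
    # classifier can condition its prediction on the target.
    inputs = tokenizer(text, target, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

# Note: the classification head here is randomly initialized; fine-tune on
# favor/against/neutral annotations before relying on the output.
print(predict_stance("We cannot keep burning coal forever.", "fossil fuel industry"))
```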
Related papers
- A Bottom-Up Approach to Class-Agnostic Image Segmentation [4.086366531569003]
We present a novel bottom-up formulation for addressing the class-agnostic segmentation problem.
We supervise our network directly on the projective sphere of its feature space.
Our bottom-up formulation exhibits exceptional generalization capability, even when trained on datasets designed for class-based segmentation.
arXiv Detail & Related papers (2024-09-20T17:56:02Z)
- Topic-driven Distant Supervision Framework for Macro-level Discourse Parsing [72.14449502499535]
The task of analyzing the internal rhetorical structure of texts is a challenging problem in natural language processing.
Despite the recent advances in neural models, the lack of large-scale, high-quality corpora for training remains a major obstacle.
Recent studies have attempted to overcome this limitation by using distant supervision.
arXiv Detail & Related papers (2023-05-23T07:13:51Z)
- From Patches to Objects: Exploiting Spatial Reasoning for Better Visual Representations [2.363388546004777]
We propose a novel auxiliary pretraining method that is based on spatial reasoning.
Our proposed method takes advantage of a more flexible formulation of contrastive learning by introducing spatial reasoning as an auxiliary task for discriminative self-supervised methods.
arXiv Detail & Related papers (2023-05-21T07:46:46Z)
- Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition [57.08108545219043]
Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision.
Existing literature addresses this challenge by employing local-based representation approaches.
This article proposes a novel model, Robust Saliency-aware Distillation (RSaD), for few-shot fine-grained visual recognition.
arXiv Detail & Related papers (2023-05-12T00:13:17Z)
- Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis [26.974822569543786]
We propose a soft prompt-based joint learning method for cross-domain aspect term extraction.
By incorporating external linguistic features, the proposed method learns domain-invariant representations between source and target domains.
Experiments on benchmark datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2023-03-01T20:33:37Z)
- CLIP the Gap: A Single Domain Generalization Approach for Object Detection [60.20931827772482]
Single Domain Generalization tackles the problem of training a model on a single source domain so that it generalizes to any unseen target domain.
We propose to leverage a pre-trained vision-language model to introduce semantic domain concepts via textual prompts.
We achieve this via a semantic augmentation strategy acting on the features extracted by the detector backbone, as well as a text-based classification loss.
arXiv Detail & Related papers (2023-01-13T12:01:18Z)
- Contextual information integration for stance detection via cross-attention [59.662413798388485]
Stance detection deals with identifying an author's stance towards a target.
Most existing stance detection models are limited because they do not consider relevant contextual information.
We propose an approach to integrate contextual information as text.
arXiv Detail & Related papers (2022-11-03T15:04:29Z)
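As a generic sketch of the idea in the entry above, integrating contextual information as additional text, one minimal option is to let statement token embeddings cross-attend to context token embeddings. The module below is an assumption-level illustration in plain PyTorch, not the specific cross-attention architecture of that paper; the embedding dimension and head count are arbitrary.

```python
# Generic context cross-attention sketch (illustrative assumption, not the
# model from the paper above). Statement tokens query context tokens.
import torch
import torch.nn as nn

class ContextCrossAttention(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, statement: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # statement: (batch, s_len, dim) token embeddings of the post
        # context:   (batch, c_len, dim) token embeddings of contextual text
        attended, _ = self.attn(query=statement, key=context, value=context)
        # Residual connection preserves the original statement signal.
        return self.norm(statement + attended)

# Toy usage with random embeddings standing in for encoder outputs.
layer = ContextCrossAttention()
fused = layer(torch.randn(2, 12, 256), torch.randn(2, 30, 256))
print(fused.shape)  # torch.Size([2, 12, 256])
```

The fused representation could then feed a stance classifier such as the three-way baseline sketched after the abstract above.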
- Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with Point Supervision via Active Selection [81.703478548177]
Training models dedicated to semantic segmentation requires a large amount of pixel-wise annotated data.
Unsupervised domain adaptation approaches aim at aligning the feature distributions between the labeled source and the unlabeled target data.
Previous works attempted to include human interactions in this process under the form of sparse single-pixel annotations in the target data.
We propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
arXiv Detail & Related papers (2022-06-01T01:52:28Z)
- Phrase-Based Affordance Detection via Cyclic Bilateral Interaction [17.022853987801877]
We explore perceiving affordance from a vision-language perspective and consider the challenging phrase-based affordance detection problem.
We propose a cyclic bilateral consistency enhancement network (CBCE-Net) to align language and vision features progressively.
Specifically, the presented CBCE-Net consists of a mutual guided vision-language module that updates the common features of vision and language in a progressive manner, and a cyclic interaction module (CIM) that facilitates the perception of possible interaction with objects in a cyclic manner.
arXiv Detail & Related papers (2022-02-24T13:02:27Z)
- Domain Adaptive Semantic Segmentation with Self-Supervised Depth Estimation [84.34227665232281]
Domain adaptation for semantic segmentation aims to improve the model performance in the presence of a distribution shift between source and target domain.
We leverage the guidance from self-supervised depth estimation, which is available on both domains, to bridge the domain gap.
We demonstrate the effectiveness of our proposed approach on the benchmark tasks SYNTHIA-to-Cityscapes and GTA-to-Cityscapes.
arXiv Detail & Related papers (2021-04-28T07:47:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.