GRACE: Gradient Harmonized and Cascaded Labeling for Aspect-based
Sentiment Analysis
- URL: http://arxiv.org/abs/2009.10557v2
- Date: Fri, 25 Sep 2020 03:19:54 GMT
- Title: GRACE: Gradient Harmonized and Cascaded Labeling for Aspect-based
Sentiment Analysis
- Authors: Huaishao Luo, Lei Ji, Tianrui Li, Nan Duan, Daxin Jiang
- Abstract summary: We propose a GRadient hArmonized and CascadEd labeling model (GRACE) to address label imbalance and the interaction between aspect terms in aspect-based sentiment analysis.
The proposed model achieves consistent improvements on multiple benchmark datasets and yields state-of-the-art results.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we focus on the imbalance issue, which is rarely
studied in aspect term extraction and aspect sentiment classification when
they are treated as sequence labeling tasks. In addition, previous works
usually ignore the interaction between aspect terms when labeling
polarities. We propose a GRadient hArmonized and CascadEd labeling model
(GRACE) to solve these problems. Specifically, a cascaded labeling module
is developed to enhance the interchange between aspect terms and to improve
attention to sentiment tokens when labeling sentiment polarities. The
polarity sequence is designed to depend on the generated aspect term
labels. To alleviate the imbalance issue, we extend the gradient harmonized
mechanism used in object detection to aspect-based sentiment analysis by
dynamically adjusting the weight of each label. The proposed GRACE adopts a
post-pretrained BERT as its backbone. Experimental results demonstrate that
the proposed model achieves consistent improvements on multiple benchmark
datasets and yields state-of-the-art results.
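To make the dynamic label weighting concrete, the sketch below shows how a gradient harmonized weighting scheme could be applied to token-level labels in a sequence tagger, in the spirit of the GHM used in object detection. It is an illustrative NumPy approximation, not the authors' released implementation; the function name ghm_label_weights, the bin count of 10, and the use of 1 - p(gold label) as the gradient-norm proxy are assumptions made for this example.

# A minimal, hypothetical sketch of gradient-harmonized label weighting for
# sequence labeling, adapted from the GHM idea used in object detection.
# Not the GRACE authors' code; names and hyperparameters are illustrative.
import numpy as np

def ghm_label_weights(probs, targets, num_bins=10, eps=1e-12):
    """Return one weight per token label.

    probs   : (num_tokens, num_labels) softmax outputs of the tagger.
    targets : (num_tokens,) gold label ids.
    Tokens whose gradient norms fall into densely populated bins (e.g. the
    dominant 'O' tag) are down-weighted; rare or hard tokens are up-weighted.
    """
    num_tokens = probs.shape[0]
    # Gradient-norm proxy: distance of the gold-label probability from 1.
    g = 1.0 - probs[np.arange(num_tokens), targets]        # values in [0, 1]
    # Histogram the gradient norms into equal-width bins.
    bin_ids = np.minimum((g * num_bins).astype(int), num_bins - 1)
    bin_counts = np.bincount(bin_ids, minlength=num_bins)
    # Gradient density of a token = population of the bin it falls into.
    density = bin_counts[bin_ids].astype(float)
    # Inverse-density weight, normalized so the weights average roughly 1.
    return num_tokens / (density * num_bins + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    probs = rng.dirichlet(np.ones(5), size=32)     # fake tagger outputs
    targets = rng.integers(0, 5, size=32)          # fake gold labels
    weights = ghm_label_weights(probs, targets)
    token_ce = -np.log(probs[np.arange(32), targets])
    print(round(float(np.mean(weights * token_ce)), 4))

Scaling each token's cross-entropy loss by these weights suppresses the bins crowded with easy, over-represented labels and boosts rare aspect and sentiment labels, which is the imbalance-correction effect described in the abstract.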
Related papers
- Adaptive Collaborative Correlation Learning-based Semi-Supervised Multi-Label Feature Selection [25.195711274756334]
We propose an Adaptive Collaborative Correlation lEarning-based Semi-Supervised Multi-label Feature Selection (Access-MFS) method to address these issues.
Specifically, a generalized regression model equipped with an extended uncorrelated constraint is introduced to select discriminative yet irrelevant features.
Instance correlation and label correlation are integrated into the proposed regression model to adaptively learn both the sample similarity graph and the label similarity graph.
arXiv Detail & Related papers (2024-06-18T01:47:38Z)
- STRAP: Structured Object Affordance Segmentation with Point Supervision [20.56373848741831]
We study affordance segmentation with point supervision, wherein the setting inherits an unexplored dual affinity: spatial affinity and label affinity.
We devise a dense prediction network that enhances label relations by effectively densifying labels in a new domain.
In experiments, we benchmark our method on the challenging CAD120 dataset, showing significant performance gains over prior methods.
arXiv Detail & Related papers (2023-04-17T17:59:49Z)
- Exploiting Completeness and Uncertainty of Pseudo Labels for Weakly Supervised Video Anomaly Detection [149.23913018423022]
Weakly supervised video anomaly detection aims to identify abnormal events in videos using only video-level labels.
Two-stage self-training methods have achieved significant improvements by self-generating pseudo labels.
We propose an enhancement framework that exploits completeness and uncertainty properties for effective self-training.
arXiv Detail & Related papers (2022-12-08T05:53:53Z)
- Category-Adaptive Label Discovery and Noise Rejection for Multi-label Image Recognition with Partial Positive Labels [78.88007892742438]
Training multi-label models with partial positive labels (MLR-PPL) attracts increasing attention.
Previous works regard unknown labels as negative and adopt traditional MLR algorithms.
We propose to explore semantic correlation among different images to facilitate the MLR-PPL task.
arXiv Detail & Related papers (2022-11-15T02:11:20Z)
- Uncertain Label Correction via Auxiliary Action Unit Graphs for Facial Expression Recognition [46.99756911719854]
We achieve uncertain label correction of facial expressions using auxiliary action unit (AU) graphs, a method called ULC-AG.
Experiments show that our ULC-AG achieves 89.31% and 61.57% accuracy on the RAF-DB and AffectNet datasets, respectively.
arXiv Detail & Related papers (2022-04-23T11:09:43Z)
- Effective Token Graph Modeling using a Novel Labeling Strategy for Structured Sentiment Analysis [39.770652220521384]
The state-of-the-art model for structured sentiment analysis casts the task as a dependency parsing problem.
Label proportions for span prediction and span relation prediction are imbalanced.
Two nodes in a dependency graph cannot have multiple arcs, so some overlapping sentiments cannot be recognized.
arXiv Detail & Related papers (2022-03-21T08:23:03Z)
- Label Noise in Adversarial Training: A Novel Perspective to Study Robust Overfitting [45.58217741522973]
We show that label noise exists in adversarial training.
Such label noise is due to the mismatch between the true label distribution of adversarial examples and the labels inherited from clean examples.
We propose a method to automatically calibrate the labels to address the label noise and robust overfitting.
arXiv Detail & Related papers (2021-10-07T01:15:06Z)
- MatchGAN: A Self-Supervised Semi-Supervised Conditional Generative Adversarial Network [51.84251358009803]
We present a novel self-supervised learning approach for conditional generative adversarial networks (GANs) under a semi-supervised setting.
We perform augmentation by randomly sampling sensible labels from the label space of the few labelled examples available.
Our method surpasses the baseline with only 20% of the labelled examples used to train the baseline.
arXiv Detail & Related papers (2020-06-11T17:14:55Z)
- Few-shot Slot Tagging with Collapsed Dependency Transfer and Label-enhanced Task-adaptive Projection Network [61.94394163309688]
We propose a Label-enhanced Task-Adaptive Projection Network (L-TapNet) based on the state-of-the-art few-shot classification model, TapNet.
Experimental results show that our model significantly outperforms the strongest few-shot learning baseline by 14.64 F1 points in the one-shot setting.
arXiv Detail & Related papers (2020-06-10T07:50:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.