Attention Transfer Network for Aspect-level Sentiment Classification
- URL: http://arxiv.org/abs/2010.12156v1
- Date: Fri, 23 Oct 2020 04:26:33 GMT
- Title: Attention Transfer Network for Aspect-level Sentiment Classification
- Authors: Fei Zhao, Zhen Wu, Xinyu Dai
- Abstract summary: Aspect-level sentiment classification (ASC) aims to detect the sentiment polarity of a given opinion target in a sentence.
Data scarcity sometimes causes the attention mechanism to fail to focus on the sentiment words corresponding to the target.
We propose a novel Attention Transfer Network (ATN) in this paper, which can successfully exploit attention knowledge from document-level sentiment classification datasets.
- Score: 30.704053194980528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspect-level sentiment classification (ASC) aims to detect the sentiment
polarity of a given opinion target in a sentence. In neural network-based
methods for ASC, most works employ the attention mechanism to capture the
corresponding sentiment words of the opinion target, then aggregate them as
evidence to infer the sentiment of the target. However, aspect-level datasets
are relatively small-scale due to the complexity of annotation. Data scarcity
sometimes causes the attention mechanism to fail to focus on the sentiment
words corresponding to the target, which ultimately weakens the performance of
neural models. To address this issue, we propose a novel
Attention Transfer Network (ATN) in this paper, which can successfully exploit
attention knowledge from resource-rich document-level sentiment classification
datasets to improve the attention capability of the aspect-level sentiment
classification task. In the ATN model, we design two different methods to
transfer attention knowledge and conduct experiments on two ASC benchmark
datasets. Extensive experimental results show that our methods consistently
outperform state-of-the-art approaches. Further analysis also validates the
effectiveness of ATN.
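As a rough illustration of the idea in the abstract, the sketch below shows target-conditioned attention and one plausible attention-transfer loss. This is not the authors' implementation: the function names, the dot-product scoring, and the KL-based transfer loss are all assumptions chosen for a minimal, self-contained example.

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(scores - scores.max())
    return e / e.sum()

def target_attention(word_vecs, target_vec):
    """Attend over context words conditioned on the opinion target,
    then aggregate them into a single evidence vector."""
    scores = word_vecs @ target_vec        # one score per context word
    weights = softmax(scores)              # attention distribution
    evidence = weights @ word_vecs         # weighted sum of word vectors
    return weights, evidence

def attention_transfer_loss(student_attn, teacher_attn, eps=1e-9):
    """KL divergence pulling the aspect-level (student) attention toward
    a document-level (teacher) attention -- one plausible transfer loss."""
    return float(np.sum(teacher_attn *
                        np.log((teacher_attn + eps) / (student_attn + eps))))

rng = np.random.default_rng(0)
words = rng.normal(size=(5, 8))            # 5 context words, 8-dim embeddings
target = rng.normal(size=8)                # embedding of the opinion target
weights, evidence = target_attention(words, target)
teacher = softmax(rng.normal(size=5))      # stand-in for teacher attention
loss = attention_transfer_loss(weights, teacher)
```

In this framing, the transfer loss would be added to the usual classification objective so that attention learned on resource-rich document-level data regularizes the attention of the small aspect-level model.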
Related papers
- Supervised Gradual Machine Learning for Aspect Category Detection [0.9857683394266679]
Aspect Category Detection (ACD) aims to identify implicit and explicit aspects in a given review sentence.
We propose a novel approach to tackle the ACD task by combining Deep Neural Networks (DNNs) with Gradual Machine Learning (GML) in a supervised setting.
arXiv Detail & Related papers (2024-04-08T07:21:46Z)
- Regularization Through Simultaneous Learning: A Case Study on Plant Classification [0.0]
This paper introduces Simultaneous Learning, a regularization approach drawing on principles of Transfer Learning and Multi-task Learning.
We leverage auxiliary datasets with the target dataset, the UFOP-HVD, to facilitate simultaneous classification guided by a customized loss function.
Remarkably, our approach demonstrates superior performance over models without regularization.
arXiv Detail & Related papers (2023-05-22T19:44:57Z) - The Overlooked Classifier in Human-Object Interaction Recognition [82.20671129356037]
We encode the semantic correlation among classes into the classification head by initializing the weights with language embeddings of HOIs.
We propose a new loss named LSE-Sign to enhance multi-label learning on a long-tailed dataset.
Our simple yet effective method enables detection-free HOI classification, outperforming state-of-the-art methods that require object detection and human pose estimation by a clear margin.
arXiv Detail & Related papers (2022-03-10T23:35:00Z) - Boosting the Generalization Capability in Cross-Domain Few-shot Learning
via Noise-enhanced Supervised Autoencoder [23.860842627883187]
We teach the model to capture broader variations of the feature distributions with a novel noise-enhanced supervised autoencoder (NSAE).
NSAE trains the model by jointly reconstructing inputs and predicting the labels of inputs as well as their reconstructed pairs.
We also take advantage of the NSAE structure and propose a two-step fine-tuning procedure that achieves better adaptation and improves classification performance in the target domain.
arXiv Detail & Related papers (2021-08-11T04:45:56Z) - How Knowledge Graph and Attention Help? A Quantitative Analysis into
Bag-level Relation Extraction [66.09605613944201]
We quantitatively evaluate the effect of attention and Knowledge Graphs (KGs) on bag-level relation extraction (RE).
We find that (1) higher attention accuracy may lead to worse performance, as it may harm the model's ability to extract entity mention features; (2) the performance of attention is largely influenced by various noise distribution patterns; and (3) KG-enhanced attention indeed improves RE performance, though not through enhanced attention but by incorporating entity priors.
arXiv Detail & Related papers (2021-07-26T09:38:28Z) - Adversarial Feature Augmentation and Normalization for Visual
Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to improve the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z) - Opinion Transmission Network for Jointly Improving Aspect-oriented
Opinion Words Extraction and Sentiment Classification [56.893393134328996]
Aspect-level sentiment classification (ALSC) and aspect-oriented opinion words extraction (AOWE) are two closely related aspect-based sentiment analysis subtasks.
We propose a novel joint model, Opinion Transmission Network (OTN), to exploit the potential bridge between ALSC and AOWE.
arXiv Detail & Related papers (2020-11-01T11:00:19Z) - Deep Reinforced Attention Learning for Quality-Aware Visual Recognition [73.15276998621582]
We build upon the weakly-supervised generation mechanism of intermediate attention maps in convolutional neural networks.
We introduce a meta critic network to evaluate the quality of attention maps in the main network.
arXiv Detail & Related papers (2020-07-13T02:44:38Z) - Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
We propose a new framework called Ventral-Dorsal Networks (VDNets).
Inspired by the structure of the human visual system, we propose the integration of a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z) - Few-Shot Relation Learning with Attention for EEG-based Motor Imagery
Classification [11.873435088539459]
Brain-Computer Interfaces (BCI) based on Electroencephalography (EEG) signals have received a lot of attention.
Motor imagery (MI) data can be used to aid rehabilitation as well as in autonomous driving scenarios.
Classification of MI signals is vital for EEG-based BCI systems.
arXiv Detail & Related papers (2020-03-03T02:34:44Z) - Investigating Typed Syntactic Dependencies for Targeted Sentiment
Classification Using Graph Attention Neural Network [10.489983726592303]
We investigate a novel relational graph attention network that integrates typed syntactic dependency information.
Results show that our method can effectively leverage label information to improve targeted sentiment classification performance.
arXiv Detail & Related papers (2020-02-22T11:17:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.