Adversarial Training for Aspect-Based Sentiment Analysis with BERT
- URL: http://arxiv.org/abs/2001.11316v4
- Date: Fri, 23 Oct 2020 07:39:17 GMT
- Title: Adversarial Training for Aspect-Based Sentiment Analysis with BERT
- Authors: Akbar Karimi, Leonardo Rossi, Andrea Prati
- Abstract summary: We propose a novel architecture called BERT Adrial Training (BAT) to utilize adversarial training in sentiment analysis.
The proposed model outperforms post-trained BERT in both tasks.
To the best of our knowledge, this is the first study on the application of adversarial training in ABSA.
- Score: 3.5493798890908104
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspect-Based Sentiment Analysis (ABSA) deals with the extraction of
sentiments and their targets. Collecting labeled data for this task in order to
help neural networks generalize better can be laborious and time-consuming. As
an alternative, data similar to real-world examples can be produced
artificially through an adversarial process which is carried out in the
embedding space. Although these examples are not real sentences, they have been
shown to act as a regularization method which can make neural networks more
robust. In this work, we apply adversarial training, which was put forward by
Goodfellow et al. (2014), to the post-trained BERT (BERT-PT) language model
proposed by Xu et al. (2019) on the two major tasks of Aspect Extraction and
Aspect Sentiment Classification in sentiment analysis. After improving the
results of post-trained BERT by an ablation study, we propose a novel
architecture called BERT Adversarial Training (BAT) to utilize adversarial
training in ABSA. The proposed model outperforms post-trained BERT in both
tasks. To the best of our knowledge, this is the first study on the application
of adversarial training in ABSA.
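In this general setting, the adversarial examples are not new sentences but small perturbations of the word embeddings, computed from the gradient of the loss. The following is a minimal PyTorch sketch of that standard recipe (fast-gradient-style perturbation of BERT's embedding matrix); the model choice, epsilon value, and batch layout are illustrative assumptions, not the authors' exact BAT architecture.

```python
# Minimal sketch of adversarial training in the embedding space, in the spirit
# of Goodfellow et al. (2014); NOT the authors' exact BAT setup.
# Assumes `batch` contains input_ids, attention_mask, and labels tensors.
import torch
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained("bert-base-uncased")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
epsilon = 1.0  # hypothetical perturbation size

def adversarial_step(batch):
    # 1) Clean forward/backward pass to obtain gradients on the embedding matrix.
    clean_loss = model(**batch).loss
    clean_loss.backward()

    emb = model.bert.embeddings.word_embeddings.weight
    grad = emb.grad.detach()
    # 2) Perturb the embeddings along the normalized gradient direction.
    delta = epsilon * grad / (grad.norm() + 1e-12)
    emb.data.add_(delta)

    # 3) Adversarial forward/backward pass, then restore the embeddings.
    adv_loss = model(**batch).loss
    adv_loss.backward()
    emb.data.sub_(delta)

    # 4) Update on the accumulated (clean + adversarial) gradients.
    optimizer.step()
    optimizer.zero_grad()
    return clean_loss.item(), adv_loss.item()
```

The clean and adversarial gradients are accumulated before a single optimizer step, so the perturbation acts purely as a regularizer during training.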
Related papers
- BoostAdapter: Improving Vision-Language Test-Time Adaptation via Regional Bootstrapping [64.8477128397529]
We propose a test-time adaptation framework that bridges training-required and training-free approaches.
We maintain a light-weight key-value memory for feature retrieval from instance-agnostic historical samples and instance-aware boosting samples.
We theoretically justify the rationality behind our method and empirically verify its effectiveness on both the out-of-distribution and the cross-domain datasets.
arXiv Detail & Related papers (2024-10-20T15:58:43Z) - Zero-shot prompt-based classification: topic labeling in times of foundation models in German Tweets [1.734165485480267]
We propose a new tool for automatically annotating text using written guidelines without providing training samples.
Our results show that the prompt-based approach is comparable with the fine-tuned BERT but without any annotated training data.
Our findings emphasize the ongoing paradigm shift in the NLP landscape, i.e., the unification of downstream tasks and elimination of the need for pre-labeled training data.
arXiv Detail & Related papers (2024-06-26T10:44:02Z) - Instruction Tuning with Retrieval-based Examples Ranking for Aspect-based Sentiment Analysis [7.458853474864602]
Aspect-based sentiment analysis (ABSA) identifies sentiment information related to specific aspects and provides deeper market insights to businesses and organizations.
Recent studies have proposed using fixed examples for instruction tuning to reformulate ABSA as a generation task.
This study proposes an instruction learning method with retrieval-based example ranking for ABSA tasks.
arXiv Detail & Related papers (2024-05-28T10:39:10Z) - Unsupervised Dense Retrieval with Relevance-Aware Contrastive Pre-Training [81.3781338418574]
We propose relevance-aware contrastive learning.
We consistently improve the SOTA unsupervised Contriever model on the BEIR and open-domain QA retrieval benchmarks.
Our method can not only beat BM25 after further pre-training on the target corpus but also serves as a good few-shot learner.
arXiv Detail & Related papers (2023-06-05T18:20:27Z) - RLIP: Relational Language-Image Pre-training for Human-Object Interaction Detection [32.20132357830726]
Relational Language-Image Pre-training (RLIP) is a strategy for contrastive pre-training that leverages both entity and relation descriptions.
We show the benefits of these contributions, collectively termed RLIP-ParSe, for improved zero-shot, few-shot and fine-tuning HOI detection, as well as increased robustness when learning from noisy annotations.
arXiv Detail & Related papers (2022-09-05T07:50:54Z) - Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis [96.53859361560505]
Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards an aspect.
There always exists severe domain shift between the pretraining and downstream ABSA datasets.
We introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline.
arXiv Detail & Related papers (2021-10-26T04:03:45Z) - Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z) - Understanding Pre-trained BERT for Aspect-based Sentiment Analysis [71.40586258509394]
This paper analyzes the pre-trained hidden representations learned from reviews on BERT for tasks in aspect-based sentiment analysis (ABSA).
It is not clear how the general proxy task of (masked) language model trained on unlabeled corpus without annotations of aspects or opinions can provide important features for downstream tasks in ABSA.
arXiv Detail & Related papers (2020-10-31T02:21:43Z) - Improving BERT Performance for Aspect-Based Sentiment Analysis [3.5493798890908104]
Aspect-Based Sentiment Analysis (ABSA) studies consumer opinions about products on the market.
It involves examining the types of sentiments as well as the sentiment targets expressed in product reviews.
We show that applying the proposed models eliminates the need for further training of the BERT model.
arXiv Detail & Related papers (2020-10-22T13:52:18Z) - Improving BERT Fine-Tuning via Self-Ensemble and Self-Distillation [84.64004917951547]
Fine-tuning pre-trained language models such as BERT has become an effective approach in NLP.
In this paper, we improve the fine-tuning of BERT with two mechanisms: self-ensemble and self-distillation (see the sketch after this list).
arXiv Detail & Related papers (2020-02-24T16:17:12Z)
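The self-ensemble and self-distillation mechanisms mentioned in the last entry are commonly realized by distilling the student into a teacher kept as a moving average of the student's own weights. The sketch below is one hedged reading of that idea; the loss weighting, averaging rule, and function names are assumptions rather than the paper's exact recipe.

```python
# Hedged sketch of self-distillation from a parameter-averaged ("self-ensemble")
# teacher; hyperparameters and the averaging rule are illustrative assumptions.
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    # Teacher starts as a frozen copy of the student.
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

def self_distillation_step(student, teacher, batch, optimizer, alpha=0.5, decay=0.999):
    out = student(**batch)  # supervised loss on the labeled batch
    with torch.no_grad():
        teacher_logits = teacher(**batch).logits
    kd_loss = F.mse_loss(out.logits, teacher_logits)  # match the teacher's logits
    loss = (1.0 - alpha) * out.loss + alpha * kd_loss

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    # Teacher tracks an exponential moving average of the student's parameters.
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(decay).add_(s, alpha=1.0 - decay)
    return loss.item()
```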
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.