SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis
- URL: http://arxiv.org/abs/2005.05635v2
- Date: Wed, 20 May 2020 08:12:22 GMT
- Title: SKEP: Sentiment Knowledge Enhanced Pre-training for Sentiment Analysis
- Authors: Hao Tian, Can Gao, Xinyan Xiao, Hao Liu, Bolei He, Hua Wu, Haifeng
Wang, Feng Wu
- Abstract summary: We introduce Sentiment Knowledge Enhanced Pre-training (SKEP) in order to learn a unified sentiment representation for multiple sentiment analysis tasks.
With the help of automatically-mined knowledge, SKEP conducts sentiment masking and constructs three sentiment knowledge prediction objectives.
Experiments on three kinds of sentiment tasks show that SKEP significantly outperforms strong pre-training baselines.
- Score: 69.80296394461149
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, sentiment analysis has seen remarkable advances with the help of
pre-training approaches. However, sentiment knowledge, such as sentiment words
and aspect-sentiment pairs, is ignored in the process of pre-training, despite
the fact that they are widely used in traditional sentiment analysis
approaches. In this paper, we introduce Sentiment Knowledge Enhanced
Pre-training (SKEP) in order to learn a unified sentiment representation for
multiple sentiment analysis tasks. With the help of automatically-mined
knowledge, SKEP conducts sentiment masking and constructs three sentiment
knowledge prediction objectives, so as to embed sentiment information at the
word, polarity, and aspect level into the pre-trained sentiment representation. In
particular, the prediction of aspect-sentiment pairs is converted into
multi-label classification, aiming to capture the dependency between words in a
pair. Experiments on three kinds of sentiment tasks show that SKEP
significantly outperforms strong pre-training baselines and achieves new
state-of-the-art results on most of the test datasets. We release our code at
https://github.com/baidu/Senta.
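For intuition only, the sketch below illustrates (in PyTorch) the two ideas named in the abstract: masking that is biased towards automatically mined sentiment words, and the aspect-sentiment pair objective recast as multi-label classification over the vocabulary. It is not the released Senta/SKEP implementation; names such as MASK_ID, sentiment_positions, pair_word_ids, and the use of a single sentence-level vector are illustrative assumptions, and the actual objectives, masking ratios, and heads are defined in the paper and the repository above.

```python
# Illustrative sketch only -- not the released Senta/SKEP implementation.
# Assumes sentiment words and aspect-sentiment pairs have already been mined;
# names like MASK_ID, sentiment_positions and pair_word_ids are hypothetical.
import random

import torch
import torch.nn as nn

MASK_ID = 103  # placeholder [MASK] id; depends on the tokenizer actually used


def sentiment_masking(token_ids, sentiment_positions, mask_prob=0.15):
    """Mask tokens, biased towards positions that hold mined sentiment words."""
    masked = token_ids.clone()
    labels = torch.full_like(token_ids, -100)  # -100 is ignored by CrossEntropyLoss
    for pos in sentiment_positions:
        if random.random() < 2 * mask_prob:    # over-sample sentiment-word positions
            labels[pos] = token_ids[pos]
            masked[pos] = MASK_ID
    return masked, labels


class PairPredictionHead(nn.Module):
    """Aspect-sentiment pair objective recast as multi-label classification:
    one sentence-level vector predicts all words of the masked pair at once
    through a sigmoid over the vocabulary."""

    def __init__(self, hidden_size, vocab_size):
        super().__init__()
        self.classifier = nn.Linear(hidden_size, vocab_size)
        self.loss_fn = nn.BCEWithLogitsLoss()

    def forward(self, sentence_vector, pair_word_ids):
        # sentence_vector: (batch, hidden); pair_word_ids: list of word-id lists
        logits = self.classifier(sentence_vector)      # (batch, vocab)
        targets = torch.zeros_like(logits)
        for i, word_ids in enumerate(pair_word_ids):
            targets[i, word_ids] = 1.0                 # multi-hot target for the pair
        return self.loss_fn(logits, targets)
```

The multi-label (sigmoid) head is what lets a single prediction cover both words of a pair at once, which is how the abstract describes capturing the dependency between the words in a pair.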
Related papers
- You Shall Know a Tool by the Traces it Leaves: The Predictability of Sentiment Analysis Tools [74.98850427240464]
We show that sentiment analysis tools disagree on the same dataset.
We show that the sentiment tool used for sentiment annotation can even be predicted from its outcome.
arXiv Detail & Related papers (2024-10-18T17:27:38Z) - How are Prompts Different in Terms of Sensitivity? [50.67313477651395]
We present a comprehensive prompt analysis based on the sensitivity of a function.
We use gradient-based saliency scores to empirically demonstrate how different prompts affect the relevance of input tokens to the output.
We introduce sensitivity-aware decoding which incorporates sensitivity estimation as a penalty term in the standard greedy decoding.
arXiv Detail & Related papers (2023-11-13T10:52:01Z) - Sentiment-Aware Word and Sentence Level Pre-training for Sentiment
Analysis [64.70116276295609]
SentiWSP is a Sentiment-aware pre-trained language model with combined Word-level and Sentence-level Pre-training tasks.
SentiWSP achieves new state-of-the-art performance on various sentence-level and aspect-level sentiment classification benchmarks.
arXiv Detail & Related papers (2022-10-18T12:25:29Z) - KESA: A Knowledge Enhanced Approach For Sentiment Analysis [13.937274761540925]
We study sentence-level sentiment analysis and propose two sentiment-aware auxiliary tasks named sentiment word cloze and conditional sentiment prediction.
The experimental results demonstrate that our approach consistently outperforms pre-trained models.
arXiv Detail & Related papers (2022-02-24T13:21:27Z) - Learning Implicit Sentiment in Aspect-based Sentiment Analysis with
Supervised Contrastive Pre-Training [18.711698114617526]
We propose Supervised Contrastive Pre-training on large-scale sentiment-annotated corpora.
By aligning the representations of implicit sentiment expressions with those that share the same sentiment label, the pre-training process better captures both implicit and explicit sentiment orientation towards aspects in reviews (a generic sketch of this kind of objective appears after this list).
arXiv Detail & Related papers (2021-11-03T13:03:17Z) - SentiPrompt: Sentiment Knowledge Enhanced Prompt-Tuning for Aspect-Based
Sentiment Analysis [22.758661494710047]
We propose SentiPrompt to tune the language model in the unified framework.
We inject sentiment knowledge regarding aspects, opinions, and polarities into the prompt and explicitly model term relations.
Our approach can outperform strong baselines on Triplet Extraction, Pair Extraction, and Aspect Term Extraction with Sentiment Classification.
arXiv Detail & Related papers (2021-09-17T01:56:06Z) - Enhanced Aspect-Based Sentiment Analysis Models with Progressive
Self-supervised Attention Learning [103.0064298630794]
In aspect-based sentiment analysis (ABSA), many neural models are equipped with an attention mechanism to quantify the contribution of each context word to sentiment prediction.
We propose a progressive self-supervised attention learning approach for attentional ABSA models.
We integrate the proposed approach into three state-of-the-art neural ABSA models.
arXiv Detail & Related papers (2021-03-05T02:50:05Z) - From Sentiment Annotations to Sentiment Prediction through Discourse
Augmentation [30.615883375573432]
We propose a novel framework to exploit task-related discourse for the task of sentiment analysis.
More specifically, we combine the large-scale, sentiment-dependent MEGA-DT treebank with a novel neural architecture for sentiment prediction.
Experiments show that our framework using sentiment-related discourse augmentations for sentiment prediction enhances the overall performance for long documents.
arXiv Detail & Related papers (2020-11-05T18:28:13Z) - Weakly-Supervised Aspect-Based Sentiment Analysis via Joint
Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn sentiment, aspect> joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.