Incorporating Dynamic Semantics into Pre-Trained Language Model for
Aspect-based Sentiment Analysis
- URL: http://arxiv.org/abs/2203.16369v1
- Date: Wed, 30 Mar 2022 14:48:46 GMT
- Title: Incorporating Dynamic Semantics into Pre-Trained Language Model for
Aspect-based Sentiment Analysis
- Authors: Kai Zhang, Kun Zhang, Mengdi Zhang, Hongke Zhao, Qi Liu, Wei Wu,
Enhong Chen
- Abstract summary: We propose Dynamic Re-weighting BERT (DR-BERT) to learn dynamic aspect-oriented semantics for ABSA.
Specifically, we first take the Stack-BERT layers as a primary encoder to grasp the overall semantics of the sentence.
We then fine-tune it by incorporating a lightweight Dynamic Re-weighting Adapter (DRA).
- Score: 67.41078214475341
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspect-based sentiment analysis (ABSA) predicts sentiment polarity towards a
specific aspect in the given sentence. While pre-trained language models such
as BERT have achieved great success, incorporating dynamic semantic changes
into ABSA remains challenging. To this end, in this paper, we propose to
address this problem with Dynamic Re-weighting BERT (DR-BERT), a novel method
designed to learn dynamic aspect-oriented semantics for ABSA. Specifically, we
first take the Stack-BERT layers as a primary encoder to grasp the overall
semantics of the sentence and then fine-tune it by incorporating a lightweight
Dynamic Re-weighting Adapter (DRA). Note that the DRA can pay close attention
to a small region of the sentence at each step and re-weight the vitally
important words for better aspect-aware sentiment understanding. Finally,
experimental results on three benchmark datasets demonstrate the effectiveness
and the rationality of our proposed model and provide interpretable
insights for future semantic modeling.
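The abstract gives enough detail to sketch the adapter idea in code. The PyTorch snippet below is a minimal illustration under stated assumptions, not the authors' implementation: the class name, the bottleneck size, and the sigmoid gating are all invented; it only shows one plausible way a lightweight adapter could re-weight encoder states toward aspect-relevant words.

```python
import torch
import torch.nn as nn

class DynamicReweightingAdapter(nn.Module):
    """Hypothetical sketch of a lightweight re-weighting adapter.

    Scores each context token against the aspect representation and
    re-weights the encoder's hidden states; the paper's actual DRA
    may differ in structure and detail.
    """

    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)  # adapter down-projection
        self.up = nn.Linear(bottleneck, hidden_size)    # adapter up-projection
        self.score = nn.Linear(hidden_size, 1)          # per-token importance score

    def forward(self, hidden: torch.Tensor, aspect: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden) token states from the BERT encoder
        # aspect: (batch, hidden) pooled representation of the aspect tokens
        fused = hidden + aspect.unsqueeze(1)            # condition tokens on the aspect
        weights = torch.sigmoid(self.score(fused))      # (batch, seq_len, 1) in [0, 1]
        reweighted = weights * hidden                   # emphasize aspect-relevant words
        return hidden + self.up(torch.relu(self.down(reweighted)))  # residual adapter


# Usage: pool the re-weighted states and classify sentiment polarity.
adapter = DynamicReweightingAdapter(hidden_size=768)
hidden = torch.randn(2, 16, 768)                        # fake encoder output
aspect = torch.randn(2, 768)                            # fake aspect embedding
out = adapter(hidden, aspect)                           # (2, 16, 768)
```

The design point the abstract emphasizes is that the adapter stays small (here, a bottleneck projection plus a scoring head), so the Stack-BERT encoder can be fine-tuned cheaply while the re-weighting concentrates sentiment cues around the aspect.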
Related papers
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions, such as clicks and reviews, to learn user and item representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
- On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training [109.9218185711916]
Aspect-based sentiment analysis (ABSA) aims at automatically inferring the specific sentiment polarities toward certain aspects of products or services behind social media texts or reviews.
We propose to enhance ABSA robustness by systematically rethinking the bottlenecks from all possible angles, including model, data, and training.
arXiv Detail & Related papers (2023-04-19T11:07:43Z)
- Aspect-Based Sentiment Analysis using Local Context Focus Mechanism with DeBERTa [23.00810941211685]
Aspect-Based Sentiment Analysis (ABSA) is a fine-grained task in the field of sentiment analysis.
The recent DeBERTa model (Decoding-enhanced BERT with disentangled attention) is applied to the Aspect-Based Sentiment Analysis problem; a sketch of the local-context-focus idea follows this entry.
arXiv Detail & Related papers (2022-07-06T03:50:31Z)
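The local-context-focus mechanism named in this entry is commonly described as masking or down-weighting tokens that are far from the aspect span. The snippet below is a hypothetical sketch of such a mask, not the paper's code: the function name, the `srd_threshold` parameter, and the example values are all assumptions.

```python
import torch

def local_context_mask(seq_len: int, aspect_start: int, aspect_end: int,
                       srd_threshold: int = 3) -> torch.Tensor:
    """Keep tokens whose distance to the aspect span is within a
    semantic-relative-distance threshold (hypothetical sketch)."""
    positions = torch.arange(seq_len)
    # Distance is 0 inside the aspect span, else distance to the nearest edge.
    dist = torch.clamp(
        torch.maximum(aspect_start - positions, positions - aspect_end),
        min=0,
    )
    return (dist <= srd_threshold).float()              # (seq_len,) in {0, 1}


# Tokens far from the aspect are zeroed out of the local-context branch.
mask = local_context_mask(seq_len=10, aspect_start=4, aspect_end=5)
hidden = torch.randn(10, 768)                           # fake encoder states
local_hidden = mask.unsqueeze(-1) * hidden              # suppress distant tokens
```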
- BERT-ASC: Auxiliary-Sentence Construction for Implicit Aspect Learning in Sentiment Analysis [4.522719296659495]
This paper proposes a unified framework to address aspect categorization and aspect-based sentiment subtasks.
We introduce a mechanism to construct an auxiliary sentence for the implicit aspect using the corpus's semantic information.
We then encourage BERT to learn aspect-specific representations in response to this auxiliary sentence, not the aspect itself (see the sketch after this entry).
arXiv Detail & Related papers (2022-03-22T13:12:27Z)
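The auxiliary-sentence idea maps naturally onto BERT's sentence-pair input. Below is a minimal sketch using the Hugging Face transformers API; the review and auxiliary strings are invented, and the construction of the auxiliary sentence from corpus semantics is assumed to have happened upstream.

```python
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

review = "The battery drains within an hour."   # invented example review
auxiliary = "battery life"                      # invented auxiliary sentence

# Encode the review and the auxiliary sentence as a BERT sentence pair,
# so the model builds an aspect-specific representation of the review.
inputs = tokenizer(review, auxiliary, return_tensors="pt")
outputs = model(**inputs)
cls_repr = outputs.last_hidden_state[:, 0]      # [CLS] vector for classification
```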
- SDCUP: Schema Dependency-Enhanced Curriculum Pre-Training for Table Semantic Parsing [19.779493883522072]
This paper designs two novel pre-training objectives to impose the desired inductive bias into the learned representations for table pre-training.
We propose a schema-aware curriculum learning approach to mitigate the impact of noise and learn effectively from the pre-training data in an easy-to-hard manner.
arXiv Detail & Related papers (2021-11-18T02:51:04Z)
- Enhanced Aspect-Based Sentiment Analysis Models with Progressive Self-supervised Attention Learning [103.0064298630794]
In aspect-based sentiment analysis (ABSA), many neural models are equipped with an attention mechanism to quantify the contribution of each context word to sentiment prediction.
We propose a progressive self-supervised attention learning approach for attentional ABSA models.
We integrate the proposed approach into three state-of-the-art neural ABSA models.
arXiv Detail & Related papers (2021-03-05T02:50:05Z)
- Introducing Syntactic Structures into Target Opinion Word Extraction with Deep Learning [89.64620296557177]
We propose to incorporate the syntactic structures of the sentences into the deep learning models for targeted opinion word extraction.
We also introduce a novel regularization technique to improve the performance of the deep learning models.
The proposed model is extensively analyzed and achieves state-of-the-art performance on four benchmark datasets.
arXiv Detail & Related papers (2020-10-26T07:13:17Z)
- A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model fully exploits syntactic knowledge (dependency relations and types) through a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn); a simplified sketch follows this entry.
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
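The dependency-GCN idea in the last entry can be illustrated with a single graph-convolution step over a dependency-parse adjacency matrix. The sketch below is simplified and hypothetical: the actual DreGcn additionally embeds dependency relation types, which is omitted here.

```python
import torch
import torch.nn as nn

class DependencyGCNLayer(nn.Module):
    """One graph-convolution step over a dependency-parse adjacency matrix.
    Simplified sketch; DreGcn also embeds dependency relation types."""

    def __init__(self, hidden_size: int):
        super().__init__()
        self.linear = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # hidden: (batch, seq_len, hidden); adj: (batch, seq_len, seq_len),
        # where adj[b, i, j] = 1 if tokens i and j share a dependency arc.
        degree = adj.sum(dim=-1, keepdim=True).clamp(min=1)  # avoid divide-by-zero
        neighbors = torch.bmm(adj, hidden) / degree          # average neighbor states
        return torch.relu(self.linear(neighbors))


layer = DependencyGCNLayer(hidden_size=768)
hidden = torch.randn(1, 8, 768)                              # fake token states
adj = torch.eye(8).unsqueeze(0)                              # fake self-loop-only parse
out = layer(hidden, adj)                                     # (1, 8, 768)
```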
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and accepts no responsibility for any consequences of its use.