Out of Context: A New Clue for Context Modeling of Aspect-based
Sentiment Analysis
- URL: http://arxiv.org/abs/2106.10816v1
- Date: Mon, 21 Jun 2021 02:26:03 GMT
- Title: Out of Context: A New Clue for Context Modeling of Aspect-based
Sentiment Analysis
- Authors: Bowen Xing and Ivor W. Tsang
- Abstract summary: ABSA aims to predict the sentiment expressed in a review with respect to a given aspect.
The given aspect should be considered as a new clue out of context in the context modeling process.
We design several aspect-aware context encoders based on different backbones.
- Score: 54.735400754548635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspect-based sentiment analysis (ABSA) aims to predict the sentiment
expressed in a review with respect to a given aspect. The core of ABSA is to
model the interaction between the context and given aspect to extract the
aspect-related information. In prior work, attention mechanisms and dependency
graph networks are commonly adopted to capture the relations between the
context and given aspect. And the weighted sum of context hidden states is used
as the final representation fed to the classifier. However, the information
related to the given aspect may be already discarded and adverse information
may be retained in the context modeling processes of existing models. This
problem cannot be solved by subsequent modules and there are two reasons:
first, their operations are conducted on the encoder-generated context hidden
states, whose value cannot change after the encoder; second, existing encoders
only consider the context while not the given aspect. To address this problem,
we argue the given aspect should be considered as a new clue out of context in
the context modeling process. As for solutions, we design several aspect-aware
context encoders based on different backbones: an aspect-aware LSTM and three
aspect-aware BERTs. They are dedicated to generate aspect-aware hidden states
which are tailored for ABSA task. In these aspect-aware context encoders, the
semantics of the given aspect is used to regulate the information flow.
Consequently, the aspect-related information can be retained and
aspect-irrelevant information can be excluded in the generated hidden states.
We conduct extensive experiments on several benchmark datasets with empirical
analysis, demonstrating the efficacies and advantages of our proposed
aspect-aware context encoders.
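The core idea — letting the aspect's semantics regulate the information flow inside the encoder — can be sketched as an LSTM cell whose gates also receive a fixed aspect vector. This is a minimal illustrative sketch, not the paper's exact gate equations; the weight names and the simple additive aspect terms are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class AspectAwareLSTMCell:
    """One step of an LSTM whose gates also see an aspect vector.

    The extra A_* terms let the (fixed) aspect embedding modulate how much
    of each context token is written into or kept in the hidden state, so
    aspect-irrelevant information can be suppressed during encoding.
    """

    def __init__(self, d_in, d_hid, d_asp, seed=0):
        rng = np.random.default_rng(seed)
        w = lambda r, c: rng.standard_normal((r, c)) * 0.1
        # input, forget, output gates, each with an aspect term; plus candidate
        self.Wi, self.Ui, self.Ai = w(d_hid, d_in), w(d_hid, d_hid), w(d_hid, d_asp)
        self.Wf, self.Uf, self.Af = w(d_hid, d_in), w(d_hid, d_hid), w(d_hid, d_asp)
        self.Wo, self.Uo, self.Ao = w(d_hid, d_in), w(d_hid, d_hid), w(d_hid, d_asp)
        self.Wc, self.Uc = w(d_hid, d_in), w(d_hid, d_hid)

    def step(self, x, h, c, a):
        i = sigmoid(self.Wi @ x + self.Ui @ h + self.Ai @ a)  # aspect-gated input
        f = sigmoid(self.Wf @ x + self.Uf @ h + self.Af @ a)  # aspect-gated forget
        o = sigmoid(self.Wo @ x + self.Uo @ h + self.Ao @ a)  # aspect-gated output
        c_new = f * c + i * np.tanh(self.Wc @ x + self.Uc @ h)
        h_new = o * np.tanh(c_new)
        return h_new, c_new

# run a toy context of 4 tokens under a single aspect vector
cell = AspectAwareLSTMCell(d_in=8, d_hid=6, d_asp=5)
aspect = np.full(5, 0.3)
h, c = np.zeros(6), np.zeros(6)
for x in np.random.default_rng(1).standard_normal((4, 8)):
    h, c = cell.step(x, h, c, aspect)
print(h.shape)  # (6,)
```

Because the aspect vector enters every gate at every step, the hidden states themselves become aspect-aware, rather than relying on a downstream attention module to repair aspect-agnostic states.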
Related papers
- Amplifying Aspect-Sentence Awareness: A Novel Approach for Aspect-Based Sentiment Analysis [2.9045498954705886]
Aspect-Based Sentiment Analysis (ABSA) is increasingly crucial in Natural Language Processing (NLP)
ABSA goes beyond traditional sentiment analysis by extracting sentiments related to specific aspects mentioned in the text.
We propose Amplifying Aspect-Sentence Awareness (A3SN), a novel technique designed to enhance ABSA through amplifying aspect-sentence awareness attention.
arXiv Detail & Related papers (2024-05-14T10:29:59Z)
- A Hybrid Approach To Aspect Based Sentiment Analysis Using Transfer Learning [3.30307212568497]
We propose a hybrid approach for Aspect Based Sentiment Analysis using transfer learning.
The approach focuses on generating weakly-supervised annotations by exploiting the strengths of both large language models (LLM) and traditional syntactic dependencies.
arXiv Detail & Related papers (2024-03-25T23:02:33Z)
- Let's Rectify Step by Step: Improving Aspect-based Sentiment Analysis with Diffusion Models [36.482634643246264]
We propose a novel diffusion model tailored for ABSA, which extracts the aspects progressively step by step.
DiffusionABSA gradually adds noise to the aspect terms in the training process, subsequently learning a denoising process that progressively restores these terms in a reverse manner.
To estimate the boundaries, we design a denoising neural network enhanced by a syntax-aware temporal attention mechanism.
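The forward-noising and exact inversion underlying such a diffusion formulation can be sketched on an aspect span's normalized (start, end) boundaries. The linear schedule and boundary normalization here are illustrative assumptions, not DiffusionABSA's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_noise(span, t, betas):
    """Add Gaussian noise to normalized span boundaries at diffusion step t."""
    alpha_bar = np.prod(1.0 - betas[: t + 1])  # cumulative signal fraction
    eps = rng.standard_normal(2)
    noisy = np.sqrt(alpha_bar) * span + np.sqrt(1.0 - alpha_bar) * eps
    return noisy, eps

betas = np.linspace(1e-4, 0.02, 50)   # toy linear noise schedule
span = np.array([3, 5]) / 10.0        # aspect (start, end) over 10 tokens
noisy, eps = forward_noise(span, t=49, betas=betas)

# a trained denoiser would *predict* eps from the noisy boundaries;
# here we invert the step with the true eps just to check the algebra
alpha_bar = np.prod(1.0 - betas)
recovered = (noisy - np.sqrt(1.0 - alpha_bar) * eps) / np.sqrt(alpha_bar)
print(np.allclose(recovered, span))  # True
```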
arXiv Detail & Related papers (2024-02-23T12:35:43Z)
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions like clicks and reviews to learn their representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
- Coherent Entity Disambiguation via Modeling Topic and Categorical Dependency [87.16283281290053]
Previous entity disambiguation (ED) methods adopt a discriminative paradigm, where prediction is made based on matching scores between mention context and candidate entities.
We propose CoherentED, an ED system equipped with novel designs aimed at enhancing the coherence of entity predictions.
We achieve new state-of-the-art results on popular ED benchmarks, with an average improvement of 1.3 F1 points.
arXiv Detail & Related papers (2023-11-06T16:40:13Z)
- Aspect-specific Context Modeling for Aspect-based Sentiment Analysis [14.61906865051392]
Aspect-based sentiment analysis (ABSA) aims at predicting sentiment polarity (SC) or extracting opinion span (OE) expressed towards a given aspect.
Pretrained language models (PLMs) have been used as context modeling layers to simplify the feature induction structures and achieve state-of-the-art performance.
We propose three aspect-specific input transformations, namely aspect companion, aspect prompt, and aspect marker. Informed by these transformations, non-intrusive aspect-specific PLMs can be achieved to promote the PLM to pay more attention to the aspect-specific context in a sentence.
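The three input transformations named above can be sketched as plain string templates. The exact templates and marker tokens below are assumptions for illustration; the paper defines its own.

```python
def aspect_companion(sentence: str, aspect: str) -> str:
    # append the aspect as a second input segment
    return f"{sentence} [SEP] {aspect}"

def aspect_prompt(sentence: str, aspect: str) -> str:
    # append a natural-language prompt naming the aspect
    return f"{sentence} [SEP] what do you think of the {aspect}?"

def aspect_marker(sentence: str, aspect: str) -> str:
    # wrap the in-sentence aspect mention with marker tokens
    return sentence.replace(aspect, f"<asp> {aspect} </asp>")

s = "the battery life is great but the screen is dim"
print(aspect_marker(s, "battery life"))
# the <asp> battery life </asp> is great but the screen is dim
```

All three leave the PLM's architecture untouched (hence "non-intrusive"): the aspect signal is injected purely through the input text.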
arXiv Detail & Related papers (2022-07-17T07:22:19Z)
- Context-LGM: Leveraging Object-Context Relation for Context-Aware Object Recognition [48.5398871460388]
We propose a novel Contextual Latent Generative Model (Context-LGM), which considers the object-context relation and models it in a hierarchical manner.
To infer contextual features, we reformulate the objective function of Variational Auto-Encoder (VAE), where contextual features are learned as a posterior conditioned distribution on the object.
The effectiveness of our method is verified by state-of-the-art performance on two context-aware object recognition tasks.
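For reference, a conditional-VAE reformulation of this kind builds on the standard evidence lower bound, with the latent contextual features conditioned on the object; the notation below is generic, not Context-LGM's own:

```latex
\log p_\theta(c \mid o) \;\ge\;
\mathbb{E}_{q_\phi(z \mid c,\, o)}\!\left[\log p_\theta(c \mid z, o)\right]
\;-\; \mathrm{KL}\!\left(q_\phi(z \mid c, o)\,\big\|\,p_\theta(z \mid o)\right)
```

Here $c$ is the context, $o$ the object, and $z$ the latent contextual feature inferred from the posterior $q_\phi(z \mid c, o)$.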
arXiv Detail & Related papers (2021-10-08T11:31:58Z)
- Understand me, if you refer to Aspect Knowledge: Knowledge-aware Gated Recurrent Memory Network [54.735400754548635]
Aspect-level sentiment classification (ASC) aims to predict the fine-grained sentiment polarity towards a given aspect mentioned in a review.
Despite recent advances in ASC, enabling machines to precisely infer aspect sentiments remains challenging.
This paper tackles two challenges in ASC: (1) for lack of aspect knowledge, aspect representations are inadequate to capture an aspect's exact meaning and property information; (2) prior works capture either local syntactic information or global relational information, and missing either leads to an insufficient syntactic representation.
arXiv Detail & Related papers (2021-08-05T03:39:30Z)
- How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.