Aspect-specific Context Modeling for Aspect-based Sentiment Analysis
- URL: http://arxiv.org/abs/2207.08099v1
- Date: Sun, 17 Jul 2022 07:22:19 GMT
- Title: Aspect-specific Context Modeling for Aspect-based Sentiment Analysis
- Authors: Fang Ma, Chen Zhang, Bo Zhang, Dawei Song
- Abstract summary: Aspect-based sentiment analysis (ABSA) aims at predicting sentiment polarity (SC) or extracting opinion span (OE) expressed towards a given aspect.
Pretrained language models (PLMs) have been used as context modeling layers to simplify the feature induction structures and achieve state-of-the-art performance.
We propose three aspect-specific input transformations, namely aspect companion, aspect prompt, and aspect marker. Informed by these transformations, aspect-specific PLMs can be obtained non-intrusively, encouraging the PLM to pay more attention to the aspect-specific context in a sentence.
- Score: 14.61906865051392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspect-based sentiment analysis (ABSA) aims at predicting sentiment polarity
(SC) or extracting opinion span (OE) expressed towards a given aspect. Previous
work in ABSA mostly relies on rather complicated aspect-specific feature
induction. Recently, pretrained language models (PLMs), e.g., BERT, have been
used as context modeling layers to simplify the feature induction structures
and achieve state-of-the-art performance. However, such PLM-based context
modeling may not be sufficiently aspect-specific. Therefore, a key question is
left under-explored: how can aspect-specific context be better modeled through
PLMs? To answer this question, we attempt to enhance aspect-specific context
modeling with PLMs in a non-intrusive manner. We propose three aspect-specific
input transformations, namely aspect companion, aspect prompt, and aspect
marker. Informed by these transformations, aspect-specific PLMs can be obtained
non-intrusively, encouraging the PLM to pay more attention to the aspect-specific
context in a sentence. Additionally, we craft an adversarial benchmark for ABSA
(advABSA) to see how aspect-specific modeling can impact model robustness.
Extensive experimental results on standard and adversarial benchmarks for SC
and OE demonstrate the effectiveness and robustness of the proposed method,
yielding new state-of-the-art performance on OE and competitive performance on
SC.
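To make the three input transformations concrete, here is a minimal Python sketch. The literal templates, separator, and marker tokens ([SEP], <asp>, and the prompt wording) are illustrative assumptions rather than the exact formats defined in the paper.

```python
# Hedged sketch of the three aspect-specific input transformations named in the
# abstract. Templates and marker tokens are assumptions chosen for illustration.

def aspect_companion(sentence: str, aspect: str) -> str:
    # Pair the sentence with the aspect as a second segment (sentence-pair style).
    return f"{sentence} [SEP] {aspect}"

def aspect_prompt(sentence: str, aspect: str) -> str:
    # Append a short natural-language prompt that names the aspect.
    return f"{sentence} [SEP] the aspect is {aspect}"

def aspect_marker(sentence: str, aspect: str) -> str:
    # Surround the in-sentence aspect mention with marker tokens so the PLM can
    # more easily attend to the aspect-specific context.
    return sentence.replace(aspect, f"<asp> {aspect} </asp>")

if __name__ == "__main__":
    sent = "The battery life is great but the screen is dim."
    asp = "battery life"
    for transform in (aspect_companion, aspect_prompt, aspect_marker):
        print(transform(sent, asp))
```

In an actual PLM pipeline, the transformed text would be tokenized and fed to a model such as BERT, and any new marker tokens would typically be added to the tokenizer's vocabulary.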
Related papers
- Deep Content Understanding Toward Entity and Aspect Target Sentiment Analysis on Foundation Models [0.8602553195689513]
Entity-Aspect Sentiment Triplet Extraction (EASTE) is a novel Aspect-Based Sentiment Analysis task.
Our research aims to achieve high performance on the EASTE task and investigates the impact of model size, type, and adaptation techniques on task performance.
Ultimately, we provide detailed insights and achieve state-of-the-art results in complex sentiment analysis.
arXiv Detail & Related papers (2024-07-04T16:48:14Z) - A Hybrid Approach To Aspect Based Sentiment Analysis Using Transfer Learning [3.30307212568497]
We propose a hybrid approach for Aspect Based Sentiment Analysis using transfer learning.
The approach focuses on generating weakly-supervised annotations by exploiting the strengths of both large language models (LLM) and traditional syntactic dependencies.
arXiv Detail & Related papers (2024-03-25T23:02:33Z) - Explore In-Context Segmentation via Latent Diffusion Models [132.26274147026854]
A latent diffusion model (LDM) is an effective minimalist approach for in-context segmentation.
We build a new and fair in-context segmentation benchmark that includes both image and video datasets.
arXiv Detail & Related papers (2024-03-14T17:52:31Z) - A Novel Energy based Model Mechanism for Multi-modal Aspect-Based
Sentiment Analysis [85.77557381023617]
We propose a novel framework called DQPSA for multi-modal sentiment analysis.
PDQ module uses the prompt as both a visual query and a language query to extract prompt-aware visual information.
EPE module models the boundary pairing of the analysis target from the perspective of an Energy-based Model.
arXiv Detail & Related papers (2023-12-13T12:00:46Z) - Syntax-Informed Interactive Model for Comprehensive Aspect-Based
Sentiment Analysis [0.0]
We introduce an innovative model: Syntactic Dependency Enhanced Multi-Task Interaction Architecture (SDEMTIA) for comprehensive ABSA.
Our approach innovatively exploits syntactic knowledge (dependency relations and types) using a specialized Syntactic Dependency Embedded Interactive Network (SDEIN).
We also incorporate a novel and efficient message-passing mechanism within a multi-task learning framework to bolster learning efficacy.
arXiv Detail & Related papers (2023-11-28T16:03:22Z) - Adaptive Contextual Perception: How to Generalize to New Backgrounds and
Ambiguous Objects [75.15563723169234]
We investigate how vision models adaptively use context for out-of-distribution generalization.
We show that models that excel in one setting tend to struggle in the other.
To replicate the generalization abilities of biological vision, computer vision models must have factorized object vs. background representations.
arXiv Detail & Related papers (2023-06-09T15:29:54Z) - Out of Context: A New Clue for Context Modeling of Aspect-based
Sentiment Analysis [54.735400754548635]
ABSA aims to predict the sentiment expressed in a review with respect to a given aspect.
The given aspect should be considered as a new clue out of context in the context modeling process.
We design several aspect-aware context encoders based on different backbones.
arXiv Detail & Related papers (2021-06-21T02:26:03Z) - Hierarchical Prosody Modeling for Non-Autoregressive Speech Synthesis [76.39883780990489]
We analyze the behavior of non-autoregressive TTS models under different prosody-modeling settings.
We propose a hierarchical architecture, in which the prediction of phoneme-level prosody features is conditioned on the word-level prosody features.
arXiv Detail & Related papers (2020-11-12T16:16:41Z) - A Dependency Syntactic Knowledge Augmented Interactive Architecture for
End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
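As a rough illustration of the dependency-syntax-augmented graph modeling that the SDEIN and DreGcn entries above describe, the sketch below shows a single graph-convolution step over a dependency parse. The untyped edges, degree normalization, and ReLU are simplifying assumptions for illustration, not either paper's actual architecture.

```python
import numpy as np

def dependency_gcn_layer(hidden: np.ndarray, adj: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """One toy graph-convolution step over a dependency parse.

    hidden: (num_tokens, dim) token representations from a context encoder
    adj:    (num_tokens, num_tokens) 0/1 adjacency built from dependency arcs
    weight: (dim, dim) learnable projection
    """
    adj = adj + np.eye(adj.shape[0])            # add self-loops
    adj = adj / adj.sum(axis=1, keepdims=True)  # row-normalize by node degree
    return np.maximum(adj @ hidden @ weight, 0.0)  # aggregate neighbors, project, ReLU

# Example: 4 tokens, 8-dim states, arcs (0-1), (1-2), (2-3) treated as undirected.
rng = np.random.default_rng(0)
h = rng.standard_normal((4, 8))
a = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    a[i, j] = a[j, i] = 1.0
w = rng.standard_normal((8, 8))
print(dependency_gcn_layer(h, a, w).shape)  # (4, 8)
```

Real models of this kind stack several such layers and add per-relation-type parameters; that level of detail is omitted here.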
This list is automatically generated from the titles and abstracts of the papers on this site.