Generative Aspect-Based Sentiment Analysis with Contrastive Learning and
Expressive Structure
- URL: http://arxiv.org/abs/2211.07743v1
- Date: Mon, 14 Nov 2022 20:47:02 GMT
- Title: Generative Aspect-Based Sentiment Analysis with Contrastive Learning and
Expressive Structure
- Authors: Joseph J. Peper, Lu Wang
- Abstract summary: We introduce GEN-SCL-NAT, which consists of two techniques for improved structured generation for ACOS quadruple extraction.
First, we propose GEN-SCL, a supervised contrastive learning objective that aids quadruple prediction by encouraging the model to produce input representations that are discriminable across key input attributes.
Second, we introduce GEN-NAT, a new structured generation format that better adapts autoregressive encoder-decoder models to extract quadruples in a generative fashion.
- Score: 6.125761583306958
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Generative models have demonstrated impressive results on Aspect-based
Sentiment Analysis (ABSA) tasks, particularly for the emerging task of
extracting Aspect-Category-Opinion-Sentiment (ACOS) quadruples. However, these
models struggle with implicit sentiment expressions, which are commonly
observed in opinionated content such as online reviews. In this work, we
introduce GEN-SCL-NAT, which consists of two techniques for improved structured
generation for ACOS quadruple extraction. First, we propose GEN-SCL, a
supervised contrastive learning objective that aids quadruple prediction by
encouraging the model to produce input representations that are discriminable
across key input attributes, such as sentiment polarity and the existence of
implicit opinions and aspects. Second, we introduce GEN-NAT, a new structured
generation format that better adapts autoregressive encoder-decoder models to
extract quadruples in a generative fashion. Experimental results show that
GEN-SCL-NAT achieves top performance across three ACOS datasets, averaging
1.48% F1 improvement, with a maximum 1.73% increase on the LAPTOP-L1 dataset.
Additionally, we see significant gains on implicit aspect and opinion splits
that have been shown as challenging for existing ACOS approaches.
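The GEN-SCL objective described above builds on supervised contrastive learning: inputs that share an attribute label (e.g. the same sentiment polarity, or both containing an implicit aspect) are pulled together in representation space, while inputs with different labels are pushed apart. A minimal NumPy sketch of such a supervised contrastive loss follows; the function name, temperature value, and toy labels are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over pooled encoder representations.

    features: (batch, dim) array, one representation per input sentence.
    labels:   (batch,) discrete attribute labels, e.g. sentiment polarity
              or an implicit-aspect/implicit-opinion indicator.
    """
    # Cosine-similarity logits between all pairs, scaled by temperature.
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = f @ f.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-pairs

    # Row-wise log-softmax (numerically stable).
    m = sim.max(axis=1, keepdims=True)
    log_prob = sim - (m + np.log(np.exp(sim - m).sum(axis=1, keepdims=True)))

    # Positives: other in-batch examples carrying the same attribute label.
    batch = features.shape[0]
    eye = np.eye(batch, dtype=bool)
    pos_mask = (labels[:, None] == labels[None, :]) & ~eye
    pos_counts = pos_mask.sum(axis=1)

    # Average negative log-probability of each anchor's positives;
    # anchors with no in-batch positive are skipped.
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1)
    per_anchor = per_anchor / np.maximum(pos_counts, 1)
    return per_anchor[pos_counts > 0].mean()
```

Representations that are well separated by attribute yield a low loss, while representations mixed across labels drive it up; that gradient signal, added alongside the generation objective, is what encourages the discriminable encodings described in the abstract.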
Related papers
- It Takes Two: Accurate Gait Recognition in the Wild via Cross-granularity Alignment [72.75844404617959]
This paper proposes a novel cross-granularity alignment gait recognition method, named XGait.
To achieve this goal, XGait first uses two backbone-encoder branches to map the silhouette sequences and the parsing sequences into two latent spaces.
Comprehensive experiments on two large-scale gait datasets show that XGait achieves a Rank-1 accuracy of 80.5% on Gait3D and 88.3% on CCPG.
arXiv Detail & Related papers (2024-11-16T08:54:27Z)
- Bidirectional Awareness Induction in Autoregressive Seq2Seq Models [47.82947878753809]
Bidirectional Awareness Induction (BAI) is a training method that leverages a subset of elements in the network, the Pivots, to perform bidirectional learning without breaking the autoregressive constraints.
In particular, we observed an increase of up to 2.4 CIDEr in Image-Captioning, 4.96 BLEU in Neural Machine Translation, and 1.16 ROUGE in Text Summarization compared to the respective baselines.
arXiv Detail & Related papers (2024-08-25T23:46:35Z)
- Self-Consistent Reasoning-based Aspect-Sentiment Quad Prediction with Extract-Then-Assign Strategy [17.477542644785483]
We propose Self-Consistent Reasoning-based Aspect-sentiment quadruple Prediction (SCRAP).
SCRAP optimizes its model to generate reasonings and the corresponding sentiment quadruplets in sequence.
In the end, SCRAP significantly improves the model's ability to handle complex reasoning tasks and correctly predict quadruplets through consistency voting.
arXiv Detail & Related papers (2024-03-01T08:34:02Z)
- S^2Former-OR: Single-Stage Bi-Modal Transformer for Scene Graph Generation in OR [50.435592120607815]
Scene graph generation (SGG) of surgical procedures is crucial for enhancing holistic cognitive intelligence in the operating room (OR).
Previous works have primarily relied on multi-stage learning, where the generated semantic scene graphs depend on intermediate processes such as pose estimation and object detection.
In this study, we introduce a novel single-stage bi-modal transformer framework for SGG in the OR, termed S2Former-OR.
arXiv Detail & Related papers (2024-02-22T11:40:49Z)
- iACOS: Advancing Implicit Sentiment Extraction with Informative and Adaptive Negative Examples [2.0249250133493195]
We propose iACOS, a new method for extracting Implicit Aspects with Categories and Opinions with Sentiments.
iACOS appends two implicit tokens at the end of a text to capture the context-aware representation of all tokens including implicit aspects and opinions.
We show that iACOS significantly outperforms other quadruple extraction baselines according to the F1 score on two public benchmark datasets.
arXiv Detail & Related papers (2023-11-07T11:19:06Z)
- CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction [13.077459544929598]
We present a novel pre-training strategy using CONTRastive learning to enhance the ASTE performance.
We also demonstrate the advantage of our proposed technique on other ABSA tasks such as ACOS, TASD, and AESC.
arXiv Detail & Related papers (2023-10-24T07:40:09Z)
- Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality [84.94877848357896]
Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
arXiv Detail & Related papers (2022-11-28T17:36:41Z)
- INTERACTION: A Generative XAI Framework for Natural Language Inference Explanations [58.062003028768636]
Current XAI approaches focus only on delivering a single explanation.
This paper proposes a generative XAI framework, INTERACTION (explaIn aNd predicT thEn queRy with contextuAl CondiTional varIational autO-eNcoder).
Our novel framework presents explanation in two steps: (step one) Explanation and Label Prediction; and (step two) Diverse Evidence Generation.
arXiv Detail & Related papers (2022-09-02T13:52:39Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.