Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based
Sentiment Analysis
- URL: http://arxiv.org/abs/2009.07964v4
- Date: Wed, 28 Oct 2020 08:19:36 GMT
- Title: Tasty Burgers, Soggy Fries: Probing Aspect Robustness in Aspect-Based
Sentiment Analysis
- Authors: Xiaoyu Xing, Zhijing Jin, Di Jin, Bingning Wang, Qi Zhang, and
Xuanjing Huang
- Abstract summary: Aspect-based sentiment analysis (ABSA) aims to predict the sentiment towards a specific aspect in the text.
Existing ABSA test sets cannot be used to probe whether a model can distinguish the sentiment of the target aspect from the non-target aspects.
We generate new examples to disentangle the confounding sentiments of the non-target aspects from the target aspect's sentiment.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Aspect-based sentiment analysis (ABSA) aims to predict the sentiment towards
a specific aspect in the text. However, existing ABSA test sets cannot be used
to probe whether a model can distinguish the sentiment of the target aspect
from the non-target aspects. To solve this problem, we develop a simple but
effective approach to enrich ABSA test sets. Specifically, we generate new
examples to disentangle the confounding sentiments of the non-target aspects
from the target aspect's sentiment. Based on the SemEval 2014 dataset, we
construct the Aspect Robustness Test Set (ARTS) as a comprehensive probe of the
aspect robustness of ABSA models. Over 92% of the ARTS data show high fluency and
the desired sentiment on all aspects in human evaluation. Using ARTS, we analyze
the robustness of nine ABSA models, and observe, surprisingly, that their
accuracy drops by up to 69.73%. We explore several ways to improve aspect
robustness, and find that adversarial training can improve models' performance
on ARTS by up to 32.85%. Our code and new test set are available at
https://github.com/zhijing-jin/ARTS_TestSet
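The core idea of the probe can be illustrated with a toy sketch: keep the target aspect's sentiment fixed while reversing the sentiment expressed toward non-target aspects, so a model that merely aggregates overall sentiment will be fooled. This is a minimal, hypothetical illustration; the antonym map, function names, and example sentence are assumptions for demonstration, not the actual ARTS generation strategies, which are described in the paper.

```python
# Hypothetical antonym map used only for this illustration.
ANTONYMS = {"tasty": "bland", "soggy": "crispy", "great": "terrible"}

def flip_non_target_opinions(tokens, opinion_spans, target_aspect):
    """Reverse opinion words attached to every aspect except the target.

    tokens        -- list of word tokens for the sentence
    opinion_spans -- dict mapping aspect -> list of token indices of
                     its opinion words
    target_aspect -- the aspect whose sentiment must stay unchanged
    """
    out = list(tokens)
    for aspect, indices in opinion_spans.items():
        if aspect == target_aspect:
            continue  # leave the target aspect's opinion intact
        for i in indices:
            out[i] = ANTONYMS.get(out[i], out[i])
    return out

tokens = ["tasty", "burgers", "and", "soggy", "fries"]
spans = {"burgers": [0], "fries": [3]}

# Probing the target aspect "burgers": the fries' sentiment is flipped,
# so a robust model should still predict positive for "burgers".
print(" ".join(flip_non_target_opinions(tokens, spans, "burgers")))
# -> tasty burgers and crispy fries
```

A model whose prediction for "burgers" changes when only the fries' sentiment is reversed is relying on confounding non-target sentiment, which is exactly the failure mode the test set measures.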
Related papers
- Llama Scope: Extracting Millions of Features from Llama-3.1-8B with Sparse Autoencoders
Sparse Autoencoders (SAEs) have emerged as a powerful unsupervised method for extracting sparse representations from language models.
We introduce a suite of 256 SAEs, trained on each layer and sublayer of the Llama-3.1-8B-Base model, with 32K and 128K features.
We assess the generalizability of SAEs trained on base models to longer contexts and fine-tuned models.
arXiv Detail & Related papers (2024-10-27T17:33:49Z)
- Stanceformer: Target-Aware Transformer for Stance Detection
Stance Detection involves discerning the stance expressed in a text towards a specific subject or target.
Prior works have relied on existing transformer models that lack the capability to prioritize targets effectively.
We introduce Stanceformer, a target-aware transformer model that incorporates enhanced attention towards the targets during both training and inference.
arXiv Detail & Related papers (2024-10-09T17:24:28Z)
- Instruct-DeBERTa: A Hybrid Approach for Aspect-based Sentiment Analysis on Textual Reviews
Aspect-based Sentiment Analysis (ABSA) is a critical task in Natural Language Processing (NLP).
Traditional sentiment analysis methods, while useful for determining overall sentiment, often miss the implicit opinions about particular product or service features.
This paper presents a comprehensive review of the evolution of ABSA methodologies, from lexicon-based approaches to machine learning.
arXiv Detail & Related papers (2024-08-23T16:31:07Z)
- Deep Content Understanding Toward Entity and Aspect Target Sentiment Analysis on Foundation Models
Entity-Aspect Sentiment Triplet Extraction (EASTE) is a novel Aspect-Based Sentiment Analysis task.
Our research aims to achieve high performance on the EASTE task and investigates the impact of model size, type, and adaptation techniques on task performance.
Ultimately, we provide detailed insights and achieve state-of-the-art results in complex sentiment analysis.
arXiv Detail & Related papers (2024-07-04T16:48:14Z)
- Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z)
- A Weak Supervision Approach for Few-Shot Aspect Based Sentiment
Weak supervision on abundant unlabeled data can be leveraged to improve few-shot performance in sentiment analysis tasks.
We propose a pipeline approach to construct a noisy ABSA dataset, and we use it to adapt a pre-trained sequence-to-sequence model to the ABSA tasks.
Our proposed method preserves the full fine-tuning performance while showing significant improvements (15.84% absolute F1) in the few-shot learning scenario.
arXiv Detail & Related papers (2023-05-19T19:53:54Z)
- On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training
Aspect-based sentiment analysis (ABSA) aims at automatically inferring the specific sentiment polarities toward certain aspects of products or services behind social media texts or reviews.
We propose to enhance the ABSA robustness by systematically rethinking the bottlenecks from all possible angles, including model, data, and training.
arXiv Detail & Related papers (2023-04-19T11:07:43Z)
- Interventional Aspect-Based Sentiment Analysis
We propose a simple yet effective method, namely, Sentiment Adjustment (SENTA), by applying a backdoor adjustment to disentangle those confounding factors.
Experimental results on the Aspect Robustness Test Set (ARTS) dataset demonstrate that our approach improves the performance while maintaining accuracy in the original test set.
arXiv Detail & Related papers (2021-04-20T07:54:29Z)
- Understanding Pre-trained BERT for Aspect-based Sentiment Analysis
This paper analyzes the pre-trained hidden representations learned from reviews with BERT for tasks in aspect-based sentiment analysis (ABSA).
It is not clear how the general proxy task of (masked) language modeling, trained on an unlabeled corpus without aspect or opinion annotations, can provide important features for downstream ABSA tasks.
arXiv Detail & Related papers (2020-10-31T02:21:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.