Auto-ABSA: Automatic Detection of Aspects in Aspect-Based Sentiment
Analysis
- URL: http://arxiv.org/abs/2202.00484v1
- Date: Wed, 5 Jan 2022 04:23:29 GMT
- Title: Auto-ABSA: Automatic Detection of Aspects in Aspect-Based Sentiment
Analysis
- Authors: Teng Wang
- Abstract summary: We propose a method that uses an auxiliary sentence about the aspects a sentence contains to help sentiment prediction.
The first step is aspect detection, which uses a multi-aspect detection model to predict all aspects a sentence has.
The second is out-of-domain aspect-based sentiment analysis (ABSA): training a sentiment classification model on one kind of dataset and validating it on another.
- Score: 2.6944907189507323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Since the Transformer was proposed, many pre-trained language models
have emerged and the sentiment analysis (SA) task has improved. In this paper,
we propose a method that uses an auxiliary sentence about the aspects a
sentence contains to help sentiment prediction. The first step is aspect
detection, which uses a multi-aspect detection model to predict all aspects
the sentence has; the predicted aspects and the original sentence are then
combined as the sentiment analysis (SA) model's input. The second step is
out-of-domain aspect-based sentiment analysis (ABSA): training the sentiment
classification model on one kind of dataset and validating it on another.
Finally, we created two baselines that use no aspects and all aspects,
respectively, as the sentiment classification model's input. Comparing the two
baselines' performance with our method shows that our method is effective.
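The two-stage pipeline the abstract describes can be sketched as follows. This is a minimal illustration only: the keyword-based detector, the `ASPECT_KEYWORDS` table, and the `[SEP]`-joined input format are assumptions standing in for the paper's trained multi-aspect detection model and its actual input encoding.

```python
# Hypothetical keyword table standing in for a trained multi-aspect
# detection model (the paper trains a real classifier for this step).
ASPECT_KEYWORDS = {
    "food": {"pizza", "pasta", "taste", "menu"},
    "service": {"waiter", "staff", "service"},
    "price": {"price", "cheap", "expensive"},
}

def detect_aspects(sentence: str) -> list[str]:
    """Step 1: predict all aspects the sentence mentions."""
    tokens = {t.strip(".,!?").lower() for t in sentence.split()}
    return sorted(a for a, kws in ASPECT_KEYWORDS.items() if tokens & kws)

def build_model_input(sentence: str) -> str:
    """Step 2: combine the predicted aspects (as an auxiliary sentence)
    with the original sentence to form the sentiment model's input."""
    aspects = detect_aspects(sentence)
    auxiliary = "Aspects: " + ", ".join(aspects) if aspects else "Aspects: none"
    return f"{auxiliary} [SEP] {sentence}"

print(build_model_input("The pizza was great but the staff was rude."))
# prints: Aspects: food, service [SEP] The pizza was great but the staff was rude.
```

In the out-of-domain setting, the same `build_model_input` encoding would be applied to a dataset from a different domain than the one the sentiment classifier was trained on.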
Related papers
- Rationalizing Predictions by Adversarial Information Calibration [65.19407304154177]
We train two models jointly: one is a typical neural model that solves the task at hand in an accurate but black-box manner, and the other is a selector-predictor model that additionally produces a rationale for its prediction.
We use an adversarial technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.
arXiv Detail & Related papers (2023-01-15T03:13:09Z)
- Aspect-Based Sentiment Analysis using Local Context Focus Mechanism with DeBERTa [23.00810941211685]
Aspect-Based Sentiment Analysis (ABSA) is a fine-grained task in the field of sentiment analysis.
The recent DeBERTa model (Decoding-enhanced BERT with disentangled attention) is applied to the Aspect-Based Sentiment Analysis problem.
arXiv Detail & Related papers (2022-07-06T03:50:31Z)
- A Generative Language Model for Few-shot Aspect-Based Sentiment Analysis [90.24921443175514]
We focus on aspect-based sentiment analysis, which involves extracting aspect terms and categories and predicting their corresponding polarities.
We propose to reformulate the extraction and prediction tasks into the sequence generation task, using a generative language model with unidirectional attention.
Our approach outperforms the previous state-of-the-art (based on BERT) in average performance by a large margin in few-shot and full-shot settings.
arXiv Detail & Related papers (2022-04-11T18:31:53Z)
- A Simple Information-Based Approach to Unsupervised Domain-Adaptive Aspect-Based Sentiment Analysis [58.124424775536326]
We propose a simple but effective technique based on mutual information to extract aspect terms.
Experiment results show that our proposed method outperforms the state-of-the-art methods for cross-domain ABSA by 4.32% Micro-F1.
arXiv Detail & Related papers (2022-01-29T10:18:07Z)
- AES Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses [66.49753193098356]
We investigate the reason behind the surprising adversarial brittleness of scoring models.
Our results indicate that autoscoring models, despite getting trained as "end-to-end" models, behave like bag-of-words models.
We propose detection-based protection models that can detect oversensitivity- and overstability-causing samples with high accuracy.
arXiv Detail & Related papers (2021-09-24T03:49:38Z)
- Multi-Aspect Sentiment Analysis with Latent Sentiment-Aspect Attribution [7.289918297809611]
We introduce a new framework called the sentiment-aspect attribution module (SAAM).
The framework works by exploiting the correlations between sentence-level embedding features and variations of document-level aspect rating scores.
Experiments on a hotel review dataset and a beer review dataset have shown SAAM can improve sentiment analysis performance.
arXiv Detail & Related papers (2020-12-15T16:34:36Z)
- Transformer-based Multi-Aspect Modeling for Multi-Aspect Multi-Sentiment Analysis [56.893393134328996]
We propose a novel Transformer-based Multi-aspect Modeling scheme (TMM), which can capture potential relations between multiple aspects and simultaneously detect the sentiment of all aspects in a sentence.
Our method achieves noticeable improvements compared with strong baselines such as BERT and RoBERTa.
arXiv Detail & Related papers (2020-11-01T11:06:31Z)
- Weakly-Supervised Aspect-Based Sentiment Analysis via Joint Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn ⟨sentiment, aspect⟩ joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z)
- A Position Aware Decay Weighted Network for Aspect based Sentiment Analysis [3.1473798197405944]
In ABSA, a text can have multiple sentiments depending upon each aspect.
Most of the existing approaches for ATSA incorporate aspect information through a separate subnetwork.
In this paper, we propose a model that leverages the positional information of the aspect.
arXiv Detail & Related papers (2020-05-03T09:22:03Z)
- A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep Contextual Word Embeddings and Hierarchical Attention [4.742874328556818]
We extend the state-of-the-art Hybrid Approach for Aspect-Based Sentiment Analysis (HAABSA) in two directions.
First, we replace the non-contextual word embeddings with deep contextual word embeddings in order to better cope with word semantics in a given text.
Second, we use hierarchical attention by adding an extra attention layer to the HAABSA high-level representations in order to increase the method's flexibility in modeling the input data.
arXiv Detail & Related papers (2020-04-18T17:54:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.