Auto-ABSA: Cross-Domain Aspect Detection and Sentiment Analysis Using Auxiliary Sentences
- URL: http://arxiv.org/abs/2202.00484v3
- Date: Tue, 15 Oct 2024 00:28:02 GMT
- Title: Auto-ABSA: Cross-Domain Aspect Detection and Sentiment Analysis Using Auxiliary Sentences
- Authors: Teng Wang, Bolun Sun, Yijie Tong
- Abstract summary: We propose a method that uses an auxiliary sentence, listing the aspects a sentence contains, to aid sentiment prediction.
The first step is aspect detection: a multi-aspect detection model predicts all aspects present in the sentence.
The second is out-of-domain aspect-based sentiment analysis (ABSA): training a sentiment classification model on one kind of dataset and validating it on another.
- Score: 1.368483823700914
- Abstract: Since the Transformer was proposed, many pre-trained language models have emerged and the sentiment analysis (SA) task has improved. In this paper, we propose a method that uses an auxiliary sentence, listing the aspects a sentence contains, to aid sentiment prediction. The first step is aspect detection: a multi-aspect detection model predicts all aspects present in the sentence. The predicted aspects are then combined with the original sentence to form the SA model's input. The second is out-of-domain aspect-based sentiment analysis (ABSA): training a sentiment classification model on one kind of dataset and validating it on another. Finally, we created two baselines that use no aspects and all aspects, respectively, as the sentiment classification model's input. Comparing the two baselines' performance against our method shows that our method is effective.
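The two-part input the abstract describes can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the keyword-matching aspect detector stands in for their multi-aspect detection model, and the `[SEP]`-joined input format is an assumption about how the auxiliary sentence and the original sentence might be combined for a BERT-style classifier.

```python
def detect_aspects(sentence, known_aspects):
    """Stand-in for the multi-aspect detection model: a simple keyword
    match over a fixed aspect inventory (illustrative only)."""
    lowered = sentence.lower()
    return [a for a in known_aspects if a in lowered]

def build_input(sentence, aspects, sep_token="[SEP]"):
    """Combine the predicted aspects (as an auxiliary sentence) with the
    original sentence, mirroring the paper's two-part input."""
    auxiliary = "The sentence discusses " + ", ".join(aspects) + "."
    return f"{auxiliary} {sep_token} {sentence}"

sentence = "The food was great but the service was slow."
aspects = detect_aspects(sentence, ["food", "service", "price"])
model_input = build_input(sentence, aspects)
# aspects -> ['food', 'service']
```

The two baselines from the abstract fall out naturally: pass an empty aspect list (no aspects) or the full inventory (all aspects) to `build_input` instead of the detector's predictions.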
Related papers
- Bidirectional Generative Framework for Cross-domain Aspect-based
Sentiment Analysis [68.742820522137]
Cross-domain aspect-based sentiment analysis (ABSA) aims to perform various fine-grained sentiment analysis tasks on a target domain by transferring knowledge from a source domain.
We propose a unified bidirectional generative framework to tackle various cross-domain ABSA tasks.
Our framework trains a generative model in both text-to-label and label-to-text directions.
arXiv Detail & Related papers (2023-05-16T15:02:23Z) - Aspect-Based Sentiment Analysis using Local Context Focus Mechanism with
DeBERTa [23.00810941211685]
Aspect-Based Sentiment Analysis (ABSA) is a fine-grained task in the field of sentiment analysis.
The recent DeBERTa model (Decoding-enhanced BERT with disentangled attention) is applied to solve the Aspect-Based Sentiment Analysis problem.
arXiv Detail & Related papers (2022-07-06T03:50:31Z) - A Contrastive Cross-Channel Data Augmentation Framework for Aspect-based
Sentiment Analysis [91.83895509731144]
We propose a novel training framework to mitigate the multi-aspect challenge of sentiment analysis.
A source sentence is fed to a domain-specific generator to obtain synthetic sentences.
The generator produces aspect-specific sentences, and a Polarity Augmentation (PAC) component generates polarity-inverted sentences.
Our framework can outperform those baselines without any augmentations by about 1% on accuracy and Macro-F1.
arXiv Detail & Related papers (2022-04-16T16:05:58Z) - A Simple Information-Based Approach to Unsupervised Domain-Adaptive
Aspect-Based Sentiment Analysis [58.124424775536326]
We propose a simple but effective technique based on mutual information to extract aspect terms.
Experiment results show that our proposed method outperforms the state-of-the-art methods for cross-domain ABSA by 4.32% Micro-F1.
arXiv Detail & Related papers (2022-01-29T10:18:07Z) - Double Perturbation: On the Robustness of Robustness and Counterfactual
Bias Evaluation [109.06060143938052]
We propose a "double perturbation" framework to uncover model weaknesses beyond the test dataset.
We apply this framework to study two perturbation-based approaches that are used to analyze models' robustness and counterfactual bias in English.
arXiv Detail & Related papers (2021-04-12T06:57:36Z) - Multi-Aspect Sentiment Analysis with Latent Sentiment-Aspect Attribution [7.289918297809611]
We introduce a new framework called the sentiment-aspect attribution module (SAAM)
The framework works by exploiting the correlations between sentence-level embedding features and variations of document-level aspect rating scores.
Experiments on a hotel review dataset and a beer review dataset have shown SAAM can improve sentiment analysis performance.
arXiv Detail & Related papers (2020-12-15T16:34:36Z) - Transformer-based Multi-Aspect Modeling for Multi-Aspect Multi-Sentiment
Analysis [56.893393134328996]
We propose a novel Transformer-based Multi-aspect Modeling scheme (TMM), which can capture potential relations between multiple aspects and simultaneously detect the sentiment of all aspects in a sentence.
Our method achieves noticeable improvements compared with strong baselines such as BERT and RoBERTa.
arXiv Detail & Related papers (2020-11-01T11:06:31Z) - Weakly-Supervised Aspect-Based Sentiment Analysis via Joint
Aspect-Sentiment Topic Embedding [71.2260967797055]
We propose a weakly-supervised approach for aspect-based sentiment analysis.
We learn <sentiment, aspect> joint topic embeddings in the word embedding space.
We then use neural models to generalize the word-level discriminative information.
arXiv Detail & Related papers (2020-10-13T21:33:24Z) - A Position Aware Decay Weighted Network for Aspect based Sentiment
Analysis [3.1473798197405944]
In ABSA, a text can have multiple sentiments depending upon each aspect.
Most existing approaches for ATSA incorporate aspect information through a separate subnetwork.
In this paper, we propose a model that leverages the positional information of the aspect.
arXiv Detail & Related papers (2020-05-03T09:22:03Z) - A Hybrid Approach for Aspect-Based Sentiment Analysis Using Deep
Contextual Word Embeddings and Hierarchical Attention [4.742874328556818]
We extend the state-of-the-art Hybrid Approach for Aspect-Based Sentiment Analysis (HAABSA) in two directions.
First, we replace the non-contextual word embeddings with deep contextual word embeddings in order to better cope with word semantics in a given text.
Second, we use hierarchical attention by adding an extra attention layer to the HAABSA high-level representations, increasing the method's flexibility in modeling the input data.
arXiv Detail & Related papers (2020-04-18T17:54:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.