Explainable Sentence-Level Sentiment Analysis for Amazon Product Reviews
- URL: http://arxiv.org/abs/2111.06070v1
- Date: Thu, 11 Nov 2021 06:35:42 GMT
- Title: Explainable Sentence-Level Sentiment Analysis for Amazon Product Reviews
- Authors: Xuechun Li, Xueyao Sun, Zewei Xu, Yifan Zhou
- Abstract summary: We examine the attention-weight distribution of single sentences and the attention weights of the main aspect terms.
We find that the aspect terms receive attention weights equal to or greater than those of the sentiment words in a sentence.
- Score: 2.599882743586164
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we conduct sentence-level sentiment analysis on Amazon
product reviews and a thorough analysis of the model's interpretability. For
the sentiment analysis task, we use a BiLSTM model with an attention mechanism.
For the study of interpretability, we consider the attention-weight
distribution of single sentences and the attention weights of the main aspect terms.
The model achieves an accuracy of up to 0.96, and we find that the aspect terms
receive attention weights equal to or greater than those of the sentiment words
in a sentence.
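The abstract describes a BiLSTM whose per-token attention weights are then inspected. The sketch below (not the authors' code; the dot-product scorer, dimensions, scoring vector `w`, and toy inputs are illustrative assumptions) shows only the attention-pooling step that produces the per-token weight distribution such an analysis examines:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_pool(hidden_states, w):
    """Attention pooling over per-token hidden states (e.g. BiLSTM outputs).

    hidden_states: (seq_len, dim) array of per-token representations
    w:             (dim,) scoring vector (learned in a real model; fixed here)
    Returns the pooled context vector and the per-token attention weights.
    """
    scores = hidden_states @ w           # (seq_len,) one score per token
    weights = softmax(scores)            # attention distribution over tokens
    context = weights @ hidden_states    # (dim,) weighted sum of states
    return context, weights

# Toy example: 4 tokens with 3-dimensional hidden states.
rng = np.random.default_rng(0)
H = rng.normal(size=(4, 3))
w = rng.normal(size=3)
context, weights = attention_pool(H, w)
print(weights)  # sums to 1; these per-token weights are what the paper inspects
```

In the paper's analysis, the weights assigned to aspect-term tokens would be compared against those of sentiment-word tokens.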
Related papers
- On the Causal Nature of Sentiment Analysis [98.44087655454244]
Sentiment analysis (SA) aims to identify the sentiment expressed in a text, such as a product review.
This paper formulates SA as a combination of two tasks.
For the prediction task, we use the discovered causal mechanisms behind the samples to improve the performance of LLMs.
arXiv Detail & Related papers (2024-04-17T04:04:34Z)
- Sentiment analysis and opinion mining on E-commerce site [0.0]
The goal of this study is to solve the sentiment polarity classification challenges in sentiment analysis.
A broad technique for categorizing sentiment opposition is presented, along with comprehensive process explanations.
arXiv Detail & Related papers (2022-11-28T16:43:33Z)
- Causal Intervention Improves Implicit Sentiment Analysis [67.43379729099121]
We propose a causal intervention model for Implicit Sentiment Analysis using Instrumental Variable (ISAIV).
We first review sentiment analysis from a causal perspective and analyze the confounders existing in this task.
Then, we introduce an instrumental variable to eliminate the confounding causal effects, thus extracting the pure causal effect between sentence and sentiment.
arXiv Detail & Related papers (2022-08-19T13:17:57Z)
- SBERT studies Meaning Representations: Decomposing Sentence Embeddings into Explainable AMR Meaning Features [22.8438857884398]
We create similarity metrics that are highly effective, while also providing an interpretable rationale for their rating.
Our approach works in two steps: We first select AMR graph metrics that measure meaning similarity of sentences with respect to key semantic facets.
Second, we employ these metrics to induce Semantically Structured Sentence BERT embeddings, which are composed of different meaning aspects captured in different sub-spaces.
arXiv Detail & Related papers (2022-06-14T17:37:18Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Is Sparse Attention more Interpretable? [52.85910570651047]
We investigate how sparsity affects our ability to use attention as an explainability tool.
We find that, under sparse attention, only a weak relationship exists between inputs and co-indexed intermediate representations.
We observe that in this setting, inducing sparsity makes it less plausible that attention can be used as a tool for understanding model behavior.
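Sparse attention of the kind this paper studies is commonly induced by replacing softmax with a sparsity-inducing transform. The sketch below implements one such transform, sparsemax (Martins & Astudillo, 2016), as an illustration of how some tokens can receive exactly zero attention; the implementation and toy input are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def sparsemax(z):
    """Sparsemax: a sparse alternative to softmax.

    Projects the score vector z onto the probability simplex, so the
    result sums to 1 but may assign exactly zero weight to some entries.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]            # scores in descending order
    cssv = np.cumsum(z_sorted)             # cumulative sums of sorted scores
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cssv      # entries kept in the support
    k_z = k[support][-1]                   # size of the support
    tau = (cssv[support][-1] - 1) / k_z    # threshold subtracted from scores
    return np.maximum(z - tau, 0.0)

weights = sparsemax([3.0, 0.0, -3.0])
print(weights)  # [1. 0. 0.] -- all mass on one token, the rest exactly zero
```

Unlike softmax, which always assigns nonzero weight to every token, sparsemax can zero out tokens entirely, which is exactly the setting in which the paper questions attention's value as an explanation.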
arXiv Detail & Related papers (2021-06-02T11:42:56Z)
- Do Context-Aware Translation Models Pay the Right Attention? [61.25804242929533]
Context-aware machine translation models are designed to leverage contextual information, but often fail to do so.
In this paper, we ask several questions: What contexts do human translators use to resolve ambiguous words?
We introduce SCAT (Supporting Context for Ambiguous Translations), a new English-French dataset comprising supporting context words for 14K translations.
Using SCAT, we perform an in-depth analysis of the context used to disambiguate, examining positional and lexical characteristics of the supporting words.
arXiv Detail & Related papers (2021-05-14T17:32:24Z)
- Enhanced Aspect-Based Sentiment Analysis Models with Progressive Self-supervised Attention Learning [103.0064298630794]
In aspect-based sentiment analysis (ABSA), many neural models are equipped with an attention mechanism to quantify the contribution of each context word to sentiment prediction.
We propose a progressive self-supervised attention learning approach for attentional ABSA models.
We integrate the proposed approach into three state-of-the-art neural ABSA models.
arXiv Detail & Related papers (2021-03-05T02:50:05Z)
- Structured Self-Attention Weights Encode Semantics in Sentiment Analysis [13.474141732019099]
We show that self-attention scores encode semantics by considering sentiment analysis tasks.
We propose a simple and effective Layer-wise Attention Tracing (LAT) method to analyze structured attention weights.
Our results show that structured attention weights encode rich semantics in sentiment analysis, and match human interpretations of semantics.
arXiv Detail & Related papers (2020-10-10T06:49:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.