Opinion Tree Parsing for Aspect-based Sentiment Analysis
- URL: http://arxiv.org/abs/2306.08925v1
- Date: Thu, 15 Jun 2023 07:53:14 GMT
- Title: Opinion Tree Parsing for Aspect-based Sentiment Analysis
- Authors: Xiaoyi Bao, Xiaotong Jiang, Zhongqing Wang, Yue Zhang, and Guodong
Zhou
- Abstract summary: We propose an opinion tree parsing model, aiming to parse all the sentiment elements from an opinion tree, which is much faster and can explicitly reveal a more comprehensive and complete aspect-level sentiment structure.
In particular, we first introduce a context-free opinion grammar to normalize the opinion tree structure. We then employ a neural chart-based opinion tree parser to fully explore the correlations among sentiment elements and parse them into an opinion tree structure.
- Score: 24.29073390167775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Extracting sentiment elements using pre-trained generative models has
recently led to large improvements in aspect-based sentiment analysis
benchmarks. However, these models always require large-scale computing resources
and ignore explicit modeling of the structure between sentiment elements.
To address these challenges, we propose an opinion tree parsing model, aiming
to parse all the sentiment elements from an opinion tree, which is much faster,
and can explicitly reveal a more comprehensive and complete aspect-level
sentiment structure. In particular, we first introduce a novel context-free
opinion grammar to normalize the opinion tree structure. We then employ a
neural chart-based opinion tree parser to fully explore the correlations among
sentiment elements and parse them into an opinion tree structure. Extensive
experiments show the superiority of our proposed model and the capacity of the
opinion tree parser with the proposed context-free opinion grammar. More
importantly, the results also prove that our model is much faster than previous
models.
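The abstract only names the two components; as a rough illustration (not the paper's code), the sketch below pairs a tiny assumed context-free opinion grammar, using placeholder nonterminals ROOT, QUAD, ASPECT, and OPINION, with a plain CKY-style chart parse over uniform span scores to build a bracketed opinion tree. In the actual model, a neural encoder would supply the span and label scores that the toy span_score function stands in for.

```python
# A minimal sketch, NOT the paper's implementation: a tiny assumed
# context-free opinion grammar plus a plain CKY-style chart parse.
# The nonterminals (ROOT, QUAD, ASPECT, OPINION), the rules, and the
# uniform span scores are illustrative assumptions; in the paper a
# neural encoder would score spans and labels.

from dataclasses import dataclass
from typing import Dict, List, Optional, Tuple

# Assumed binary grammar: (left child, right child) -> parent.
BINARY_RULES: Dict[Tuple[str, str], str] = {
    ("ASPECT", "OPINION"): "QUAD",
    ("QUAD", "QUAD"): "ROOT",
}
PRETERMINALS = {"ASPECT", "OPINION"}  # labels a single-token span may take


@dataclass
class Node:
    label: str
    start: int
    end: int  # exclusive
    children: Tuple["Node", ...] = ()

    def bracketed(self, tokens: List[str]) -> str:
        if not self.children:
            return f"({self.label} {' '.join(tokens[self.start:self.end])})"
        return f"({self.label} {' '.join(c.bracketed(tokens) for c in self.children)})"


def span_score(label: str, start: int, end: int) -> float:
    """Stand-in for the neural span/label scorer: uniform scores here."""
    return 1.0


def cky_parse(tokens: List[str]) -> Optional[Node]:
    """Return the highest-scoring tree licensed by the assumed grammar."""
    n = len(tokens)
    # chart[(start, end)][label] = (best score, best subtree)
    chart: Dict[Tuple[int, int], Dict[str, Tuple[float, Node]]] = {}

    # Length-1 spans get pre-terminal labels.
    for i in range(n):
        chart[(i, i + 1)] = {
            lab: (span_score(lab, i, i + 1), Node(lab, i, i + 1))
            for lab in PRETERMINALS
        }

    # Longer spans: combine best left/right subtrees under binary rules.
    for length in range(2, n + 1):
        for start in range(n - length + 1):
            end = start + length
            cell: Dict[str, Tuple[float, Node]] = {}
            for split in range(start + 1, end):
                for l_lab, (l_sc, l_node) in chart[(start, split)].items():
                    for r_lab, (r_sc, r_node) in chart[(split, end)].items():
                        parent = BINARY_RULES.get((l_lab, r_lab))
                        if parent is None:
                            continue
                        score = l_sc + r_sc + span_score(parent, start, end)
                        if parent not in cell or score > cell[parent][0]:
                            node = Node(parent, start, end, (l_node, r_node))
                            cell[parent] = (score, node)
            chart[(start, end)] = cell

    top = chart.get((0, n), {})
    best = top.get("ROOT") or top.get("QUAD")
    return best[1] if best else None


if __name__ == "__main__":
    tokens = "battery great screen awful".split()
    tree = cky_parse(tokens)
    if tree is not None:
        print(tree.bracketed(tokens))
```

For the toy input "battery great screen awful", the sketch prints (ROOT (QUAD (ASPECT battery) (OPINION great)) (QUAD (ASPECT screen) (OPINION awful))), illustrating how chart-based decoding makes the aspect-level structure explicit rather than leaving it implicit in a generated token sequence.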
Related papers
- Inherently Interpretable Tree Ensemble Learning [7.868733904112288]
We show that when shallow decision trees are used as base learners, the ensemble learning algorithms can become inherently interpretable.
An interpretation algorithm is developed that converts the tree ensemble into the functional ANOVA representation with inherent interpretability.
Experiments on simulations and real-world datasets show that our proposed methods offer a better trade-off between model interpretation and predictive performance.
arXiv Detail & Related papers (2024-10-24T18:58:41Z)
- Entailment Tree Explanations via Iterative Retrieval-Generation Reasoner [56.08919422452905]
We propose an architecture called Iterative Retrieval-Generation Reasoner (IRGR).
Our model is able to explain a given hypothesis by systematically generating a step-by-step explanation from textual premises.
We outperform existing benchmarks on premise retrieval and entailment tree generation, with around 300% gain in overall correctness.
arXiv Detail & Related papers (2022-05-18T21:52:11Z)
- Compositional Generalization Requires Compositional Parsers [69.77216620997305]
We compare sequence-to-sequence models and models guided by compositional principles on the recent COGS corpus.
We show structural generalization is a key measure of compositional generalization and requires models that are aware of complex structure.
arXiv Detail & Related papers (2022-02-24T07:36:35Z)
- To be Closer: Learning to Link up Aspects with Opinions [18.956990787407793]
Dependency parse trees are helpful for discovering the opinion words in aspect-based sentiment analysis (ABSA).
In this work, we aim to shorten the distance between aspects and corresponding opinion words by learning an aspect-centric tree structure.
The learning process allows the tree structure to adaptively correlate the aspect and opinion words, enabling us to better identify the polarity in the ABSA task.
arXiv Detail & Related papers (2021-09-17T07:37:13Z)
- Learning compositional structures for semantic graph parsing [81.41592892863979]
We show how AM dependency parsing can be trained directly on a neural latent-variable model.
Our model picks up on several linguistic phenomena on its own and achieves comparable accuracy to supervised training.
arXiv Detail & Related papers (2021-06-08T14:20:07Z)
- Unsupervised Learning of Explainable Parse Trees for Improved Generalisation [15.576061447736057]
We propose an attention mechanism over Tree-LSTMs to learn more meaningful and explainable parse tree structures.
We also demonstrate the superior performance of our proposed model on natural language inference, semantic relatedness, and sentiment analysis tasks.
arXiv Detail & Related papers (2021-04-11T12:10:03Z)
- From Sentiment Annotations to Sentiment Prediction through Discourse Augmentation [30.615883375573432]
We propose a novel framework to exploit task-related discourse for the task of sentiment analysis.
More specifically, we combine the large-scale, sentiment-dependent MEGA-DT treebank with a novel neural architecture for sentiment prediction.
Experiments show that our framework using sentiment-related discourse augmentations for sentiment prediction enhances the overall performance for long documents.
arXiv Detail & Related papers (2020-11-05T18:28:13Z)
- Improving Aspect-based Sentiment Analysis with Gated Graph Convolutional Networks and Syntax-based Regulation [89.38054401427173]
Aspect-based Sentiment Analysis (ABSA) seeks to predict the sentiment polarity of a sentence toward a specific aspect.
Dependency trees can be integrated into deep learning models to produce state-of-the-art performance for ABSA.
We propose a novel graph-based deep learning model to overcome these two issues.
arXiv Detail & Related papers (2020-10-26T07:36:24Z)
- Span-based Semantic Parsing for Compositional Generalization [53.24255235340056]
SpanBasedSP predicts a span tree over an input utterance, explicitly encoding how partial programs compose over spans in the input.
On GeoQuery, SCAN and CLOSURE, SpanBasedSP performs similarly to strong seq2seq baselines on random splits, but dramatically improves performance compared to baselines on splits that require compositional generalization.
arXiv Detail & Related papers (2020-09-13T16:42:18Z)
- Exploiting Syntactic Structure for Better Language Modeling: A Syntactic Distance Approach [78.77265671634454]
We make use of a multi-task objective, i.e., the models simultaneously predict words as well as ground truth parse trees in a form called "syntactic distances".
Experimental results on the Penn Treebank and Chinese Treebank datasets show that when ground truth parse trees are provided as additional training signals, the model is able to achieve lower perplexity and induce trees with better quality.
arXiv Detail & Related papers (2020-05-12T15:35:00Z)
- Relational Graph Attention Network for Aspect-based Sentiment Analysis [35.342467338880546]
Aspect-based sentiment analysis aims to determine the sentiment polarity towards a specific aspect in online reviews.
We propose a relational graph attention network (R-GAT) to encode the new tree structure for sentiment prediction.
Experiments are conducted on the SemEval 2014 and Twitter datasets.
arXiv Detail & Related papers (2020-04-26T12:21:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.