Improving Aspect-based Sentiment Analysis with Gated Graph Convolutional
Networks and Syntax-based Regulation
- URL: http://arxiv.org/abs/2010.13389v1
- Date: Mon, 26 Oct 2020 07:36:24 GMT
- Title: Improving Aspect-based Sentiment Analysis with Gated Graph Convolutional
Networks and Syntax-based Regulation
- Authors: Amir Pouran Ben Veyseh, Nasim Nouri, Franck Dernoncourt, Quan Hung
Tran, Dejing Dou, Thien Huu Nguyen
- Abstract summary: Aspect-based Sentiment Analysis (ABSA) seeks to predict the sentiment polarity of a sentence toward a specific aspect.
Dependency trees can be integrated into deep learning models to produce state-of-the-art performance for ABSA.
We propose a novel graph-based deep learning model to overcome these two issues.
- Score: 89.38054401427173
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Aspect-based Sentiment Analysis (ABSA) seeks to predict the sentiment
polarity of a sentence toward a specific aspect. Recently, it has been shown
that dependency trees can be integrated into deep learning models to produce
the state-of-the-art performance for ABSA. However, these models tend to
compute the hidden/representation vectors without considering the aspect terms
and fail to benefit from the overall contextual importance scores of the words
that can be obtained from the dependency tree for ABSA. In this work, we
propose a novel graph-based deep learning model to overcome these two issues of
the prior work on ABSA. In our model, gate vectors are generated from the
representation vectors of the aspect terms to customize the hidden vectors of
the graph-based models toward the aspect terms. In addition, we propose a
mechanism to obtain the importance scores for each word in the sentences based
on the dependency trees that are then injected into the model to improve the
representation vectors for ABSA. The proposed model achieves the
state-of-the-art performance on three benchmark datasets.
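The two mechanisms described in the abstract can be sketched as a single graph-convolution layer. This is an illustrative reconstruction, not the authors' exact formulation: the layer shapes, the mean-pooled aspect gate, and multiplicative injection of the importance scores are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def aspect_gated_gcn_layer(A, H, aspect_idx, W_h, W_g, importance=None):
    """One graph-convolution step over a dependency graph, gated by the
    aspect term and regulated by syntax-based word importance scores.

    A: (n, n) adjacency matrix of the dependency tree (with self-loops)
    H: (n, d) word representation vectors
    aspect_idx: indices of the aspect-term tokens
    importance: optional (n,) syntax-based importance scores in [0, 1]
    """
    # Mean-pool the aspect tokens and derive an element-wise gate from them.
    aspect_vec = H[aspect_idx].mean(axis=0)
    gate = sigmoid(W_g @ aspect_vec)               # (d_out,)

    # Degree-normalized neighborhood aggregation (standard GCN step).
    deg = A.sum(axis=1, keepdims=True)
    H_agg = (A @ H) / np.clip(deg, 1.0, None)
    H_new = np.maximum(H_agg @ W_h, 0.0)           # ReLU, (n, d_out)

    # Customize hidden vectors toward the aspect terms, then inject the
    # dependency-tree importance scores.
    H_new = H_new * gate
    if importance is not None:
        H_new = H_new * importance[:, None]
    return H_new
```

In this sketch the gate scales every word's hidden vector by how relevant each feature dimension is to the aspect, while the importance scores rescale whole words; both act multiplicatively, so either can suppress context irrelevant to the aspect.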
Related papers
- Entity-Aware Biaffine Attention Model for Improved Constituent Parsing with Reduced Entity Violations [0.0]
We propose an entity-aware biaffine attention model for constituent parsing.
This model incorporates entity information into the biaffine attention mechanism by using additional entity role vectors for potential phrases.
We introduce a new metric, the Entity Violating Rate (EVR), to quantify the extent of entity violations in parsing results.
arXiv Detail & Related papers (2024-09-01T05:59:54Z)
- Occlusion Handling in 3D Human Pose Estimation with Perturbed Positional Encoding [15.834419910916933]
We propose a novel positional encoding technique, PerturbPE, that extracts consistent and regular components from the eigenbasis.
Our results support our theoretical findings; e.g., our experimental analysis observed a performance enhancement of up to 12% on the Human3.6M dataset.
Our novel approach significantly enhances performance in scenarios where two edges are missing, setting a new state-of-the-art benchmark.
arXiv Detail & Related papers (2024-05-27T17:48:54Z)
- RDGCN: Reinforced Dependency Graph Convolutional Network for Aspect-based Sentiment Analysis [43.715099882489376]
We propose a new reinforced dependency graph convolutional network (RDGCN) that improves the importance calculation of dependencies in both distance and type views.
Under the criterion, we design a distance-importance function that leverages reinforcement learning for weight distribution search and dissimilarity control.
Comprehensive experiments on three popular datasets demonstrate the effectiveness of the criterion and importance functions.
arXiv Detail & Related papers (2023-11-08T05:37:49Z)
- Relational Prior Knowledge Graphs for Detection and Instance Segmentation [24.360473253478112]
We propose a knowledge graph that enhances object features using relational priors.
Experimental evaluations on COCO show that the utilization of scene graphs, augmented with relational priors, offers benefits for object detection and instance segmentation.
arXiv Detail & Related papers (2023-10-11T15:15:05Z)
- Graph-level Representation Learning with Joint-Embedding Predictive Architectures [43.89120279424267]
Joint-Embedding Predictive Architectures (JEPAs) have emerged as a novel and powerful technique for self-supervised representation learning.
We show that graph-level representations can be effectively modeled using this paradigm by proposing a Graph Joint-Embedding Predictive Architecture (Graph-JEPA).
In particular, we employ masked modeling and focus on predicting the latent representations of masked subgraphs starting from the latent representation of a context subgraph.
arXiv Detail & Related papers (2023-09-27T20:42:02Z)
- Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
arXiv Detail & Related papers (2023-07-06T15:19:53Z)
- Unified Graph Structured Models for Video Understanding [93.72081456202672]
We propose a message passing graph neural network that explicitly models relational-temporal relations.
We show how our method is able to more effectively model relationships between relevant entities in the scene.
arXiv Detail & Related papers (2021-03-29T14:37:35Z)
- Enhanced Aspect-Based Sentiment Analysis Models with Progressive Self-supervised Attention Learning [103.0064298630794]
In aspect-based sentiment analysis (ABSA), many neural models are equipped with an attention mechanism to quantify the contribution of each context word to sentiment prediction.
We propose a progressive self-supervised attention learning approach for attentional ABSA models.
We integrate the proposed approach into three state-of-the-art neural ABSA models.
arXiv Detail & Related papers (2021-03-05T02:50:05Z)
- Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
Seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
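The token-level uncertainty studied here is the Shannon entropy of each position's output distribution. A minimal sketch of that computation (the shapes and the nat-based entropy are assumptions for illustration, not this paper's exact setup):

```python
import numpy as np

def token_entropy(logits):
    """Entropy (in nats) of each token-level output distribution.

    logits: (seq_len, vocab) unnormalized model scores.
    Returns: (seq_len,) per-position entropy; higher means more uncertain.
    """
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    p = np.exp(z)
    p /= p.sum(axis=-1, keepdims=True)
    # H = -sum_v p_v log p_v, treating 0 * log 0 as 0.
    return -np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=-1)
```

A uniform distribution over the vocabulary yields the maximum entropy log(vocab), while a sharply peaked distribution yields entropy near zero; tracking this per decoding step is one way to locate where a summarizer is copying versus generating freely.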
arXiv Detail & Related papers (2020-10-15T16:57:27Z)
- A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn).
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.