Investigating Typed Syntactic Dependencies for Targeted Sentiment
Classification Using Graph Attention Neural Network
- URL: http://arxiv.org/abs/2002.09685v3
- Date: Thu, 17 Dec 2020 05:29:47 GMT
- Title: Investigating Typed Syntactic Dependencies for Targeted Sentiment
Classification Using Graph Attention Neural Network
- Authors: Xuefeng Bai, Pengbo Liu and Yue Zhang
- Abstract summary: We investigate a novel relational graph attention network that integrates typed syntactic dependency information.
Results show that our method can effectively leverage label information for improving targeted sentiment classification performance.
- Score: 10.489983726592303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Targeted sentiment classification predicts the sentiment polarity on given
target mentions in input texts. Dominant methods employ neural networks for
encoding the input sentence and extracting relations between target mentions
and their contexts. Recently, graph neural networks have been investigated for
integrating dependency syntax into the task, achieving state-of-the-art
results. However, existing methods do not consider dependency label
information, which can be intuitively useful. To solve the problem, we
investigate a novel relational graph attention network that integrates typed
syntactic dependency information. Results on standard benchmarks show that our
method can effectively leverage label information for improving targeted
sentiment classification performance. Our final model significantly
outperforms state-of-the-art syntax-based approaches.
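The paper's core idea, making each node's attention over its neighbours condition on the dependency label of the connecting arc, can be sketched in plain Python. This is an illustrative simplification under assumed details, not the authors' implementation: the function name `relational_attention_layer`, the additive label-bias scoring, and the toy label embeddings below are all hypothetical.

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [v / s for v in e]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def relational_attention_layer(h, edges, label_emb):
    """One pass of label-aware neighbourhood attention (hypothetical sketch).

    h         : dict mapping node -> feature vector
    edges     : list of (head, dependent, dependency_label) arcs
    label_emb : dict mapping dependency_label -> embedding vector
    Returns a dict of updated node representations.
    """
    # Collect labelled neighbours; arcs are treated as undirected here,
    # a common simplification in syntax-based GNNs.
    nbrs = {n: [] for n in h}
    for u, v, lbl in edges:
        nbrs[v].append((u, lbl))
        nbrs[u].append((v, lbl))
    out = {}
    for n, feats in h.items():
        if not nbrs[n]:
            out[n] = list(feats)  # isolated node keeps its features
            continue
        # Score each neighbour by feature similarity PLUS a bias from the
        # typed dependency label -- this is where label information enters.
        scores = [dot(feats, h[u]) + dot(feats, label_emb[lbl])
                  for u, lbl in nbrs[n]]
        alphas = softmax(scores)
        # Aggregate neighbour features weighted by attention.
        agg = [0.0] * len(feats)
        for a, (u, _) in zip(alphas, nbrs[n]):
            agg = [x + a * y for x, y in zip(agg, h[u])]
        out[n] = agg
    return out

# Toy dependency tree for "She likes pizza":
#   likes --nsubj--> She,  likes --obj--> pizza
h = {"likes": [1.0, 0.0], "She": [0.0, 1.0], "pizza": [0.0, 1.0]}
edges = [("likes", "She", "nsubj"), ("likes", "pizza", "obj")]
label_emb = {"nsubj": [0.2, 0.1], "obj": [0.2, 0.1]}
updated = relational_attention_layer(h, edges, label_emb)
```

An untyped graph attention network would drop the `label_emb` term, so an `nsubj` and an `obj` neighbour with identical features would always receive identical weights; the label bias is what lets the model distinguish them.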
Related papers
- Word and Phrase Features in Graph Convolutional Network for Automatic Question Classification [0.7405975743268344]
We propose a novel approach leveraging graph convolutional networks (GCNs) to better model the inherent structure of questions.
By representing questions as graphs, our method allows GCNs to learn from the interconnected nature of language more effectively.
Our findings demonstrate that GCNs, augmented with phrase-based features, offer a promising solution for more accurate and context-aware question classification.
arXiv Detail & Related papers (2024-09-04T07:13:30Z) - GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based
Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z) - On Discprecncies between Perturbation Evaluations of Graph Neural
Network Attributions [49.8110352174327]
We assess attribution methods from a perspective not previously explored in the graph domain: retraining.
The core idea is to retrain the network on important (or not important) relationships as identified by the attributions.
We run our analysis on four state-of-the-art GNN attribution methods and five synthetic and real-world graph classification datasets.
arXiv Detail & Related papers (2024-01-01T02:03:35Z) - Classification of vertices on social networks by multiple approaches [1.370151489527964]
In the case of social networks, it is crucial to evaluate the labels of discrete communities.
For each of these interaction-based entities, a social graph, a mailing dataset, and two citation sets are selected as the testbench repositories.
This paper not only assessed the most valuable method but also examined how graph neural networks work.
arXiv Detail & Related papers (2023-01-13T09:42:55Z) - Compositional Generalization in Grounded Language Learning via Induced
Model Sparsity [81.38804205212425]
We consider simple language-conditioned navigation problems in a grid world environment with disentangled observations.
We design an agent that encourages sparse correlations between words in the instruction and attributes of objects, composing them together to find the goal.
Our agent maintains a high level of performance on goals containing novel combinations of properties even when learning from a handful of demonstrations.
arXiv Detail & Related papers (2022-07-06T08:46:27Z) - Graph Adaptive Semantic Transfer for Cross-domain Sentiment
Classification [68.06496970320595]
Cross-domain sentiment classification (CDSC) aims to use the transferable semantics learned from the source domain to predict the sentiment of reviews in the unlabeled target domain.
We present the Graph Adaptive Semantic Transfer (GAST) model, an adaptive syntactic graph embedding method that is able to learn domain-invariant semantics from both word sequences and syntactic graphs.
arXiv Detail & Related papers (2022-05-18T07:47:01Z) - Learn from Structural Scope: Improving Aspect-Level Sentiment Analysis
with Hybrid Graph Convolutional Networks [6.116341682577877]
Aspect-level sentiment analysis aims to determine the sentiment polarity towards a specific target in a sentence.
We introduce the concept of Scope, which outlines a structural text region related to a specific target.
We propose a hybrid graph convolutional network (HGCN) to synthesize information from constituency tree and dependency tree.
arXiv Detail & Related papers (2022-04-27T09:10:22Z) - Hierarchical Heterogeneous Graph Representation Learning for Short Text
Classification [60.233529926965836]
We propose a new method called SHINE, which is based on graph neural networks (GNNs), for short text classification.
First, we model the short text dataset as a hierarchical heterogeneous graph consisting of word-level component graphs.
Then, we dynamically learn a short document graph that facilitates effective label propagation among similar short texts.
arXiv Detail & Related papers (2021-10-30T05:33:05Z) - Be More with Less: Hypergraph Attention Networks for Inductive Text
Classification [56.98218530073927]
Graph neural networks (GNNs) have received increasing attention in the research community and demonstrated promising results on text classification.
Despite the success, their performance could be largely jeopardized in practice since they are unable to capture high-order interaction between words.
We propose a principled model -- hypergraph attention networks (HyperGAT) which can obtain more expressive power with less computational consumption for text representation learning.
arXiv Detail & Related papers (2020-11-01T00:21:59Z) - Semantic Sentiment Analysis Based on Probabilistic Graphical Models and
Recurrent Neural Network [0.0]
The purpose of this study is to investigate the use of semantics to perform sentiment analysis based on probabilistic graphical models and recurrent neural networks.
The datasets used for the experiments were IMDB movie reviews, Amazon Consumer Product reviews, and Twitter Review datasets.
arXiv Detail & Related papers (2020-08-06T11:59:00Z) - Affinity Graph Supervision for Visual Recognition [35.35959846458965]
We propose a principled method to supervise the learning of weights in affinity graphs.
Our affinity supervision improves relationship recovery between objects, even without manually annotated relationship labels.
We show that affinity learning can also be applied to graphs built from mini-batches, for neural network training.
arXiv Detail & Related papers (2020-03-19T23:52:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.