Psycholinguistic Tripartite Graph Network for Personality Detection
- URL: http://arxiv.org/abs/2106.04963v1
- Date: Wed, 9 Jun 2021 10:18:50 GMT
- Title: Psycholinguistic Tripartite Graph Network for Personality Detection
- Authors: Tao Yang, Feifan Yang, Haolan Ouyang, Xiaojun Quan
- Abstract summary: We propose a psycholinguistic knowledge-based tripartite graph network, TrigNet.
The graph network injects structural psycholinguistic knowledge from LIWC, a computerized instrument for psycholinguistic analysis.
To reduce the computational cost in graph learning, we propose a novel flow graph attention network (GAT) that only transmits messages between neighboring parties.
- Score: 9.620830488893098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most of the recent work on personality detection from online posts adopts
multifarious deep neural networks to represent the posts and builds predictive
models in a data-driven manner, without the exploitation of psycholinguistic
knowledge that may unveil the connections between one's language use and
psychological traits. In this paper, we propose a psycholinguistic
knowledge-based tripartite graph network, TrigNet, which consists of a
tripartite graph network and a BERT-based graph initializer. The graph network
injects structural psycholinguistic knowledge from LIWC, a computerized
instrument for psycholinguistic analysis, by constructing a heterogeneous
tripartite graph. The graph initializer is employed to provide initial
embeddings for the graph nodes. To reduce the computational cost in graph
learning, we further propose a novel flow graph attention network (GAT) that
only transmits messages between neighboring parties in the tripartite graph.
Benefiting from the tripartite graph, TrigNet can aggregate post information
from a psychological perspective, which is a novel way of exploiting domain
knowledge. Extensive experiments on two datasets show that TrigNet outperforms
the existing state-of-the-art model by 3.47 and 2.10 points in average F1.
Moreover, the flow GAT reduces FLOPs and memory usage by 38% and 32%,
respectively, in comparison to the original GAT in our setting.
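As a rough sketch of the flow idea, the snippet below restricts attention to edges between adjacent parties (e.g., post-word and word-category), so one "flow" step is a chain of bipartite attention hops rather than full attention over all node pairs. The wiring, sizes, and single attention head are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BipartiteGATLayer(nn.Module):
    """One attention hop from a source party to a target party. In a
    tripartite graph (posts, words, LIWC categories), a flow GAT chains
    such hops between adjacent parties instead of attending over all
    node pairs."""

    def __init__(self, dim):
        super().__init__()
        self.w_src = nn.Linear(dim, dim, bias=False)
        self.w_tgt = nn.Linear(dim, dim, bias=False)
        self.attn = nn.Linear(2 * dim, 1, bias=False)

    def forward(self, tgt, src, adj_mask):
        # tgt: (n_tgt, d), src: (n_src, d), adj_mask: (n_tgt, n_src) bool
        h_t, h_s = self.w_tgt(tgt), self.w_src(src)
        pairs = torch.cat([h_t.unsqueeze(1).expand(-1, h_s.size(0), -1),
                           h_s.unsqueeze(0).expand(h_t.size(0), -1, -1)], dim=-1)
        scores = F.leaky_relu(self.attn(pairs).squeeze(-1))  # (n_tgt, n_src)
        alpha = torch.softmax(scores.masked_fill(~adj_mask, float("-inf")), dim=-1)
        alpha = torch.nan_to_num(alpha)  # targets with no neighbors get zeros
        return F.elu(alpha @ h_s)

# One flow step, post -> word -> category (hypothetical wiring and sizes).
dim = 32
posts, words, cats = torch.randn(4, dim), torch.randn(20, dim), torch.randn(8, dim)
word_post = torch.rand(20, 4) > 0.5  # which posts contain each word
cat_word = torch.rand(8, 20) > 0.5   # which words belong to each LIWC category
layer = BipartiteGATLayer(dim)
words = layer(words, posts, word_post)  # words attend to their posts
cats = layer(cats, words, cat_word)     # categories attend to their words
```

Because each hop only scores existing cross-party edges, the cost grows with the number of edges between adjacent parties rather than with all node pairs, which is consistent with the FLOPs and memory savings reported above.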
Related papers
- Knowledge Enhanced Graph Neural Networks for Graph Completion [0.0]
Knowledge Enhanced Graph Neural Networks (KeGNN) is a neuro-symbolic framework for graph completion.
KeGNN consists of a graph neural network as a base upon which knowledge enhancement layers are stacked.
We instantiate KeGNN in conjunction with two state-of-the-art graph neural networks, Graph Convolutional Networks and Graph Attention Networks.
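As a loose illustration of such a knowledge enhancement layer, the sketch below assumes a KENN-style refinement for a single clause "A(x) implies B(x)"; the clause, its weighting, and the update rule are hypothetical stand-ins, not KeGNN's actual layers.

```python
import torch
import torch.nn as nn

class KnowledgeEnhancementLayer(nn.Module):
    """Hypothetical KENN-style layer for the clause "A(x) implies B(x)":
    it spends a learnable clause weight on whichever literal is easiest
    to raise, refining the base GNN's logits."""

    def __init__(self):
        super().__init__()
        self.clause_weight = nn.Parameter(torch.tensor(1.0))

    def forward(self, logits_a, logits_b):
        lits = torch.stack([-logits_a, logits_b], dim=-1)  # literals: not-A, B
        boost = torch.clamp(self.clause_weight, min=0.0) * torch.softmax(lits, dim=-1)
        # Raising "not A" lowers A's logit; raising B raises B's logit.
        return logits_a - boost[..., 0], logits_b + boost[..., 1]

# Usage: stack on top of any base GNN's per-node logits (e.g., GCN or GAT).
base_a, base_b = torch.randn(5), torch.randn(5)
refined_a, refined_b = KnowledgeEnhancementLayer()(base_a, base_b)
```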
arXiv Detail & Related papers (2023-03-27T07:53:43Z) - Affinity-Aware Graph Networks [9.888383815189176]
Graph Neural Networks (GNNs) have emerged as a powerful technique for learning on relational data.
We explore the use of affinity measures as features in graph neural networks.
We propose message passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks.
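One classical affinity measure usable as such a feature is effective resistance, which follows from the pseudoinverse of the graph Laplacian; the sketch below computes the full pairwise resistance matrix (which affinities the paper actually uses is not stated in this summary, so the choice here is an assumption).

```python
import numpy as np

def effective_resistance_features(adj):
    """Pairwise effective resistances: with Laplacian pseudoinverse L+,
    the resistance between u and v is L+[u,u] + L+[v,v] - 2*L+[u,v]."""
    lap = np.diag(adj.sum(axis=1)) - adj
    lp = np.linalg.pinv(lap)
    d = np.diag(lp)
    return d[:, None] + d[None, :] - 2 * lp

# The (u, v) entry can be appended to the usual edge inputs of a
# message-passing network.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
print(effective_resistance_features(adj)[0, 3])
```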
arXiv Detail & Related papers (2022-06-23T18:51:35Z) - Template based Graph Neural Network with Optimal Transport Distances [11.56532171513328]
Current Graph Neural Networks (GNN) architectures rely on two important components: node features embedding through message passing, and aggregation with a specialized form of pooling.
We propose in this work a novel point of view, which places distances to some learnable graph templates at the core of the graph representation.
This distance embedding is constructed thanks to an optimal transport distance: the Fused Gromov-Wasserstein (FGW) distance.
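A minimal sketch of evaluating an FGW distance against one template with the POT library's ot.gromov.fused_gromov_wasserstein2; in the paper the templates are learnable, whereas both graphs below are random placeholders.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

# A graph and a template, each given as a structure matrix C plus node
# features F; node weights are uniform.
rng = np.random.default_rng(0)
C1, F1 = (rng.random((6, 6)) > 0.6).astype(float), rng.random((6, 3))
C2, F2 = (rng.random((4, 4)) > 0.6).astype(float), rng.random((4, 3))
C1, C2 = np.maximum(C1, C1.T), np.maximum(C2, C2.T)  # symmetrize

M = ot.dist(F1, F2)        # feature cost matrix (squared Euclidean)
p = np.full(6, 1 / 6)
q = np.full(4, 1 / 4)

# FGW distance: alpha trades structure cost against feature cost.
d = ot.gromov.fused_gromov_wasserstein2(M, C1, C2, p, q, alpha=0.5)
print(d)
```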
arXiv Detail & Related papers (2022-05-31T12:24:01Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
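A hedged sketch of the unrolling: assume the observed adjacency is a low-order polynomial (a convolutional mixture) of a latent one, then unroll proximal gradient steps with learnable step sizes, thresholds, and mixture coefficients. The polynomial order, the nonnegative soft-threshold prox, and the symmetry assumption are all illustrative choices.

```python
import torch
import torch.nn as nn

class GraphDeconvolutionNet(nn.Module):
    """Unrolled proximal gradient descent for recovering a latent
    adjacency A when the observed one is modeled (here, hypothetically)
    as the mixture h0*I + h1*A + h2*A@A."""

    def __init__(self, n_layers=5):
        super().__init__()
        self.steps = nn.Parameter(torch.full((n_layers,), 0.1))
        self.thresh = nn.Parameter(torch.full((n_layers,), 0.01))
        self.h = nn.Parameter(torch.tensor([0.0, 1.0, 0.1]))  # mixture coeffs

    def forward(self, a_obs):
        a = a_obs.clone()
        eye = torch.eye(a.size(0))
        for step, thr in zip(self.steps, self.thresh):
            resid = self.h[0] * eye + self.h[1] * a + self.h[2] * a @ a - a_obs
            # Chain rule for the squared-error fit, assuming symmetric A.
            grad = self.h[1] * resid + self.h[2] * (resid @ a + a @ resid)
            a = torch.relu(a - step * grad - thr)  # prox: soft-threshold, keep >= 0
            a = 0.5 * (a + a.t())                  # stay symmetric
        return a

a_hat = GraphDeconvolutionNet()(torch.rand(8, 8))
```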
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Graph-in-Graph (GiG): Learning interpretable latent graphs in
non-Euclidean domain for biological and healthcare applications [52.65389473899139]
Graphs are a powerful tool for representing and analyzing unstructured, non-Euclidean data ubiquitous in the healthcare domain.
Recent works have shown that considering relationships between input data samples has a positive regularizing effect on the downstream task.
We propose Graph-in-Graph (GiG), a neural network architecture for protein classification and brain imaging applications.
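A minimal sketch of the graph-in-graph idea, assuming each input graph has already been pooled to a single embedding by a per-graph encoder: a latent population graph over samples is derived from embedding similarity and used for message passing across samples. The softmax adjacency and single mixing layer are assumptions.

```python
import torch
import torch.nn as nn

class LatentPopulationGraph(nn.Module):
    """Learn a latent graph between samples (each sample is itself a
    graph, pre-pooled to a vector) and classify over that graph."""

    def __init__(self, dim, n_classes, temp=1.0):
        super().__init__()
        self.temp = temp
        self.mix = nn.Linear(dim, dim)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, sample_emb):                 # (n_samples, dim)
        sim = sample_emb @ sample_emb.t() / self.temp
        latent_adj = torch.softmax(sim, dim=-1)    # soft, differentiable edges
        h = torch.relu(self.mix(latent_adj @ sample_emb))
        return self.head(h), latent_adj            # logits + interpretable graph

emb = torch.randn(16, 32)  # e.g., pooled outputs of a per-graph GNN encoder
logits, latent_adj = LatentPopulationGraph(32, 2)(emb)
```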
arXiv Detail & Related papers (2022-04-01T10:01:37Z) - Self-supervised Consensus Representation Learning for Attributed Graph [15.729417511103602]
We introduce a self-supervised learning mechanism to graph representation learning.
We propose a novel Self-supervised Consensus Representation Learning framework.
Our proposed SCRL method treats the graph from two perspectives: a topology graph and a feature graph.
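A loose sketch of the two-view setup, assuming the feature graph is a kNN graph over node attributes and consensus is enforced by pulling the two views' representations together; SCRL's actual encoders and objective may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def knn_graph(x, k=5):
    """Feature graph: connect each node to its k nearest neighbors."""
    dist = torch.cdist(x, x)
    idx = dist.topk(k + 1, largest=False).indices[:, 1:]  # drop self-loops
    adj = torch.zeros(x.size(0), x.size(0))
    adj.scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()

x = torch.randn(30, 16)                        # node attributes
topo_adj = (torch.rand(30, 30) > 0.9).float()  # given topology graph
feat_adj = knn_graph(x)                        # constructed feature graph

enc = nn.Linear(16, 8)                         # shared GCN-like encoder
z_topo = torch.relu(topo_adj @ enc(x))         # view 1: topology propagation
z_feat = torch.relu(feat_adj @ enc(x))         # view 2: feature propagation

# Consensus objective: align the two views' node representations.
loss = F.mse_loss(F.normalize(z_topo, dim=-1), F.normalize(z_feat, dim=-1))
```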
arXiv Detail & Related papers (2021-08-10T07:53:09Z) - Spectral Embedding of Graph Networks [76.27138343125985]
We introduce an unsupervised graph embedding that trades off local node similarity and connectivity, and global structure.
The embedding is based on a generalized graph Laplacian, whose eigenvectors compactly capture both network structure and neighborhood proximity in a single representation.
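A minimal sketch, substituting the standard symmetrically normalized Laplacian for the paper's generalized one: nodes are embedded with the leading nontrivial eigenvectors.

```python
import numpy as np
from scipy.linalg import eigh

def spectral_embedding(adj, dim):
    """Embed nodes via eigenvectors of the symmetric normalized Laplacian."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(deg, 1e-12)))
    lap = np.eye(len(adj)) - d_inv_sqrt @ adj @ d_inv_sqrt
    _, vecs = eigh(lap)            # eigenvalues in ascending order
    return vecs[:, 1:dim + 1]      # skip the trivial first eigenvector

adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 1],
                [0, 1, 1, 0]], dtype=float)
print(spectral_embedding(adj, 2))  # 2-dimensional node coordinates
```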
arXiv Detail & Related papers (2020-09-30T04:59:10Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
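A sketch in the spirit of that decoupling: transform features once, propagate them K hops without further transformation, and let a learned per-node gate weight the K+1 receptive-field sizes. Layer sizes and the gating form are assumptions.

```python
import torch
import torch.nn as nn

class AdaptivePropagation(nn.Module):
    """Decoupled transformation/propagation: an MLP transforms features
    once, then K parameter-free propagation steps produce views at
    growing receptive fields, combined by a learned per-node gate."""

    def __init__(self, in_dim, n_classes, k=10):
        super().__init__()
        self.k = k
        self.mlp = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_classes))
        self.gate = nn.Linear(n_classes, 1)

    def forward(self, x, adj_norm):
        h = self.mlp(x)                          # transformation only
        hs = [h]
        for _ in range(self.k):                  # propagation only
            hs.append(adj_norm @ hs[-1])
        stack = torch.stack(hs, dim=1)           # (n, k+1, n_classes)
        score = torch.sigmoid(self.gate(stack))  # weight per node and depth
        return (score * stack).sum(dim=1)        # adaptive receptive field

n = 12
adj = (torch.rand(n, n) > 0.8).float()
adj_norm = adj / adj.sum(-1, keepdim=True).clamp(min=1)
out = AdaptivePropagation(16, 3)(torch.randn(n, 16), adj_norm)
```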
arXiv Detail & Related papers (2020-07-18T01:11:14Z) - Graph Structure of Neural Networks [104.33754950606298]
We show how the graph structure of neural networks affects their predictive performance.
A "sweet spot" of relational graphs leads to neural networks with significantly improved predictive performance.
Top-performing neural networks have graph structure surprisingly similar to those of real biological neural networks.
arXiv Detail & Related papers (2020-07-13T17:59:31Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
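Pre-training of this kind typically optimizes an InfoNCE contrastive objective over paired subgraph views; the sketch below shows that generic loss (not GCC's exact code), where q and k would be GNN embeddings of two augmented views of the same ego network.

```python
import torch
import torch.nn.functional as F

def info_nce(q, k, temperature=0.07):
    """Generic InfoNCE: q[i] should match its positive key k[i] against
    every other key in the batch."""
    q, k = F.normalize(q, dim=-1), F.normalize(k, dim=-1)
    logits = q @ k.t() / temperature  # (batch, batch) similarity matrix
    labels = torch.arange(q.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

# q/k stand in for GNN embeddings of two random-walk subgraph views.
q, k = torch.randn(8, 32), torch.randn(8, 32)
print(info_nce(q, k))
```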
arXiv Detail & Related papers (2020-06-17T16:18:35Z) - Machine Learning on Graphs: A Model and Comprehensive Taxonomy [22.73365477040205]
We bridge the gap between graph neural networks, network embedding and graph regularization models.
Specifically, we propose a Graph Encoder Decoder Model (GRAPHEDM), which generalizes popular algorithms for semi-supervised learning on graphs.
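A skeleton of that encoder-decoder view, assuming the GRAPHEDM decomposition into an encoder producing embeddings Z, a similarity decoder, and a loss combining supervised and graph-regularization terms; the concrete layers are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphEDMSkeleton(nn.Module):
    """Encoder-decoder skeleton: specializing the encoder, decoder, and
    loss recovers network embedding, graph-regularized, and GNN models."""

    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.enc = nn.Linear(in_dim, emb_dim)

    def encode(self, x, adj):
        return torch.relu(adj @ self.enc(x))  # placeholder GCN-like encoder

    def decode(self, z):
        return torch.sigmoid(z @ z.t())       # similarity (link) decoder

    def loss(self, z, adj, logits=None, labels=None, lam=1.0):
        recon = F.binary_cross_entropy(self.decode(z), adj)
        sup = F.cross_entropy(logits, labels) if logits is not None else 0.0
        return sup + lam * recon              # supervised + regularization

x, adj = torch.randn(10, 16), (torch.rand(10, 10) > 0.8).float()
model = GraphEDMSkeleton(16, 8)
print(model.loss(model.encode(x, adj), adj))
```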
arXiv Detail & Related papers (2020-05-07T18:00:02Z)