Layer-stacked Attention for Heterogeneous Network Embedding
- URL: http://arxiv.org/abs/2009.08072v1
- Date: Thu, 17 Sep 2020 05:13:41 GMT
- Title: Layer-stacked Attention for Heterogeneous Network Embedding
- Authors: Nhat Tran, Jean Gao
- Abstract summary: Layer-stacked ATTention Embedding (LATTE) is an architecture that automatically decomposes higher-order meta relations at each layer.
LATTE offers a more interpretable aggregation scheme for nodes of different types at different neighborhood ranges.
In both transductive and inductive node classification tasks, LATTE achieves state-of-the-art performance compared to existing approaches.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The heterogeneous network is a robust data abstraction that can model
entities of different types interacting in various ways. Such heterogeneity
brings rich semantic information but presents nontrivial challenges in
aggregating the heterogeneous relationships between objects - especially those
of higher-order indirect relations. Recent graph neural network approaches for
representation learning on heterogeneous networks typically employ the
attention mechanism, which is often only optimized for predictions based on
direct links. Furthermore, even though most deep learning methods can aggregate
higher-order information by building deeper models, such a scheme can diminish
the degree of interpretability. To overcome these challenges, we explore an
architecture - Layer-stacked ATTention Embedding (LATTE) - that automatically
decomposes higher-order meta relations at each layer to extract the relevant
heterogeneous neighborhood structures for each node. Additionally, by
successively stacking layer representations, the learned node embedding offers
a more interpretable aggregation scheme for nodes of different types at
different neighborhood ranges. We conducted experiments on several benchmark
heterogeneous network datasets. In both transductive and inductive node
classification tasks, LATTE achieves state-of-the-art performance compared
with existing approaches while remaining lightweight. Through extensive
experimental analyses and visualizations, we demonstrate the framework's
ability to extract informative insights from heterogeneous networks.
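The abstract describes attention applied per (decomposed) meta relation, with the representations from successive layers stacked into the final node embedding. Below is a minimal NumPy sketch of that idea, not the authors' implementation: the author/paper node types, the GAT-style additive scoring, the two-hop "papers sharing an author" relation, and all shapes are illustrative assumptions.

```python
import numpy as np

def softmax_masked(logits, mask, axis=-1):
    """Softmax over `axis`, restricted to entries where mask is nonzero."""
    logits = np.where(mask > 0, logits, -1e9)
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    e = e * (mask > 0)
    return e / np.maximum(e.sum(axis=axis, keepdims=True), 1e-12)

def relation_attention(h_src, h_dst, adj, W, a_src, a_dst):
    """One attention layer over a single (source type -> target type) relation.

    h_src: (n_src, d_in) source features; h_dst: (n_dst, d_in) target features
    adj:   (n_dst, n_src) adjacency of the relation; W: (d_in, d_out) projection
    a_src, a_dst: (d_out,) attention parameters (GAT-style additive scoring)
    """
    z_src, z_dst = h_src @ W, h_dst @ W
    logits = (z_dst @ a_dst)[:, None] + (z_src @ a_src)[None, :]
    alpha = softmax_masked(logits, adj, axis=1)   # per-target attention weights
    return np.maximum(alpha @ z_src, 0.0)         # ReLU-aggregated messages

rng = np.random.default_rng(0)
n_author, n_paper, d_in, d_out = 4, 5, 8, 6
h_author = rng.normal(size=(n_author, d_in))
h_paper = rng.normal(size=(n_paper, d_in))
adj_pa = (rng.random((n_paper, n_author)) < 0.5).astype(float)  # paper <- author

# Layer 1: a first-order relation (author -> paper).
W1 = rng.normal(size=(d_in, d_out))
layer1 = relation_attention(h_author, h_paper, adj_pa, W1,
                            rng.normal(size=d_out), rng.normal(size=d_out))

# Layer 2: a second-order relation (papers sharing an author), standing in
# for the decomposed higher-order meta relations described in the abstract.
adj_pp = (adj_pa @ adj_pa.T > 0).astype(float)
W2 = rng.normal(size=(d_out, d_out))
layer2 = relation_attention(layer1, layer1, adj_pp, W2,
                            rng.normal(size=d_out), rng.normal(size=d_out))

# Final embedding: successively stacked layer representations.
final = np.concatenate([layer1, layer2], axis=1)
print(final.shape)  # (5, 12)
```

Stacking (rather than overwriting) the per-layer outputs is what lets each neighborhood range remain separately inspectable in the final embedding.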
Related papers
- Flexible inference in heterogeneous and attributed multilayer networks [21.349513661012498]
We develop a probabilistic generative model to perform inference in multilayer networks with arbitrary types of information.
We demonstrate its ability to unveil a variety of patterns in a social support network among villagers in rural India.
arXiv Detail & Related papers (2024-05-31T15:21:59Z) - FMGNN: Fused Manifold Graph Neural Network [102.61136611255593]
Graph representation learning has been widely studied and demonstrated effectiveness in various graph tasks.
We propose the Fused Manifold Graph Neural Network (FMGNN), a novel GNN architecture that embeds graphs into different manifolds during training.
Our experiments demonstrate that FMGNN yields superior performance over strong baselines on node classification and link prediction benchmarks.
arXiv Detail & Related papers (2023-04-03T15:38:53Z) - Multiplex Heterogeneous Graph Convolutional Network [25.494590588212542]
This work proposes a Multiplex Heterogeneous Graph Convolutional Network (MHGCN) for heterogeneous network embedding.
Our MHGCN can automatically learn the useful heterogeneous meta-path interactions of different lengths in multiplex heterogeneous networks.
arXiv Detail & Related papers (2022-08-12T06:17:54Z) - HybridGNN: Learning Hybrid Representation in Multiplex Heterogeneous
Networks [26.549559266395775]
We propose HybridGNN, an end-to-end graph neural network model with hybrid aggregation flows and hierarchical attentions.
We show that HybridGNN achieves the best performance compared to several state-of-the-art baselines.
arXiv Detail & Related papers (2022-08-03T13:39:47Z) - SHGNN: Structure-Aware Heterogeneous Graph Neural Network [77.78459918119536]
This paper proposes a novel Structure-Aware Heterogeneous Graph Neural Network (SHGNN) to address the above limitations.
We first utilize a feature propagation module to capture the local structure information of intermediate nodes in the meta-path.
Next, we use a tree-attention aggregator to incorporate the graph structure information into the aggregation module on the meta-path.
Finally, we leverage a meta-path aggregator to fuse the information aggregated from different meta-paths.
arXiv Detail & Related papers (2021-12-12T14:18:18Z) - DisenHAN: Disentangled Heterogeneous Graph Attention Network for
Recommendation [11.120241862037911]
Heterogeneous information network has been widely used to alleviate sparsity and cold start problems in recommender systems.
We propose a novel disentangled heterogeneous graph attention network DisenHAN for top-$N$ recommendation.
arXiv Detail & Related papers (2021-06-21T06:26:10Z) - Redefining Neural Architecture Search of Heterogeneous Multi-Network
Models by Characterizing Variation Operators and Model Components [71.03032589756434]
We investigate the effect of different variation operators in a complex domain, that of multi-network heterogeneous neural models.
We characterize both the variation operators, according to their effect on the complexity and performance of the model, and the models themselves, using diverse metrics that estimate the quality of their component parts.
arXiv Detail & Related papers (2021-06-16T17:12:26Z) - Reinforced Neighborhood Selection Guided Multi-Relational Graph Neural
Networks [68.9026534589483]
RioGNN is a novel Reinforced, recursive and flexible neighborhood selection guided multi-relational Graph Neural Network architecture.
RioGNN learns more discriminative node embeddings with enhanced explainability by recognizing the individual importance of each relation.
arXiv Detail & Related papers (2021-04-16T04:30:06Z) - Anomaly Detection on Attributed Networks via Contrastive Self-Supervised
Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z) - Learning the Implicit Semantic Representation on Graph-Structured Data [57.670106959061634]
Existing representation learning methods in graph convolutional networks are mainly designed by describing the neighborhood of each node as a perceptual whole.
We propose a Semantic Graph Convolutional Networks (SGCN) that explores the implicit semantics by learning latent semantic-paths in graphs.
arXiv Detail & Related papers (2021-01-16T16:18:43Z) - GAHNE: Graph-Aggregated Heterogeneous Network Embedding [32.44836376873812]
Heterogeneous network embedding aims to embed nodes into low-dimensional vectors which capture rich intrinsic information of heterogeneous networks.
Existing models either depend on manually designing meta-paths, ignore mutual effects between different semantics, or omit some aspects of information from global networks.
In the GAHNE model, we develop several mechanisms that aggregate semantic representations from different single-type sub-networks and fuse global information into the final embeddings.
arXiv Detail & Related papers (2020-12-23T07:11:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.