Representation Learning on Heterostructures via Heterogeneous Anonymous
Walks
- URL: http://arxiv.org/abs/2201.06972v1
- Date: Tue, 18 Jan 2022 13:28:42 GMT
- Title: Representation Learning on Heterostructures via Heterogeneous Anonymous
Walks
- Authors: Xuan Guo, Pengfei Jiao, Ting Pan, Wang Zhang, Mengyu Jia, Danyang Shi,
Wenjun Wang
- Abstract summary: We take the first step for representation learning on heterostructures, which is very challenging due to their highly diverse combinations of node types and underlying structures.
We propose a theoretically guaranteed technique called heterogeneous anonymous walk (HAW) and its variant coarse HAW (CHAW).
We then devise the heterogeneous anonymous walk embedding (HAWE) and its variant coarse HAWE in a data-driven manner, which avoids enumerating the extremely large number of possible walks and trains embeddings by predicting the walks that occur in the neighborhood of each node.
- Score: 9.94967091840104
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Capturing structural similarity has been a hot topic in the field of network
embedding recently, owing to its value in understanding node functions and
behaviors. However, existing works have paid considerable attention to
learning structures on homogeneous networks, while the corresponding study on
heterogeneous networks remains largely unexplored. In this paper, we take the first
step for representation learning on heterostructures, which is very challenging
due to their highly diverse combinations of node types and underlying
structures. To effectively distinguish diverse heterostructures, we first
propose a theoretically guaranteed technique called heterogeneous anonymous
walk (HAW) and its variant coarse HAW (CHAW). Then, we devise the heterogeneous
anonymous walk embedding (HAWE) and its variant coarse HAWE in a data-driven
manner to avoid enumerating the extremely large number of possible walks, and
train embeddings by predicting the walks that occur in the neighborhood of each
node. Finally, we design and apply extensive and illustrative experiments on
synthetic and real-world networks to build a benchmark on heterostructure
learning and evaluate the effectiveness of our methods. The results demonstrate
that our methods achieve outstanding performance compared with both classic
homogeneous and heterogeneous methods, and can be applied to large-scale networks.
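The core idea behind the walk-based signatures above can be illustrated with a small sketch. The snippet below is a minimal, hypothetical illustration, not the paper's actual algorithm or released code: it anonymizes a random walk by replacing node ids with the position of their first occurrence and, in the spirit of HAW, pairs each position with the node's type so that walks over different nodes with the same structural-and-type pattern map to the same signature. The function names and the toy graph are our own.

```python
# Illustrative sketch of heterogeneous anonymous walks; names and the
# toy graph are hypothetical, not taken from the paper's implementation.
import random

def anonymize(walk, node_types):
    """Map a walk (list of node ids) to a heterogeneous anonymous form:
    each id becomes the index of its first occurrence in the walk,
    paired with that node's type."""
    first_seen = {}
    out = []
    for node in walk:
        if node not in first_seen:
            first_seen[node] = len(first_seen)
        out.append((first_seen[node], node_types[node]))
    return tuple(out)

def random_walk(adj, start, length, rng):
    """Uniform random walk of the given length over an adjacency dict."""
    walk = [start]
    for _ in range(length - 1):
        walk.append(rng.choice(adj[walk[-1]]))
    return walk

# Toy heterogeneous graph: one author node ("A") and two paper nodes ("P").
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
node_types = {1: "A", 2: "P", 3: "P"}
rng = random.Random(0)
print(anonymize(random_walk(adj, 1, 4, rng), node_types))

# Walks over different node ids with the same pattern and types coincide:
assert anonymize([1, 2, 1], node_types) == anonymize([1, 3, 1], node_types)
```

Collecting the distribution of such anonymous walks around each node then gives a type-aware structural signature; a coarser variant that keeps less of the type information would correspond, loosely, to the CHAW idea.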
Related papers
- Leveraging Invariant Principle for Heterophilic Graph Structure Distribution Shifts [42.77503881972965]
Heterophilic Graph Neural Networks (HGNNs) have shown promising results for semi-supervised learning tasks on graphs.
How to learn invariant node representations on heterophilic graphs to handle this structure difference or distribution shifts remains unexplored.
We propose HEI, a framework capable of generating invariant node representations through incorporating heterophily information.
arXiv Detail & Related papers (2024-08-18T14:10:34Z) - Learning Invariant Representations of Graph Neural Networks via Cluster
Generalization [58.68231635082891]
Graph neural networks (GNNs) have become increasingly popular in modeling graph-structured data.
In this paper, we experimentally find that the performance of GNNs drops significantly when the structure shift happens.
We propose the Cluster Information Transfer (CIT) mechanism, which can learn invariant representations for GNNs.
arXiv Detail & Related papers (2024-03-06T10:36:56Z) - Enhancing Representations through Heterogeneous Self-Supervised Learning [61.40674648939691]
We propose Heterogeneous Self-Supervised Learning (HSSL), which enforces a base model to learn from an auxiliary head whose architecture is heterogeneous from the base model.
The HSSL endows the base model with new characteristics in a representation learning way without structural changes.
The HSSL is compatible with various self-supervised methods, achieving superior performances on various downstream tasks.
arXiv Detail & Related papers (2023-10-08T10:44:05Z) - Demystifying Structural Disparity in Graph Neural Networks: Can One Size
Fit All? [61.35457647107439]
Most real-world homophilic and heterophilic graphs are comprised of a mixture of nodes in both homophilic and heterophilic structural patterns.
We provide evidence that Graph Neural Networks (GNNs) on node classification typically perform admirably on homophilic nodes.
We then propose a rigorous, non-i.i.d. PAC-Bayesian generalization bound for GNNs, revealing reasons for the performance disparity.
arXiv Detail & Related papers (2023-06-02T07:46:20Z) - Hierarchical Contrastive Learning Enhanced Heterogeneous Graph Neural
Network [59.860534520941485]
Heterogeneous graph neural networks (HGNNs), as an emerging technique, have shown a superior capacity for dealing with heterogeneous information networks (HINs).
Recently, contrastive learning, a self-supervised method, has become one of the most exciting learning paradigms and shows great potential when no labels are available.
In this paper, we study the problem of self-supervised HGNNs and propose a novel co-contrastive learning mechanism for HGNNs, named HeCo.
arXiv Detail & Related papers (2023-04-24T16:17:21Z) - Revisiting Heterophily in Graph Convolution Networks by Learning
Representations Across Topological and Feature Spaces [20.775165967590173]
Graph convolution networks (GCNs) have been enormously successful in learning representations over several graph-based machine learning tasks.
We argue that by learning graph representations across two spaces, i.e., topology and feature space, GCNs can address heterophily.
We experimentally demonstrate the performance of the proposed GCN framework on the semi-supervised node classification task.
arXiv Detail & Related papers (2022-11-01T16:21:10Z) - Powerful Graph Convolutional Networks with Adaptive Propagation Mechanism
for Homophily and Heterophily [38.50800951799888]
Graph Convolutional Networks (GCNs) have been widely applied in various fields due to their significant power on processing graph-structured data.
Existing methods deal with heterophily mainly by aggregating higher-order neighborhoods or combining the immediate representations.
This paper proposes a novel propagation mechanism, which can automatically change the propagation and aggregation process according to homophily or heterophily.
arXiv Detail & Related papers (2021-12-27T08:19:23Z) - Unsupervised Domain-adaptive Hash for Networks [81.49184987430333]
Domain-adaptive hash learning has enjoyed considerable success in the computer vision community.
We develop an unsupervised domain-adaptive hash learning method for networks, dubbed UDAH.
arXiv Detail & Related papers (2021-08-20T12:09:38Z) - Meta-Path-Free Representation Learning on Heterogeneous Networks [5.106061955284303]
We propose a novel meta-path-free representation learning method for heterogeneous networks, namely Heterogeneous graph Convolutional Networks (HCN).
The proposed method fuses the heterogeneous information and develops a $k$-strata algorithm ($k$ is an integer) to capture the $k$-hop structural and semantic information.
The experimental results demonstrate that the proposed method significantly outperforms the current state-of-the-art methods in a variety of analytic tasks.
arXiv Detail & Related papers (2021-02-16T12:37:38Z) - Layer-stacked Attention for Heterogeneous Network Embedding [0.0]
Layer-stacked ATTention Embedding (LATTE) is an architecture that automatically decomposes higher-order meta relations at each layer.
LATTE offers a more interpretable aggregation scheme for nodes of different types at different neighborhood ranges.
In both transductive and inductive node classification tasks, LATTE can achieve state-of-the-art performance compared to existing approaches.
arXiv Detail & Related papers (2020-09-17T05:13:41Z) - A Multi-Semantic Metapath Model for Large Scale Heterogeneous Network
Representation Learning [52.83948119677194]
We propose a multi-semantic metapath (MSM) model for large scale heterogeneous representation learning.
Specifically, we generate multi-semantic metapath-based random walks to construct the heterogeneous neighborhood to handle the unbalanced distributions.
We conduct systematical evaluations for the proposed framework on two challenging datasets: Amazon and Alibaba.
arXiv Detail & Related papers (2020-07-19T22:50:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.