LGD-GCN: Local and Global Disentangled Graph Convolutional Networks
- URL: http://arxiv.org/abs/2104.11893v3
- Date: Thu, 14 Dec 2023 14:48:01 GMT
- Title: LGD-GCN: Local and Global Disentangled Graph Convolutional Networks
- Authors: Jingwei Guo, Kaizhu Huang, Xinping Yi, Rui Zhang
- Abstract summary: Disentangled Graph Convolutional Network (DisenGCN) is an encouraging framework to disentangle the latent factors arising in a real-world graph.
We introduce a novel Local and Global Disentangled Graph Convolutional Network (LGD-GCN) to capture both local and global information for graph disentanglement.
- Score: 35.71362724342354
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disentangled Graph Convolutional Network (DisenGCN) is an encouraging
framework to disentangle the latent factors arising in a real-world graph.
However, it relies heavily on disentangling information from a local range
(i.e., a node and its 1-hop neighbors), while the local information in many
cases can be uneven and incomplete, hindering the interpretability and
model performance of DisenGCN. In this paper\footnote{This paper is a lighter
version of \href{https://jingweio.github.io/assets/pdf/tnnls22.pdf}{"Learning
Disentangled Graph Convolutional Networks Locally and Globally"} where the
results and analysis have been reworked substantially. Digital Object
Identifier \url{https://doi.org/10.1109/TNNLS.2022.3195336}.}, we introduce a
novel Local and Global Disentangled Graph Convolutional Network (LGD-GCN) to
capture both local and global information for graph disentanglement. LGD-GCN
performs a statistical mixture modeling to derive a factor-aware latent
continuous space, and then constructs different structures w.r.t. different
factors from the revealed space. In this way, the global factor-specific
information can be efficiently and selectively encoded via a message passing
along these built structures, strengthening the intra-factor consistency. We
also propose a novel diversity-promoting regularizer, employed with the latent
space modeling, to encourage inter-factor diversity. Evaluations of the
proposed LGD-GCN on synthetic and real-world datasets show better
interpretability and improved performance in node classification over the
existing competitive models. Code is available at
\url{https://github.com/jingweio/LGD-GCN}.
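As a rough illustration of the abstract's two core ideas, the sketch below splits node embeddings into factor-specific channels, computes mixture-style soft assignments of each channel to learnable factor centers, and scores a diversity-promoting regularizer as the mean pairwise cosine similarity between centers. This is a minimal NumPy sketch of the general technique, not the paper's actual implementation; all names (`factor_channels`, `soft_assignments`, `diversity_regularizer`) and the specific distance/softmax choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def factor_channels(H, K):
    """Split node embeddings H (N x d) into K factor-specific channels (N x K x d/K)."""
    N, d = H.shape
    return H.reshape(N, K, d // K)

def soft_assignments(Z, centers):
    """Mixture-style responsibilities of each node's channels to each factor center.

    Z: N x K x c channel embeddings; centers: K x c factor centers.
    Returns an N x K matrix via a softmax over negative squared distances.
    """
    d2 = ((Z - centers[None, :, :]) ** 2).sum(-1)   # N x K squared distances
    logits = -d2
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

def diversity_regularizer(centers):
    """Penalize similar factor centers: mean pairwise cosine similarity (lower is more diverse)."""
    C = centers / np.linalg.norm(centers, axis=1, keepdims=True)
    S = C @ C.T
    K = len(C)
    return S[~np.eye(K, dtype=bool)].mean()

H = rng.normal(size=(6, 8))       # 6 nodes, 8-dim embeddings
Z = factor_channels(H, K=4)       # 4 latent factors, 2 dims each
centers = rng.normal(size=(4, 2))
R = soft_assignments(Z, centers)  # rows sum to 1
print(R.shape)
```

In the paper's framing, such soft assignments would guide the construction of factor-specific structures for message passing, while the regularizer term would be added to the training loss.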
Related papers
- Self-Attention Empowered Graph Convolutional Network for Structure
Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
- FedGT: Federated Node Classification with Scalable Graph Transformer [27.50698154862779]
We propose a scalable Federated Graph Transformer (FedGT) in the paper.
FedGT computes clients' similarity based on the aligned global nodes with optimal transport.
arXiv Detail & Related papers (2024-01-26T21:02:36Z)
- Balancing between the Local and Global Structures (LGS) in Graph Embedding [1.4732811715354455]
We present a method for balancing between the Local and Global Structures (LGS) in graph embedding, via a tunable parameter.
We evaluate the performance of LGS with synthetic and real-world datasets and our results indicate that it is competitive with the state-of-the-art methods.
arXiv Detail & Related papers (2023-08-31T02:12:46Z)
- LSGNN: Towards General Graph Neural Network in Node Classification by Local Similarity [59.41119013018377]
We propose to use the local similarity (LocalSim) to learn node-level weighted fusion, which can also serve as a plug-and-play module.
For better fusion, we propose a novel and efficient Initial Residual Difference Connection (IRDC) to extract more informative multi-hop information.
Our proposed method, namely Local Similarity Graph Neural Network (LSGNN), offers performance comparable or superior to the state of the art on both homophilic and heterophilic graphs.
arXiv Detail & Related papers (2023-05-07T09:06:11Z)
- Graph Representation Learning via Contrasting Cluster Assignments [57.87743170674533]
We propose a novel unsupervised graph representation model by contrasting cluster assignments, called GRCCA.
It aims to make good use of both local and global information by combining clustering algorithms with contrastive learning.
GRCCA has strong competitiveness in most tasks.
arXiv Detail & Related papers (2021-12-15T07:28:58Z)
- Node-wise Localization of Graph Neural Networks [52.04194209002702]
Graph neural networks (GNNs) have emerged as a powerful family of representation learning models on graphs.
We propose a node-wise localization of GNNs by accounting for both global and local aspects of the graph.
We conduct extensive experiments on four benchmark graphs, and consistently obtain promising performance surpassing the state-of-the-art GNNs.
arXiv Detail & Related papers (2021-10-27T10:02:03Z)
- Local Augmentation for Graph Neural Networks [78.48812244668017]
We introduce local augmentation, which enhances node features using their local subgraph structures.
Based on the local augmentation, we further design a novel framework: LA-GNN, which can apply to any GNN models in a plug-and-play manner.
arXiv Detail & Related papers (2021-09-08T18:10:08Z)
- Multi-Level Graph Convolutional Network with Automatic Graph Learning for Hyperspectral Image Classification [63.56018768401328]
We propose a Multi-level Graph Convolutional Network (GCN) with Automatic Graph Learning method (MGCN-AGL) for HSI classification.
By employing an attention mechanism to characterize the importance among spatially neighboring regions, the most relevant information can be adaptively incorporated when making decisions.
Our MGCN-AGL encodes the long range dependencies among image regions based on the expressive representations that have been produced at local level.
arXiv Detail & Related papers (2020-09-19T09:26:20Z)
- Graph Highway Networks [77.38665506495553]
Graph Convolution Networks (GCN) are widely used in learning graph representations due to their effectiveness and efficiency.
They suffer from the notorious over-smoothing problem, in which the learned representations converge to similar vectors when many layers are stacked.
We propose Graph Highway Networks (GHNet) which utilize gating units to balance the trade-off between homogeneity and heterogeneity in the GCN learning process.
arXiv Detail & Related papers (2020-04-09T16:26:43Z)
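The gating idea behind GHNet can be sketched as a highway-style GCN layer: a learned per-dimension gate interpolates between the aggregated neighborhood signal (homogeneity) and the node's own raw features (heterogeneity). This is an illustrative NumPy sketch of the generic gating mechanism, not GHNet's exact formulation; the gate parameterization and normalization choices below are assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_update(H, A_hat, Wg, bg):
    """One highway-style GCN layer: gate between aggregated and raw features.

    H: N x d node features; A_hat: N x N normalized adjacency;
    Wg: 2d x d gate weight; bg: d gate bias.
    """
    H_agg = A_hat @ H                                             # neighborhood aggregation (homogeneity)
    gate = sigmoid(np.concatenate([H, H_agg], axis=1) @ Wg + bg)  # per-dimension gate in (0, 1)
    return gate * H_agg + (1.0 - gate) * H                        # retain some raw features (heterogeneity)

rng = np.random.default_rng(1)
N, d = 5, 4
H = rng.normal(size=(N, d))
A = (rng.random((N, N)) < 0.4).astype(float)
A = np.maximum(A, A.T)                   # symmetrize
np.fill_diagonal(A, 1.0)                 # add self-loops
D = A.sum(1)
A_hat = A / np.sqrt(np.outer(D, D))      # symmetric normalization D^{-1/2} A D^{-1/2}
Wg = rng.normal(scale=0.1, size=(2 * d, d))
bg = np.zeros(d)
H_next = gated_update(H, A_hat, Wg, bg)
print(H_next.shape)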
This list is automatically generated from the titles and abstracts of the papers in this site.