Diversified Multiscale Graph Learning with Graph Self-Correction
- URL: http://arxiv.org/abs/2103.09754v1
- Date: Wed, 17 Mar 2021 16:22:24 GMT
- Title: Diversified Multiscale Graph Learning with Graph Self-Correction
- Authors: Yuzhao Chen, Yatao Bian, Jiying Zhang, Xi Xiao, Tingyang Xu, Yu Rong,
Junzhou Huang
- Abstract summary: We propose a diversified multiscale graph learning model equipped with two core ingredients:
a graph self-correction (GSC) mechanism to generate informative embedded graphs, and a diversity boosting regularizer (DBR) to achieve a comprehensive characterization of the input graph.
Experiments on popular graph classification benchmarks show that the proposed GSC mechanism leads to significant improvements over state-of-the-art graph pooling methods.
- Score: 55.43696999424127
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although multiscale graph learning techniques have enabled advanced
feature extraction frameworks, the classic ensemble strategy may show inferior
performance when it encounters high homogeneity among the learnt
representations, a problem caused by the nature of existing graph pooling
methods. To cope with this issue, we propose a diversified multiscale graph
learning model equipped with two core ingredients: a graph self-correction
(GSC) mechanism to generate informative embedded graphs, and a diversity
boosting regularizer (DBR) to achieve a comprehensive characterization of the
input graph. The proposed GSC mechanism compensates the pooled graph for the
information lost during the graph pooling process by feeding back an estimated
residual graph, and serves as a plug-in component for popular graph pooling
methods. Meanwhile, pooling methods enhanced with the GSC procedure encourage
discrepancy among node embeddings, which contributes to the success of the
ensemble learning strategy. The proposed DBR instead enhances ensemble
diversity at the level of graph embeddings by leveraging the interaction among
individual classifiers. Extensive experiments on popular graph classification
benchmarks show that the proposed GSC mechanism leads to significant
improvements over state-of-the-art graph pooling methods. Moreover, the
ensemble multiscale graph learning models achieve further gains by
combining both GSC and DBR.
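The two ingredients described above can be sketched in simplified form. The snippet below is a minimal illustration assuming a plain top-k pooling scheme; the function names (`topk_pool`, `gsc_feedback`, `dbr_penalty`), the residual estimate (mean of the dropped-node features), and the cosine-similarity diversity penalty are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of graph self-correction (GSC) and a diversity
# boosting regularizer (DBR); simplified stand-ins for the paper's method.
import math

def topk_pool(X, k):
    """Keep the k nodes with the largest feature norm (a common pooling score)."""
    scores = [math.sqrt(sum(v * v for v in row)) for row in X]
    order = sorted(range(len(X)), key=lambda i: -scores[i])
    kept, dropped = order[:k], order[k:]
    return [X[i] for i in kept], [X[i] for i in dropped]

def gsc_feedback(pooled, dropped):
    """GSC-style correction (simplified): feed an estimated residual, here
    the mean feature of the dropped nodes, back into the pooled graph."""
    if not dropped:
        return pooled
    d = len(pooled[0])
    residual = [sum(row[j] for row in dropped) / len(dropped) for j in range(d)]
    return [[x + r for x, r in zip(row, residual)] for row in pooled]

def dbr_penalty(embeddings):
    """DBR-style penalty (simplified): mean pairwise cosine similarity of
    graph-level embeddings; a lower value indicates more diversity."""
    def cos(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))
    n = len(embeddings)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(cos(embeddings[i], embeddings[j]) for i, j in pairs) / len(pairs)

# Toy usage: pool 4 nodes down to 2, then correct with the dropped residual.
X = [[3.0, 0.0], [0.0, 2.0], [1.0, 1.0], [0.5, 0.5]]
pooled, dropped = topk_pool(X, k=2)
corrected = gsc_feedback(pooled, dropped)
```

In this toy run, the mean of the two dropped rows ([0.75, 0.75]) is added back to each pooled row, so the coarsened graph retains a trace of the discarded nodes; minimizing `dbr_penalty` over the per-scale embeddings would push the ensemble branches apart.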
Related papers
- Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Existing models estimate an initial graph beforehand in order to apply GCN.
A Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
arXiv Detail & Related papers (2024-02-25T07:03:37Z)
- Cross-View Graph Consistency Learning for Invariant Graph Representations [16.007232280413806]
We propose a cross-view graph consistency learning (CGCL) method that learns invariant graph representations for link prediction.
This paper empirically demonstrates the effectiveness of the proposed CGCL method.
arXiv Detail & Related papers (2023-11-20T14:58:47Z)
- Transforming Graphs for Enhanced Attribute Clustering: An Innovative Graph Transformer-Based Method [8.989218350080844]
This study introduces an innovative method known as the Graph Transformer Auto-Encoder for Graph Clustering (GTAGC).
By melding the Graph Auto-Encoder with the Graph Transformer, GTAGC is adept at capturing global dependencies between nodes.
The architecture of GTAGC encompasses graph embedding, integration of the Graph Transformer within the autoencoder structure, and a clustering component.
arXiv Detail & Related papers (2023-06-20T06:04:03Z)
- Towards Relation-centered Pooling and Convolution for Heterogeneous Graph Learning Networks [11.421162988355146]
Heterogeneous graph neural networks have shown great potential in graph representation learning.
We design a relation-centered Pooling and Convolution for Heterogeneous Graph learning Network, namely PC-HGN, to enable relation-specific sampling and cross-relation convolutions.
We evaluate the performance of the proposed model by comparing with state-of-the-art graph learning models on three different real-world datasets.
arXiv Detail & Related papers (2022-10-31T08:43:32Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming [48.99614465020678]
We introduce a novel self-supervised graph representation learning algorithm via Graph Contrastive Adjusted Zooming.
This mechanism enables G-Zoom to explore and extract self-supervision signals from a graph from multiple scales.
We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model outperforms state-of-the-art methods consistently.
arXiv Detail & Related papers (2021-08-15T13:14:28Z)
- Effective and Efficient Graph Learning for Multi-view Clustering [173.8313827799077]
We propose an effective and efficient graph learning model for multi-view clustering.
Our method exploits the similarity between graphs of different views by minimizing the tensor Schatten p-norm.
Our proposed algorithm is time-efficient, obtains stable results, and scales well with the data size.
arXiv Detail & Related papers (2021-04-13T06:22:24Z)
- Hierarchical Adaptive Pooling by Capturing High-order Dependency for Graph Representation Learning [18.423192209359158]
Graph neural networks (GNNs) have proven mature enough for handling graph-structured data on node-level graph representation learning tasks.
This paper proposes a hierarchical graph-level representation learning framework, which is adaptively sensitive to graph structures.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
- Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.