A Causal Disentangled Multi-Granularity Graph Classification Method
- URL: http://arxiv.org/abs/2310.16256v1
- Date: Wed, 25 Oct 2023 00:20:50 GMT
- Title: A Causal Disentangled Multi-Granularity Graph Classification Method
- Authors: Yuan Li, Li Liu, Penggang Chen, Youmin Zhang, Guoyin Wang
- Abstract summary: Some graph classification methods do not account for the multi-granularity characteristics of graph data.
This paper proposes a causal disentangled multi-granularity graph representation learning method (CDM-GNN) to solve this challenge.
The model exhibits strong classification performance and generates explanatory outcomes aligning with human cognitive patterns.
- Score: 18.15154299104419
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph data is ubiquitous in real life, with large volumes of data and complex
structures. It is necessary to map graph data to low-dimensional embeddings.
Graph classification, a critical graph task, mainly relies on identifying the
important substructures within the graph. At present, some graph classification
methods do not account for the multi-granularity characteristics of graph data.
This lack of granularity distinction in modeling leads to a conflation of key
information and spurious correlations within the model. As a result, achieving
a credible and interpretable model becomes challenging. This paper
proposes a causal disentangled multi-granularity graph representation learning
method (CDM-GNN) to solve this challenge. The CDM-GNN model disentangles the
important substructures and bias parts within the graph from a
multi-granularity perspective. This disentanglement reveals the important and
bias parts, forming the foundation for the classification task and, in
particular, for the model's interpretations. The CDM-GNN model exhibits strong
classification performance and generates explanatory outcomes aligning with
human cognitive patterns. To verify the effectiveness of the model, this paper
conducts experiments on three real-world datasets: MUTAG, PTC, and IMDB-M. Six
state-of-the-art models, namely GCN, GAT, Top-k, ASAPool, SUGAR, and SAT, are
employed for comparison. Additionally, a qualitative analysis of the
interpretation results is conducted.
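The abstract does not include code. As a minimal illustrative sketch of the disentanglement idea (all names are hypothetical, and the thresholding here is a stand-in for whatever learned mechanism CDM-GNN actually uses), a score per edge can split a graph into an "important" part and a "bias" part:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def disentangle_edges(edges, edge_scores, threshold=0.5):
    """Split a graph's edge set into an 'important' part and a 'bias' part.

    edge_scores are raw logits (e.g. produced by a learned scorer); edges
    with sigmoid(score) >= threshold are treated as causally important.
    """
    important, bias = [], []
    for edge, score in zip(edges, edge_scores):
        (important if sigmoid(score) >= threshold else bias).append(edge)
    return important, bias

# Toy graph: a triangle (the "important" motif) plus two noisy edges.
edges = [(0, 1), (1, 2), (0, 2), (2, 3), (3, 4)]
scores = [2.0, 1.5, 1.8, -1.0, -2.2]  # hypothetical learned logits
important, bias = disentangle_edges(edges, scores)
print(important)  # [(0, 1), (1, 2), (0, 2)]
print(bias)       # [(2, 3), (3, 4)]
```

The important part would then feed the classifier, while the bias part is held out, which is what makes the split usable as an interpretation.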
Related papers
- Introducing Diminutive Causal Structure into Graph Representation Learning [19.132025125620274]
We introduce a novel method that enables Graph Neural Networks (GNNs) to glean insights from specialized diminutive causal structures.
Our method specifically extracts causal knowledge from the model representation of these diminutive causal structures.
arXiv Detail & Related papers (2024-06-13T00:18:20Z)
- Latent Graph Inference using Product Manifolds [0.0]
We generalize the discrete Differentiable Graph Module (dDGM) for latent graph learning.
Our novel approach is tested on a wide range of datasets, and outperforms the original dDGM model.
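As an illustrative sketch of latent graph inference, a graph can be built by connecting each node to its nearest neighbours in an embedding space. This is a simplified, deterministic stand-in for the differentiable sampling used by dDGM-style modules; all names are hypothetical:

```python
import math

def knn_latent_graph(embeddings, k=2):
    """Infer a latent graph by linking each node to its k nearest
    neighbours in embedding space (a non-differentiable stand-in for
    dDGM-style top-k edge sampling)."""
    n = len(embeddings)
    edges = set()
    for i in range(n):
        dists = []
        for j in range(n):
            if i == j:
                continue
            dists.append((math.dist(embeddings[i], embeddings[j]), j))
        # Keep the k closest neighbours; store edges undirected.
        for _, j in sorted(dists)[:k]:
            edges.add((min(i, j), max(i, j)))
    return sorted(edges)

# Two clusters of latent points; the inferred graph stays within clusters.
points = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0), (5.1, 5.0)]
print(knn_latent_graph(points, k=1))  # [(0, 1), (0, 2), (3, 4)]
```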
arXiv Detail & Related papers (2022-11-26T22:13:06Z)
- Generating the Graph Gestalt: Kernel-Regularized Graph Representation Learning [47.506013386710954]
A complete scientific understanding of graph data should address both global and local structure.
We propose a joint model for both as complementary objectives in a graph VAE framework.
Our experiments demonstrate a significant improvement in the realism of the generated graph structures, typically by 1-2 orders of magnitude on graph structure metrics.
arXiv Detail & Related papers (2021-06-29T10:48:28Z)
- A Deep Latent Space Model for Graph Representation Learning [10.914558012458425]
We propose a Deep Latent Space Model (DLSM) for directed graphs to incorporate the traditional latent variable based generative model into deep learning frameworks.
Our proposed model consists of a graph convolutional network (GCN) encoder and a decoder, which are layer-wise connected by a hierarchical variational auto-encoder architecture.
Experiments on real-world datasets show that the proposed model achieves the state-of-the-art performances on both link prediction and community detection tasks.
arXiv Detail & Related papers (2021-06-22T12:41:19Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from an implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- Graph Classification by Mixture of Diverse Experts [67.33716357951235]
We present GraphDIVE, a framework leveraging mixture of diverse experts for imbalanced graph classification.
With a divide-and-conquer principle, GraphDIVE employs a gating network to partition an imbalanced graph dataset into several subsets.
Experiments on real-world imbalanced graph datasets demonstrate the effectiveness of GraphDIVE.
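The gating step can be sketched as follows. This is a minimal stand-in for GraphDIVE's learned gating network, not its implementation; the weights and feature values are hypothetical:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def gate(graph_feature, gate_weights):
    """Route a graph-level feature vector to experts.

    gate_weights holds one weight vector per expert; the softmax over
    the dot products gives the mixing coefficients, so each graph is
    softly assigned to the expert best suited to its subset.
    """
    logits = [sum(w * x for w, x in zip(weights, graph_feature))
              for weights in gate_weights]
    return softmax(logits)

# Two hypothetical experts, e.g. one tuned to the majority class,
# one to the minority class of an imbalanced dataset.
weights = [[1.0, 0.0], [0.0, 1.0]]
probs = gate([2.0, 0.5], weights)
print(max(range(len(probs)), key=probs.__getitem__))  # expert 0 wins
```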
arXiv Detail & Related papers (2021-03-29T14:03:03Z)
- Multilayer Clustered Graph Learning [66.94201299553336]
We use contrastive loss as a data fidelity term, in order to properly aggregate the observed layers into a representative graph.
Experiments show that our method leads to clusters consistent with the ground truth.
We also propose a clustering algorithm for solving clustering problems on the learned representative graph.
arXiv Detail & Related papers (2020-10-29T09:58:02Z)
- Adaptive Graph Auto-Encoder for General Data Clustering [90.8576971748142]
Graph-based clustering plays an important role in the clustering area.
Recent studies about graph convolution neural networks have achieved impressive success on graph type data.
We propose a graph auto-encoder for general data clustering, which constructs the graph adaptively according to the generative perspective of graphs.
arXiv Detail & Related papers (2020-02-20T10:11:28Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework to such non-trivial ERGs that result in dyadic independence (i.e., edge independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
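Dyadic independence means each edge is included independently with its own probability, which is what makes sampling and fitting scale to large sparse graphs. A minimal sketch (hypothetical names, not the authors' code) with a tiny two-block edge-probability matrix:

```python
import random

def sample_edge_independent(probs, seed=0):
    """Sample a graph from a dyadic-independence (edge-independent)
    distribution: each undirected edge (i, j) is drawn as an independent
    Bernoulli trial with probability probs[i][j]."""
    rng = random.Random(seed)
    edges = []
    n = len(probs)
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < probs[i][j]:
                edges.append((i, j))
    return edges

# A tiny two-block model: dense within blocks, sparse across them.
p_in, p_out = 0.9, 0.05
probs = [[p_in if (i < 2) == (j < 2) else p_out for j in range(4)]
         for i in range(4)]
print(sample_edge_independent(probs, seed=1))
```

Because each dyad is independent, sampling is a single pass over (potential) edges, and for sparse graphs only the non-negligible probabilities need to be visited.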
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
- GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks [45.824642013383944]
Graph neural networks (GNNs) have been shown to be successful in effectively representing graph-structured data.
We propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso.
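A bare-bones illustration of the HSIC ingredient, without the Lasso regularization that GraphLIME actually adds on top; the data and names are hypothetical. The idea is that a feature with high HSIC against the GNN's prediction is more relevant to the local explanation:

```python
import math

def gaussian_gram(values, sigma=1.0):
    """Gram matrix of a Gaussian kernel over a 1-D sample."""
    n = len(values)
    return [[math.exp(-(values[i] - values[j]) ** 2 / (2 * sigma ** 2))
             for j in range(n)] for i in range(n)]

def center(K):
    """Double-center a Gram matrix (K -> HKH)."""
    n = len(K)
    row = [sum(r) / n for r in K]
    col = [sum(K[i][j] for i in range(n)) / n for j in range(n)]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - col[j] + tot for j in range(n)]
            for i in range(n)]

def hsic(x, y):
    """Empirical HSIC between two scalar samples with Gaussian kernels;
    higher means stronger statistical dependence."""
    K, L = center(gaussian_gram(x)), center(gaussian_gram(y))
    n = len(x)
    return sum(K[i][j] * L[j][i]
               for i in range(n) for j in range(n)) / (n - 1) ** 2

# Hypothetical GNN outputs for 4 nodes: feature 0 drives the prediction,
# feature 1 is nearly constant noise.
pred = [0.1, 0.9, 0.2, 0.8]
feat0 = [0.0, 1.0, 0.1, 0.9]   # correlated with pred
feat1 = [0.5, 0.4, 0.6, 0.5]   # uninformative
print(hsic(feat0, pred) > hsic(feat1, pred))  # True
```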
arXiv Detail & Related papers (2020-01-17T09:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.