Neural Gaussian Similarity Modeling for Differential Graph Structure
Learning
- URL: http://arxiv.org/abs/2312.09498v1
- Date: Fri, 15 Dec 2023 02:45:33 GMT
- Title: Neural Gaussian Similarity Modeling for Differential Graph Structure
Learning
- Authors: Xiaolong Fan and Maoguo Gong and Yue Wu and Zedong Tang and Jieyi Liu
- Abstract summary: We construct a differential graph structure learning model by replacing the non-differentiable nearest neighbor sampling with differentiable sampling via the reparameterization trick.
We argue that sampling nearest neighbors is not always essential; to alleviate this, bell-shaped Gaussian Similarity (GauSim) modeling is proposed to sample non-nearest neighbors.
We also develop a scalable method that transfers the large-scale graph to a transition graph, significantly reducing the complexity.
- Score: 24.582257964387402
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Structure Learning (GSL) has demonstrated considerable potential in the
analysis of graph-unknown non-Euclidean data across a wide range of domains.
However, constructing an end-to-end graph structure learning model poses a
challenge due to the impediment of gradient flow caused by the nearest neighbor
sampling strategy. In this paper, we construct a differential graph structure
learning model by replacing the non-differentiable nearest neighbor sampling
with a differentiable sampling using the reparameterization trick. Under this
framework, we argue that the act of sampling nearest neighbors may not
invariably be essential, particularly in instances where node features exhibit
a significant degree of similarity. To alleviate this issue, the bell-shaped
Gaussian Similarity (GauSim) modeling is proposed to sample non-nearest
neighbors. To adaptively model the similarity, we further propose Neural
Gaussian Similarity (NeuralGauSim) with learnable parameters featuring flexible
sampling behaviors. In addition, we develop a scalable method by transferring
the large-scale graph to the transition graph to significantly reduce the
complexity. Experimental results demonstrate the effectiveness of the proposed
methods.
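As a rough illustration of the two ingredients the abstract names, the sketch below pairs a Gumbel-softmax reparameterized sampler (a standard differentiable stand-in for hard nearest-neighbor selection) with a bell-shaped Gaussian similarity kernel. This is a minimal sketch, not the paper's implementation: all variable names are invented, and the kernel parameters `mu` and `sigma` are fixed here, whereas NeuralGauSim would learn them.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(logits, tau=0.5):
    """Differentiable relaxation of categorical neighbor sampling via the
    reparameterization (Gumbel-softmax) trick; tau controls sharpness."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel noise
    y = (logits + g) / tau
    y = y - y.max()                  # numerical stability
    e = np.exp(y)
    return e / e.sum()               # soft one-hot over candidate neighbors

def gaussian_similarity(dist, mu=0.0, sigma=1.0):
    """Bell-shaped similarity: peaks at distance mu, so the highest sampling
    weight need not go to the nearest neighbors."""
    return np.exp(-((dist - mu) ** 2) / (2.0 * sigma ** 2))

# toy setup (hypothetical): one anchor node and five candidate neighbors
feats = rng.normal(size=(6, 4))
dists = np.linalg.norm(feats[1:] - feats[0], axis=1)
logits = np.log(gaussian_similarity(dists, mu=1.5, sigma=0.8) + 1e-12)
probs = gumbel_softmax_sample(logits)
```

Because the soft sample is a deterministic function of the noise and the logits, gradients can flow back through `probs` into the similarity parameters, which is the property the paper needs for end-to-end structure learning.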
Related papers
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
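The mean-shift connection mentioned above can be recalled with a generic mode-seeking update (this is textbook mean-shift, not the paper's derivation; the bandwidth and toy data are arbitrary):

```python
import numpy as np

def mean_shift_step(x, points, bandwidth=1.0):
    """One mean-shift update: move x to the Gaussian-kernel-weighted
    mean of the data points, i.e. toward a local density mode."""
    w = np.exp(-np.sum((points - x) ** 2, axis=1) / (2 * bandwidth ** 2))
    return (w[:, None] * points).sum(axis=0) / w.sum()

# toy data clustered near the origin; x is pulled toward the mode
points = np.array([[0.0, 0.0], [0.1, 0.0], [-0.1, 0.0], [0.0, 0.1]])
x = np.array([1.0, 1.0])
for _ in range(5):
    x = mean_shift_step(x, points)
```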
arXiv Detail & Related papers (2023-05-31T15:33:16Z) - Alleviating neighbor bias: augmenting graph self-supervised learning with
structural equivalent positive samples [1.0507062889290775]
We propose a signal-driven self-supervised method for graph representation learning.
It uses a topological information-guided structural equivalence sampling strategy.
The results show that the model performance can be effectively improved.
arXiv Detail & Related papers (2022-12-08T16:04:06Z) - Joint Network Topology Inference via a Shared Graphon Model [24.077455621015552]
We consider the problem of estimating the topology of multiple networks from nodal observations.
We adopt a graphon as our random graph model, which is a nonparametric model from which graphs of potentially different sizes can be drawn.
arXiv Detail & Related papers (2022-09-17T02:38:58Z) - Deep Manifold Learning with Graph Mining [80.84145791017968]
We propose a novel graph deep model with a non-gradient decision layer for graph mining.
The proposed model has achieved state-of-the-art performance compared to the current models.
arXiv Detail & Related papers (2022-07-18T04:34:08Z) - Score-based Generative Modeling of Graphs via the System of Stochastic
Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
arXiv Detail & Related papers (2022-02-05T08:21:04Z) - An Interpretable Graph Generative Model with Heterophily [38.59200985962146]
We propose the first edge-independent graph generative model that is expressive enough to capture heterophily.
Our experiments demonstrate the effectiveness of our model for a variety of important application tasks.
arXiv Detail & Related papers (2021-11-04T17:34:39Z) - Regularization of Mixture Models for Robust Principal Graph Learning [0.0]
A regularized version of Mixture Models is proposed to learn a principal graph from a distribution of $D$-dimensional data points.
Parameters of the model are iteratively estimated through an Expectation-Maximization procedure.
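As a generic reminder of what one Expectation-Maximization iteration looks like for a plain Gaussian mixture (the paper's regularized, graph-aware variant is omitted here; all names and the toy data are illustrative):

```python
import numpy as np

def em_step(x, mu, sigma, pi):
    """One EM iteration for a two-component 1-D Gaussian mixture.
    Illustrative only: the paper adds regularization on top of this."""
    def pdf(x, m, s):  # Gaussian density
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    # E-step: responsibilities r[k, i] = P(component k | x_i)
    r = np.stack([p * pdf(x, m, s) for p, m, s in zip(pi, mu, sigma)])
    r /= r.sum(axis=0)
    # M-step: re-estimate mixture weights, means, and variances
    n = r.sum(axis=1)
    pi_new = n / len(x)
    mu_new = (r * x).sum(axis=1) / n
    var = (r * (x[None, :] - mu_new[:, None]) ** 2).sum(axis=1) / n
    return mu_new, np.sqrt(var), pi_new

# toy data: two clusters near -2 and +2
x = np.array([-2.1, -1.9, -2.0, 1.9, 2.0, 2.1])
mu, sigma, pi = np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5])
for _ in range(5):
    mu, sigma, pi = em_step(x, mu, sigma, pi)
```

Each iteration alternates a soft cluster assignment (E-step) with closed-form parameter updates (M-step), which is the scheme the snippet above refers to.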
arXiv Detail & Related papers (2021-06-16T18:00:02Z) - Probabilistic Circuits for Variational Inference in Discrete Graphical
Models [101.28528515775842]
Inference in discrete graphical models with variational methods is difficult.
Many sampling-based methods have been proposed for estimating Evidence Lower Bound (ELBO)
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPN)
We show that selective-SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
arXiv Detail & Related papers (2020-10-22T05:04:38Z) - Permutation Invariant Graph Generation via Score-Based Generative
Modeling [114.12935776726606]
We propose a permutation invariant approach to modeling graphs, using the recent framework of score-based generative modeling.
In particular, we design a permutation equivariant, multi-channel graph neural network to model the gradient of the data distribution at the input graph.
For graph generation, we find that our learning approach achieves better or comparable results to existing models on benchmark datasets.
arXiv Detail & Related papers (2020-03-02T03:06:14Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework to such non-trivial ERGs that result in dyadic independence (i.e., edge independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.