An Invertible Graph Diffusion Neural Network for Source Localization
- URL: http://arxiv.org/abs/2206.09214v1
- Date: Sat, 18 Jun 2022 14:35:27 GMT
- Title: An Invertible Graph Diffusion Neural Network for Source Localization
- Authors: Junxiang Wang, Junji Jiang, and Liang Zhao
- Abstract summary: This paper aims to establish a generic framework of invertible graph diffusion models for source localization on graphs.
Specifically, we propose a graph residual scenario to make existing graph diffusion models invertible with theoretical guarantees.
We also develop a novel error compensation mechanism that learns to offset the errors of the inferred sources.
- Score: 8.811725212252544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Localizing the source of graph diffusion phenomena, such as misinformation
propagation, is an important yet extremely challenging task. Existing source
localization models are typically heavily dependent on hand-crafted rules.
Unfortunately, a large portion of the graph diffusion process for many
applications is still unknown to human beings, so it is important to have
expressive models for learning such underlying rules automatically. This paper
aims to establish a generic framework of invertible graph diffusion models for
source localization on graphs, namely Invertible Validity-aware Graph Diffusion
(IVGD), to handle major challenges including 1) difficulty in leveraging
knowledge in graph diffusion models for modeling their inverse processes in an
end-to-end fashion, 2) difficulty in ensuring the validity of the inferred
sources, and 3) efficiency and scalability in source inference. Specifically,
first, to inversely infer sources of graph diffusion, we propose a graph
residual scenario to make existing graph diffusion models invertible with
theoretical guarantees; second, we develop a novel error compensation mechanism
that learns to offset the errors of the inferred sources. Finally, to ensure
the validity of the inferred sources, a new set of validity-aware layers has
been devised to project inferred sources onto feasible regions by flexibly
encoding constraints with unrolled optimization techniques. A linearization
technique is proposed to strengthen the efficiency of the proposed layers. The
convergence of the proposed IVGD is proven theoretically. Extensive experiments
on nine real-world datasets demonstrate that IVGD significantly outperforms
state-of-the-art comparison methods. We have released our code at
https://github.com/xianggebenben/IVGD.
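
The graph residual scenario described above rests on a standard property of residual maps: y = x + f(x) can be inverted by fixed-point iteration whenever f is a contraction. The following is a minimal illustrative sketch of that mechanism, not the authors' implementation; the functions `f`, `forward`, and `invert`, and the use of a tanh-mixing step as a stand-in for a graph diffusion model, are all assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (not the IVGD code): a residual map y = x + f(x)
# is invertible by the fixed-point iteration x <- y - f(x) whenever f
# has Lipschitz constant < 1, since the iteration is then a contraction.

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n))
A *= 0.5 / np.linalg.norm(A, 2)  # scale so the spectral norm is 0.5 < 1

def f(x):
    """A contractive 'diffusion-like' step: linear mixing through tanh.
    Lipschitz constant <= ||A||_2 = 0.5, so the residual map is invertible."""
    return np.tanh(A @ x)

def forward(x):
    """Residual forward pass: y = x + f(x)."""
    return x + f(x)

def invert(y, iters=50):
    """Recover x from y = x + f(x) by fixed-point iteration x <- y - f(x).
    The error contracts by a factor ~0.5 per iteration."""
    x = y.copy()
    for _ in range(iters):
        x = y - f(x)
    return x

x_true = rng.standard_normal(n)   # stand-in for a source indicator vector
y = forward(x_true)               # observed diffused state
x_rec = invert(y)
print(np.allclose(x_rec, x_true, atol=1e-8))  # prints True
```

The same contraction argument is what gives invertible residual networks their theoretical guarantees; the paper adapts this style of construction to graph diffusion models and additionally corrects residual inversion error with a learned compensation mechanism.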
Related papers
- GALA: Graph Diffusion-based Alignment with Jigsaw for Source-free Domain Adaptation [13.317620250521124]
Source-free domain adaptation is a crucial machine learning topic, as it contains numerous applications in the real world.
Recent graph neural network (GNN) approaches can suffer from serious performance decline due to domain shift and label scarcity.
We propose a novel method named Graph Diffusion-based Alignment with Jigsaw (GALA), tailored for source-free graph domain adaptation.
arXiv Detail & Related papers (2024-10-22T01:32:46Z) - Text-to-Image Rectified Flow as Plug-and-Play Priors [52.586838532560755]
Rectified flow is a novel class of generative models that enforces a linear progression from the source to the target distribution.
We show that rectified flow approaches surpass comparable diffusion-based methods in generation quality and efficiency, requiring fewer inference steps.
Our method also displays competitive performance in image inversion and editing.
arXiv Detail & Related papers (2024-06-05T14:02:31Z) - Multiple-Source Localization from a Single-Snapshot Observation Using Graph Bayesian Optimization [10.011338977476804]
Multi-source localization from a single-snapshot observation is especially relevant due to its prevalence.
Current methods typically rely on greedy selection and are usually tied to a single diffusion model.
We propose a simulation-based method termed BOSouL to approximate the results for its sample efficiency.
arXiv Detail & Related papers (2024-03-25T14:46:24Z) - Self-Play Fine-Tuning of Diffusion Models for Text-to-Image Generation [59.184980778643464]
Fine-tuning diffusion models remains an underexplored frontier in generative artificial intelligence (GenAI).
In this paper, we introduce an innovative technique called self-play fine-tuning for diffusion models (SPIN-Diffusion).
Our approach offers an alternative to conventional supervised fine-tuning and RL strategies, significantly improving both model performance and alignment.
arXiv Detail & Related papers (2024-02-15T18:59:18Z) - Leveraging Graph Diffusion Models for Network Refinement Tasks [72.54590628084178]
We propose a novel graph generative framework, SGDM, based on subgraph diffusion.
Our framework not only improves the scalability and fidelity of graph diffusion models, but also leverages the reverse process to perform novel, conditional generation tasks.
arXiv Detail & Related papers (2023-11-29T18:02:29Z) - Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z) - Two-stage Denoising Diffusion Model for Source Localization in Graph Inverse Problems [19.57064597050846]
Source localization is the inverse problem of graph information dissemination.
We propose a two-stage optimization framework, the source localization denoising diffusion model (SL-Diff)
SL-Diff yields excellent prediction results within a reasonable sampling time in extensive experiments.
arXiv Detail & Related papers (2023-04-18T09:11:09Z) - Fast Graph Generative Model via Spectral Diffusion [38.31052833073743]
We argue that running full-rank diffusion SDEs on the whole space hinders diffusion models from learning graph topology generation.
We propose an efficient yet effective Graph Spectral Diffusion Model (GSDM), which is driven by low-rank diffusion SDEs on the graph spectrum space.
arXiv Detail & Related papers (2022-11-16T12:56:32Z) - Source Localization of Graph Diffusion via Variational Autoencoders for Graph Inverse Problems [8.984898754363265]
Source localization, as the inverse problem of graph diffusion, is extremely challenging.
This paper focuses on a probabilistic manner to account for the uncertainty of different candidate sources.
Experiments are conducted on 7 real-world datasets to demonstrate the superiority of SL-VAE in reconstructing the diffusion sources.
arXiv Detail & Related papers (2022-06-24T14:56:45Z) - Handling Distribution Shifts on Graphs: An Invariance Perspective [78.31180235269035]
We formulate the OOD problem on graphs and develop a new invariant learning approach, Explore-to-Extrapolate Risk Minimization (EERM)
EERM resorts to multiple context explorers that are adversarially trained to maximize the variance of risks from multiple virtual environments.
We prove the validity of our method by theoretically showing its guarantee of a valid OOD solution.
arXiv Detail & Related papers (2022-02-05T02:31:01Z) - Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712]
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most of existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
arXiv Detail & Related papers (2021-10-20T14:23:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.