Latent Conditional Diffusion-based Data Augmentation for Continuous-Time Dynamic Graph Model
- URL: http://arxiv.org/abs/2407.08500v2
- Date: Sat, 20 Jul 2024 06:44:09 GMT
- Title: Latent Conditional Diffusion-based Data Augmentation for Continuous-Time Dynamic Graph Model
- Authors: Yuxing Tian, Yiyan Qi, Aiwen Jiang, Qi Huang, Jian Guo
- Abstract summary: Continuous-Time Dynamic Graph (CTDG) precisely models evolving real-world relationships.
Existing CTDG models encounter challenges stemming from noise and limited historical data.
We propose Conda, a novel latent diffusion-based Graph Data Augmentation method tailored for CTDGs.
- Score: 6.940445452557734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuous-Time Dynamic Graph (CTDG) precisely models evolving real-world relationships, drawing heightened interest in dynamic graph learning across academia and industry. However, existing CTDG models encounter challenges stemming from noise and limited historical data. Graph Data Augmentation (GDA) emerges as a critical solution, yet current approaches primarily focus on static graphs and struggle to effectively address the dynamics inherent in CTDGs. Moreover, these methods often demand substantial domain expertise for parameter tuning and lack theoretical guarantees for augmentation efficacy. To address these issues, we propose Conda, a novel latent diffusion-based GDA method tailored for CTDGs. Conda features a sandwich-like architecture, incorporating a Variational Auto-Encoder (VAE) and a conditional diffusion model, aimed at generating enhanced historical neighbor embeddings for target nodes. Unlike conventional diffusion models trained on entire graphs via pre-training, Conda requires historical neighbor sequence embeddings of target nodes for training, thus facilitating more targeted augmentation. We integrate Conda into the CTDG model and adopt an alternating training strategy to optimize performance. Extensive experimentation across six widely used real-world datasets showcases the consistent performance improvement of our approach, particularly in scenarios with limited historical data.
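To make the sandwich-like architecture concrete, here is a hedged PyTorch sketch built only from the abstract. Everything below (the class and method names such as `CondaAugmenter.augment`, the layer sizes, the MLP denoiser, the linear noise schedule, and the DDIM-style sampler) is an illustrative assumption, not the authors' released code; `cond` stands in for a projection of the target-node embedding.

```python
import torch
import torch.nn as nn


class CondaAugmenter(nn.Module):
    """Minimal sketch of Conda's sandwich architecture: VAE encoder ->
    conditional diffusion in the VAE latent space -> VAE decoder.
    All sizes and schedules are illustrative assumptions."""

    def __init__(self, emb_dim=172, latent_dim=64, num_steps=50):
        super().__init__()
        self.encoder = nn.Linear(emb_dim, 2 * latent_dim)  # -> (mu, logvar)
        self.decoder = nn.Linear(latent_dim, emb_dim)
        # Hypothetical conditional denoiser: predicts the injected noise
        # from (noisy latent, target-node condition, diffusion step).
        self.denoiser = nn.Sequential(
            nn.Linear(2 * latent_dim + 1, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        self.num_steps = num_steps
        betas = torch.linspace(1e-4, 0.02, num_steps)
        self.register_buffer("alpha_bar", torch.cumprod(1.0 - betas, dim=0))

    def encode(self, neigh_emb):
        """Encode historical neighbor embeddings into the latent space."""
        mu, logvar = self.encoder(neigh_emb).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z, mu, logvar

    def diffusion_loss(self, z, cond):
        """Denoising loss on latents of historical neighbor sequences,
        conditioned on the target node (cond: [batch, latent_dim])."""
        t = torch.randint(0, self.num_steps, (z.size(0),), device=z.device)
        a = self.alpha_bar[t].unsqueeze(-1)
        noise = torch.randn_like(z)
        z_t = a.sqrt() * z + (1 - a).sqrt() * noise
        t_feat = t.float().unsqueeze(-1) / self.num_steps
        pred = self.denoiser(torch.cat([z_t, cond, t_feat], dim=-1))
        return ((pred - noise) ** 2).mean()

    @torch.no_grad()
    def augment(self, cond):
        """Generate augmented neighbor embeddings: reverse diffusion
        from noise, then decode back to the embedding space."""
        n = cond.size(0)
        z = torch.randn(n, self.decoder.in_features, device=cond.device)
        for t in reversed(range(self.num_steps)):
            t_feat = torch.full((n, 1), t / self.num_steps, device=z.device)
            eps = self.denoiser(torch.cat([z, cond, t_feat], dim=-1))
            a = self.alpha_bar[t]
            a_prev = self.alpha_bar[t - 1] if t > 0 else z.new_tensor(1.0)
            z0 = (z - (1 - a).sqrt() * eps) / a.sqrt()          # predicted clean latent
            z = a_prev.sqrt() * z0 + (1 - a_prev).sqrt() * eps  # deterministic DDIM step
        return self.decoder(z)
```

Under the alternating training strategy mentioned in the abstract, one would plausibly interleave updates of the CTDG backbone on real and Conda-augmented neighbor embeddings with updates of the VAE reconstruction/KL loss and the `diffusion_loss` above.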
Related papers
- DeFoG: Discrete Flow Matching for Graph Generation [45.037260759871124]
We propose DeFoG, a novel framework using discrete flow matching for graph generation.
DeFoG employs a flow-based approach that features an efficient linear noising process and a flexible denoising process.
We show that DeFoG achieves state-of-the-art results on synthetic and molecular datasets.
arXiv Detail & Related papers (2024-10-05T18:52:54Z) - DyG-Mamba: Continuous State Space Modeling on Dynamic Graphs [59.434893231950205]
Dynamic graph learning aims to uncover evolutionary laws in real-world systems.
We propose DyG-Mamba, a new continuous state space model for dynamic graph learning.
We show that DyG-Mamba achieves state-of-the-art performance on most datasets.
arXiv Detail & Related papers (2024-08-13T15:21:46Z) - Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of dynamic graph neural networks (DGNNs), it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z) - CPDG: A Contrastive Pre-Training Method for Dynamic Graph Neural Networks [21.79251709065902]
We propose a Contrastive Pre-Training Method for Dynamic Graph Neural Networks (CPDG).
CPDG tackles the challenges of pre-training for DGNNs, including generalization capability and long-short term modeling capability.
Extensive experiments are conducted on both large-scale research and industrial dynamic graph datasets.
arXiv Detail & Related papers (2023-07-06T07:18:22Z) - Phased Data Augmentation for Training a Likelihood-Based Generative Model with Limited Data [0.0]
Generative models excel in creating realistic images, yet their dependency on extensive datasets for training presents significant challenges.
Current data-efficient methods largely focus on GAN architectures, leaving a gap in training other types of generative models.
"phased data augmentation" is a novel technique that addresses this gap by optimizing training in limited data scenarios without altering the inherent data distribution.
arXiv Detail & Related papers (2023-05-22T03:38:59Z) - Unbiased Scene Graph Generation in Videos [36.889659781604564]
We introduce TEMPURA: TEmporal consistency and Memory-guided UnceRtainty Attenuation for unbiased dynamic SGG.
TEMPURA enforces object-level temporal consistency via transformer-based sequence modeling and learns to synthesize unbiased relationship representations.
Our method achieves significant (up to 10% in some cases) performance gain over existing methods.
arXiv Detail & Related papers (2023-04-03T06:10:06Z) - Learning to Augment via Implicit Differentiation for Domain Generalization [107.9666735637355]
Domain generalization (DG) aims to overcome the domain-shift problem by leveraging multiple source domains to learn a domain-generalizable model.
In this paper, we propose a novel augmentation-based DG approach, dubbed AugLearn.
AugLearn shows effectiveness on three standard DG benchmarks: PACS, Office-Home, and Digits-DG.
arXiv Detail & Related papers (2022-10-25T18:51:51Z) - Temporal Domain Generalization with Drift-Aware Dynamic Neural Network [12.483886657900525]
We propose DRAIN, a Temporal Domain Generalization framework with a Drift-Aware Dynamic Neural Network.
Specifically, we formulate the problem into a Bayesian framework that jointly models the relation between data and model dynamics.
It captures the temporal drift of model parameters and data distributions and can predict models in the future without the presence of future data.
arXiv Detail & Related papers (2022-05-21T20:01:31Z) - A Generic Approach for Enhancing GANs by Regularized Latent Optimization [79.00740660219256]
We introduce a generic framework called generative-model inference that is capable of enhancing pre-trained GANs effectively and seamlessly.
Our basic idea is to efficiently infer the optimal latent distribution for the given requirements using Wasserstein gradient flow techniques.
arXiv Detail & Related papers (2021-12-07T05:22:50Z) - Causal Incremental Graph Convolution for Recommender System Retraining [89.25922726558875]
Real-world recommender systems need to be regularly retrained to keep up with new data.
In this work, we consider how to efficiently retrain graph convolution network (GCN) based recommender models.
arXiv Detail & Related papers (2021-08-16T04:20:09Z) - Regularizing Generative Adversarial Networks under Limited Data [88.57330330305535]
This work proposes a regularization approach for training robust GAN models on limited data.
We show a connection between the regularized loss and an f-divergence called LeCam-divergence, which we find is more robust under limited training data (a short sketch of the regularizer follows this list).
arXiv Detail & Related papers (2021-04-07T17:59:06Z)
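The last entry describes a simple discriminator regularizer whose loss connects to the LeCam divergence. Below is a minimal sketch of that regularizer as described in Tseng et al. (2021): the discriminator's predictions on real and generated images are pulled toward exponential moving averages of each other. The decay rate and the weighting of the term into the discriminator loss are illustrative assumptions.

```python
import torch


class LeCamRegularizer:
    """Sketch of the LeCam regularizer for GAN discriminators under
    limited data: R = E[(D(x) - ema_fake)^2] + E[(D(G(z)) - ema_real)^2].
    The decay rate is an illustrative choice."""

    def __init__(self, decay=0.99):
        self.decay = decay
        self.ema_real = 0.0  # moving average of D's outputs on real images
        self.ema_fake = 0.0  # moving average of D's outputs on generated images

    def __call__(self, d_real, d_fake):
        self.ema_real = self.decay * self.ema_real + (1 - self.decay) * d_real.mean().item()
        self.ema_fake = self.decay * self.ema_fake + (1 - self.decay) * d_fake.mean().item()
        # Pull real predictions toward the fake anchor and vice versa,
        # bounding how far the discriminator can drift on scarce data.
        return ((d_real - self.ema_fake) ** 2).mean() + ((d_fake - self.ema_real) ** 2).mean()
```

In a training loop, one would add this term to the discriminator loss with a small weight (e.g. `loss_d + 0.01 * reg(d_real, d_fake)`; the weight value here is likewise an assumption).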