EDGE++: Improved Training and Sampling of EDGE
- URL: http://arxiv.org/abs/2310.14441v2
- Date: Sat, 28 Oct 2023 16:53:48 GMT
- Title: EDGE++: Improved Training and Sampling of EDGE
- Authors: Mingyang Wu, Xiaohui Chen, Li-Ping Liu
- Abstract summary: We propose enhancements to the EDGE model to address these issues.
Specifically, we introduce a degree-specific noise schedule that optimizes the number of active nodes at each timestep.
We also present an improved sampling scheme that fine-tunes the generative process, allowing for better control over the similarity between the synthesized and the true network.
- Score: 17.646159460584926
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently developed deep neural models like NetGAN, CELL, and Variational
Graph Autoencoders have made progress but face limitations in replicating key
graph statistics when generating large graphs. Diffusion-based methods have
emerged as promising alternatives; however, most of them face challenges in
computational efficiency and generative performance. EDGE is effective at
modeling large networks, but its current denoising approach can be inefficient,
often leading to wasted computational resources and potential mismatches in its
generation process. In this paper, we propose enhancements to the EDGE model to
address these issues. Specifically, we introduce a degree-specific noise
schedule that optimizes the number of active nodes at each timestep,
significantly reducing memory consumption. Additionally, we present an improved
sampling scheme that fine-tunes the generative process, allowing for better
control over the similarity between the synthesized and the true network. Our
experimental results demonstrate that the proposed modifications not only
improve the efficiency but also enhance the accuracy of the generated graphs,
offering a robust and scalable solution for graph generation tasks.
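To make the degree-specific noise schedule concrete, below is a minimal sketch assuming a simplified EDGE-style forward process in which each surviving edge is independently removed with probability q_t at step t, and a node counts as active when at least one incident edge flips. The function names, the bisection tuning, and the fixed active-node budget (`active_budget`) are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def expected_active_nodes(degrees, q):
    """Expected number of nodes with at least one incident edge removed
    when every edge is removed independently with probability q."""
    d = np.asarray(degrees, dtype=float)
    return float(np.sum(1.0 - (1.0 - q) ** d))

def degree_specific_schedule(degrees, num_steps, active_budget):
    """Tune a per-step edge-removal probability q_t by bisection so the
    expected active-node count stays near `active_budget`.

    Illustrative only: assumes a simplified edge-removal forward
    process and tracks degrees in expectation.
    """
    degs = np.asarray(degrees, dtype=float)
    schedule = []
    for _ in range(num_steps):
        lo, hi = 0.0, 1.0
        for _ in range(50):  # bisection on q
            mid = 0.5 * (lo + hi)
            if expected_active_nodes(degs, mid) < active_budget:
                lo = mid
            else:
                hi = mid
        q_t = 0.5 * (lo + hi)
        schedule.append(q_t)
        degs *= 1.0 - q_t  # expected surviving degree after this step
    return schedule

# Example: power-law-like degrees, 64 steps, ~100 active nodes per step.
rng = np.random.default_rng(0)
degrees = rng.zipf(2.0, size=5000).clip(max=500)
qs = degree_specific_schedule(degrees, num_steps=64, active_budget=100)
```

Keeping the expected active-node count near a fixed budget per step is the intuition behind the reduced memory consumption described above, since only active nodes need to be handled by the denoising network at each timestep.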
Related papers
- Haste Makes Waste: A Simple Approach for Scaling Graph Neural Networks [37.41604955004456]
Graph neural networks (GNNs) have demonstrated remarkable success in graph representation learning.
Various sampling approaches have been proposed to scale GNNs to applications with large-scale graphs.
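As a rough illustration of the sampling approaches this summary refers to, here is a generic uniform neighbor-sampling sketch (GraphSAGE-style) over an adjacency list; it is a representative technique from this family, not the specific method of the paper.

```python
import random

def sample_neighbors(adj, seed_nodes, fanouts, seed=0):
    """Uniform neighbor sampling: starting from `seed_nodes`, sample up
    to `fanouts[l]` neighbors per node at hop l and return each frontier.

    `adj` maps node id -> list of neighbor ids. Generic sketch only.
    """
    rng = random.Random(seed)
    layers, frontier = [], list(seed_nodes)
    for fanout in fanouts:
        next_frontier = []
        for v in frontier:
            nbrs = adj.get(v, [])
            next_frontier.extend(rng.sample(nbrs, min(fanout, len(nbrs))))
        layers.append(next_frontier)
        frontier = next_frontier
    return layers

adj = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 3], 3: [0, 2]}
print(sample_neighbors(adj, seed_nodes=[0], fanouts=[2, 2]))
```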
arXiv Detail & Related papers (2024-10-07T18:29:02Z)
- Gradient Transformation: Towards Efficient and Model-Agnostic Unlearning for Dynamic Graph Neural Networks [66.70786325911124]
Graph unlearning has emerged as an essential tool for safeguarding user privacy and mitigating the negative impacts of undesirable data.
With the increasing prevalence of DGNNs, it becomes imperative to investigate the implementation of dynamic graph unlearning.
We propose an effective, efficient, model-agnostic, and post-processing method to implement DGNN unlearning.
arXiv Detail & Related papers (2024-05-23T10:26:18Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
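As a hedged sketch of the general pre-computation idea, propagate features once per relation and then compress the result with a Gaussian random projection into regular-shaped tensors. Shapes, names, and the mean aggregation are illustrative assumptions, not RpHGNN's actual operator.

```python
import numpy as np

def precompute_random_projection(adjs, features, out_dim, seed=0):
    """One-time message passing plus random projection.

    adjs:     list of (n, n) relation-specific adjacency matrices
    features: (n, d) node feature matrix
    Returns one compressed (n, out_dim) array per relation.
    Generic sketch, not the exact RpHGNN operator.
    """
    rng = np.random.default_rng(seed)
    outputs = []
    for adj in adjs:
        deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
        propagated = (adj / deg) @ features      # one-hop mean aggregation
        proj = rng.normal(size=(features.shape[1], out_dim)) / np.sqrt(out_dim)
        outputs.append(propagated @ proj)        # JL-style compression
    return outputs

rng = np.random.default_rng(1)
feats = rng.normal(size=(100, 64))
rel = (rng.random((100, 100)) < 0.05).astype(float)
compressed = precompute_random_projection([rel], feats, out_dim=16)
```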
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Efficient and Degree-Guided Graph Generation via Discrete Diffusion Modeling [20.618785908770356]
Diffusion-based generative graph models have been proven effective in generating high-quality small graphs.
However, they struggle to scale to large graphs containing thousands of nodes while matching the desired graph statistics.
We propose EDGE, a new diffusion-based generative graph model that addresses generative tasks with large graphs.
arXiv Detail & Related papers (2023-05-06T18:32:27Z)
- Deep Graph Reprogramming [112.34663053130073]
"Deep graph reprogramming" is a model reusing task tailored for graph neural networks (GNNs)
We propose an innovative Data Reprogramming paradigm alongside a Model Reprogramming paradigm.
arXiv Detail & Related papers (2023-04-28T02:04:29Z)
- Learning Cooperative Beamforming with Edge-Update Empowered Graph Neural Networks [29.23937571816269]
We propose an edge-graph-neural-network (Edge-GNN) to learn cooperative beamforming on the graph edges.
The proposed Edge-GNN achieves a higher sum rate with much shorter computation time than state-of-the-art approaches.
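As a generic illustration of edge-centric message passing, the sketch below refreshes each edge embedding from its two endpoint node embeddings and its previous state; it is a plain edge-update layer, not the paper's exact Edge-GNN update.

```python
import numpy as np

def edge_update_layer(edge_index, node_h, edge_h, w_n, w_e):
    """One edge-update step: each edge embedding is recomputed from its
    endpoints' embeddings and its previous value.

    edge_index: (2, E) int array of (src, dst) pairs
    node_h: (N, d) node embeddings; edge_h: (E, d) edge embeddings
    w_n, w_e: (d, d) weight matrices. Generic sketch only.
    """
    src, dst = edge_index
    msg = (node_h[src] + node_h[dst]) @ w_n + edge_h @ w_e
    return np.tanh(msg)  # updated edge embeddings

rng = np.random.default_rng(0)
N, E, d = 6, 10, 8
edge_index = rng.integers(0, N, size=(2, E))
node_h, edge_h = rng.normal(size=(N, d)), rng.normal(size=(E, d))
w_n, w_e = rng.normal(size=(d, d)), rng.normal(size=(d, d))
new_edge_h = edge_update_layer(edge_index, node_h, edge_h, w_n, w_e)
```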
arXiv Detail & Related papers (2022-11-23T02:05:06Z)
- Spiking Variational Graph Auto-Encoders for Efficient Graph Representation Learning [10.65760757021534]
We propose an SNN-based deep generative method, namely the Spiking Variational Graph Auto-Encoders (S-VGAE) for efficient graph representation learning.
We conduct link prediction experiments on multiple benchmark graph datasets, and the results demonstrate that our model consumes significantly less energy while achieving performance superior or comparable to other ANN- and SNN-based methods for graph representation learning.
arXiv Detail & Related papers (2022-10-24T12:54:41Z)
- Efficient Dynamic Graph Representation Learning at Scale [66.62859857734104]
We propose Efficient Dynamic Graph lEarning (EDGE), which selectively expresses certain temporal dependencies via the training loss to improve the parallelism in computations.
We show that EDGE can scale to dynamic graphs with millions of nodes and hundreds of millions of temporal events and achieve new state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2021-12-14T22:24:53Z)
- Graph Neural Network based scheduling: Improved throughput under a generalized interference model [3.911413922612859]
We propose a Graph Convolutional Network (GCN) based scheduling algorithm for ad hoc networks.
A notable feature of this work is that the proposed method does not require a labelled data set (NP-hard to compute) for training the neural network.
arXiv Detail & Related papers (2021-10-31T10:36:11Z)
- Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data, which universally works in node classification, link prediction, and graph classification tasks.
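FLAG's core loop is published: accumulate the loss over several adversarially perturbed copies of the node features, taking a gradient-ascent step on the perturbation between copies. The PyTorch sketch below condenses that recipe; `model`, `loss_fn`, and the sign-gradient ascent step are placeholders and simplifications rather than the exact released implementation.

```python
import torch

def flag_step(model, x, y, loss_fn, optimizer, step_size=1e-3, m=3):
    """One FLAG-style training step over m perturbed feature copies."""
    optimizer.zero_grad()
    perturb = torch.empty_like(x).uniform_(-step_size, step_size)
    perturb.requires_grad_(True)
    loss = loss_fn(model(x + perturb), y) / m
    for _ in range(m - 1):
        loss.backward()  # parameter grads accumulate across copies
        # ascend the loss w.r.t. the perturbation
        perturb.data += step_size * torch.sign(perturb.grad)
        perturb.grad.zero_()
        loss = loss_fn(model(x + perturb), y) / m
    loss.backward()
    optimizer.step()
    return loss.item()
```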
arXiv Detail & Related papers (2020-10-19T21:51:47Z)
- Graph Ordering: Towards the Optimal by Learning [69.72656588714155]
Graph representation learning has achieved remarkable success in many graph-based applications, such as node classification, prediction, and community detection.
However, for some kinds of graph applications, such as graph compression and edge partition, it is very hard to reduce them to graph representation learning tasks.
In this paper, we propose to attack the graph ordering problem behind such applications with a novel learning approach.
arXiv Detail & Related papers (2020-01-18T09:14:16Z)