Exploring Molecule Generation Using Latent Space Graph Diffusion
- URL: http://arxiv.org/abs/2501.03696v1
- Date: Tue, 07 Jan 2025 10:54:44 GMT
- Title: Exploring Molecule Generation Using Latent Space Graph Diffusion
- Authors: Prashanth Pombala, Gerrit Grossmann, Verena Wolf
- Abstract summary: Generating molecular graphs is a challenging task due to their discrete nature and the competitive objectives involved.
For molecular graphs, graph neural networks (GNNs) as a diffusion backbone have achieved impressive results.
Latent space diffusion, where diffusion occurs in a low-dimensional space via an autoencoder, has demonstrated computational efficiency.
- Abstract: Generating molecular graphs is a challenging task due to their discrete nature and the competitive objectives involved. Diffusion models have emerged as SOTA approaches in data generation across various modalities. For molecular graphs, graph neural networks (GNNs) as a diffusion backbone have achieved impressive results. Latent space diffusion, where diffusion occurs in a low-dimensional space via an autoencoder, has demonstrated computational efficiency. However, the literature on latent space diffusion for molecular graphs is scarce, and no commonly accepted best practices exist. In this work, we explore different approaches and hyperparameters, contrasting generative flow models (denoising diffusion, flow matching, heat dissipation) and architectures (GNNs and E(3)-equivariant GNNs). Our experiments reveal a high sensitivity to the choice of approach and design decisions. Code is made available at github.com/Prashanth-Pombala/Molecule-Generation-using-Latent-Space-Graph-Diffusion.
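The abstract's setup lends itself to a compact illustration: encode each molecular graph into a low-dimensional latent with a pre-trained autoencoder, then train a denoiser on those latents under one of the generative-flow objectives being compared. The sketch below is a simplified rendering of that idea, not the authors' implementation; the names and hyperparameters (LatentDenoiser, latent_dim, the MLP backbone) are assumptions, and it covers only the denoising-diffusion and flow-matching objectives, leaving out heat dissipation and the GNN / E(3)-equivariant encoders the paper actually compares.
```python
# Minimal sketch of latent-space diffusion, assuming latents produced by a
# pre-trained graph autoencoder. All names here (LatentDenoiser, latent_dim,
# diffusion_loss, flow_matching_loss) are illustrative, not the paper's code.
import torch
import torch.nn as nn

latent_dim = 32   # assumed autoencoder bottleneck size
T = 1000          # number of diffusion steps

# DDPM-style linear noise schedule; alpha_bar[t] = prod_{s<=t} (1 - beta_s).
betas = torch.linspace(1e-4, 0.02, T)
alpha_bars = torch.cumprod(1.0 - betas, dim=0)

class LatentDenoiser(nn.Module):
    """Small MLP that predicts the noise (or velocity) in a noisy latent."""
    def __init__(self, dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, dim),
        )

    def forward(self, z_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Condition on the normalized timestep by simple concatenation.
        t_feat = (t.float() / T).unsqueeze(-1)
        return self.net(torch.cat([z_t, t_feat], dim=-1))

def diffusion_loss(model: LatentDenoiser, z0: torch.Tensor) -> torch.Tensor:
    """Epsilon-prediction (denoising diffusion) objective on latents z0."""
    t = torch.randint(0, T, (z0.shape[0],))
    eps = torch.randn_like(z0)
    a_bar = alpha_bars[t].unsqueeze(-1)
    z_t = a_bar.sqrt() * z0 + (1.0 - a_bar).sqrt() * eps  # forward (noising) step
    return ((model(z_t, t) - eps) ** 2).mean()

def flow_matching_loss(model: LatentDenoiser, z0: torch.Tensor) -> torch.Tensor:
    """Flow-matching objective: regress the velocity of a linear noise->data path."""
    t = torch.rand(z0.shape[0])
    eps = torch.randn_like(z0)
    z_t = (1.0 - t).unsqueeze(-1) * eps + t.unsqueeze(-1) * z0
    velocity = z0 - eps  # d z_t / d t for the linear interpolation above
    return ((model(z_t, (t * T).long()) - velocity) ** 2).mean()

# Usage with random stand-in latents; in practice z0 = graph_encoder(molecule).
model = LatentDenoiser(latent_dim)
z0 = torch.randn(8, latent_dim)
print(diffusion_loss(model, z0).item(), flow_matching_loss(model, z0).item())
```
In the full pipeline, z0 would come from a GNN (or E(3)-equivariant GNN) encoder and, at sampling time, the learned reverse process would be run in latent space before decoding back to a molecular graph.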
Related papers
- Bridging the Gap between Learning and Inference for Diffusion-Based Molecule Generation [18.936142688346816]
GapDiff is a training framework that mitigates the data distributional disparity between training and inference.
We conduct experiments using a 3D molecular generation model on the CrossDocked 2020 dataset.
arXiv Detail & Related papers (2024-11-08T10:53:39Z)
- Derivative-Free Guidance in Continuous and Discrete Diffusion Models with Soft Value-Based Decoding [84.3224556294803]
Diffusion models excel at capturing the natural design spaces of images, molecules, DNA, RNA, and protein sequences.
We aim to optimize downstream reward functions while preserving the naturalness of these design spaces.
Our algorithm integrates soft value functions, which look ahead to how intermediate noisy states lead to high rewards in the future.
arXiv Detail & Related papers (2024-08-15T16:47:59Z)
- Hyperbolic Geometric Latent Diffusion Model for Graph Generation [27.567428462212455]
Diffusion models have made significant contributions to computer vision, sparking growing interest in applying them to graph generation.
In this paper, we propose a novel geometrically latent diffusion framework HypDiff.
Specifically, we first establish a geometrically latent space with interpretability measures based on hyperbolic geometry, to define anisotropic latent diffusion processes for graphs.
Then, we propose a geometrically latent diffusion process that is constrained by both radial and angular geometric properties, thereby ensuring the preservation of the original topological properties in the generative graphs.
arXiv Detail & Related papers (2024-05-06T06:28:44Z)
- Diffusion-based Graph Generative Methods [51.04666253001781]
We systematically and comprehensively review diffusion-based graph generative methods.
We first review three mainstream paradigms of diffusion methods: denoising diffusion models, score-based generative models, and differential equations.
Finally, we point out some limitations of current studies and directions for future exploration.
arXiv Detail & Related papers (2024-01-28T10:09:05Z)
- Advective Diffusion Transformers for Topological Generalization in Graph Learning [69.2894350228753]
We show how graph diffusion equations extrapolate and generalize in the presence of varying graph topologies.
We propose a novel graph encoder backbone, Advective Diffusion Transformer (ADiT), inspired by advective graph diffusion equations.
arXiv Detail & Related papers (2023-10-10T08:40:47Z)
- Gramian Angular Fields for leveraging pretrained computer vision models with anomalous diffusion trajectories [0.9012198585960443]
We present a new data-driven method for working with diffusive trajectories.
This method utilizes Gramian Angular Fields (GAF) to encode one-dimensional trajectories as images.
We leverage two well-established pre-trained computer-vision models, ResNet and MobileNet, to characterize the underlying diffusive regime.
arXiv Detail & Related papers (2023-09-02T17:22:45Z)
- Generative Diffusion Models on Graphs: Methods and Applications [50.44334458963234]
Diffusion models, as a novel generative paradigm, have achieved remarkable success in various image generation tasks.
Graph generation is a crucial computational task on graphs with numerous real-world applications.
arXiv Detail & Related papers (2023-02-06T06:58:17Z)
- Conditional Diffusion Based on Discrete Graph Structures for Molecular Graph Generation [32.66694406638287]
We propose a Conditional Diffusion model based on discrete Graph Structures (CDGS) for molecular graph generation.
Specifically, we construct a forward graph diffusion process on both graph structures and inherent features through stochastic differential equations (SDEs).
We present a specialized hybrid graph noise prediction model that extracts the global context and the local node-edge dependency from intermediate graph states.
arXiv Detail & Related papers (2023-01-01T15:24:15Z)
- Fast Graph Generative Model via Spectral Diffusion [38.31052833073743]
We argue that running full-rank diffusion SDEs on the whole space hinders diffusion models from learning graph topology generation.
We propose an efficient yet effective Graph Spectral Diffusion Model (GSDM), which is driven by low-rank diffusion SDEs on the graph spectrum space.
arXiv Detail & Related papers (2022-11-16T12:56:32Z)
- Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance [95.12230117950232]
We show that a common latent space emerges from two diffusion models trained independently on related domains.
Applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors.
arXiv Detail & Related papers (2022-10-11T15:53:52Z)
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)