Latent Space Diffusion Models of Cryo-EM Structures
- URL: http://arxiv.org/abs/2211.14169v1
- Date: Fri, 25 Nov 2022 15:17:10 GMT
- Title: Latent Space Diffusion Models of Cryo-EM Structures
- Authors: Karsten Kreis, Tim Dockhorn, Zihao Li, Ellen Zhong
- Abstract summary: We train a diffusion model as an expressive, learnable prior in the cryoDRGN framework.
By learning an accurate model of the data distribution, our method unlocks tools in generative modeling, sampling, and distribution analysis.
- Score: 6.968705314671148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cryo-electron microscopy (cryo-EM) is unique among tools in structural
biology in its ability to image large, dynamic protein complexes. Key to this
ability is image processing algorithms for heterogeneous cryo-EM
reconstruction, including recent deep learning-based approaches. The
state-of-the-art method cryoDRGN uses a Variational Autoencoder (VAE) framework
to learn a continuous distribution of protein structures from single particle
cryo-EM imaging data. While cryoDRGN can model complex structural motions, the
Gaussian prior distribution of the VAE fails to match the aggregate approximate
posterior, which prevents generative sampling of structures especially for
multi-modal distributions (e.g. compositional heterogeneity). Here, we train a
diffusion model as an expressive, learnable prior in the cryoDRGN framework.
Our approach learns a high-quality generative model over molecular
conformations directly from cryo-EM imaging data. We show the ability to sample
from the model on two synthetic and two real datasets, where samples accurately
follow the data distribution unlike samples from the VAE prior distribution. We
also demonstrate how the diffusion model prior can be leveraged for fast latent
space traversal and interpolation between states of interest. By learning an
accurate model of the data distribution, our method unlocks tools in generative
modeling, sampling, and distribution analysis for heterogeneous cryo-EM
ensembles.
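The core idea in the abstract can be sketched in a toy form (illustrative only, not the authors' cryoDRGN code): replace the fixed N(0, I) prior over VAE latents with a diffusion process whose reverse dynamics follow a score model fit to the aggregate posterior. In the sketch below, the "learned" score is replaced by the exact score of a bimodal Gaussian mixture standing in for a multi-modal latent distribution (e.g. compositional heterogeneity), so ancestral sampling recovers both modes, which direct draws from N(0, I) would not.

```python
import numpy as np

# Toy stand-in for the aggregate posterior over 1-D latents: a bimodal
# Gaussian mixture mimicking two discrete conformational states.
# Under VP diffusion z_t = sqrt(abar_t) z_0 + sqrt(1 - abar_t) eps, the
# noised marginal of a GMM stays a GMM, so its score is available in
# closed form and plays the role of the trained denoiser network.
MEANS = np.array([-2.0, 2.0])
WEIGHTS = np.array([0.5, 0.5])
SIGMA0 = 0.1  # within-mode spread of clean latents

def score(z, abar):
    """Exact score of the noised GMM marginal at signal level abar."""
    mu = np.sqrt(abar) * MEANS             # noised component means
    var = abar * SIGMA0**2 + (1.0 - abar)  # noised component variance
    logp = -(z[:, None] - mu) ** 2 / (2 * var) + np.log(WEIGHTS)
    w = np.exp(logp - logp.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)      # per-sample component posteriors
    return (w * (mu - z[:, None])).sum(axis=1) / var

def sample(n=2000, steps=200, rng=np.random.default_rng(0)):
    betas = np.linspace(1e-4, 0.05, steps)
    abars = np.cumprod(1.0 - betas)
    z = rng.standard_normal(n)             # start from the tractable base
    for t in range(steps - 1, -1, -1):
        b = betas[t]
        # ancestral DDPM update: mean = (z_t + beta_t * score) / sqrt(1 - beta_t)
        z = (z + b * score(z, abars[t])) / np.sqrt(1.0 - b)
        if t > 0:
            z += np.sqrt(b) * rng.standard_normal(n)
    return z

z = sample()
# samples concentrate near the two modes at +/- 2, unlike N(0, 1) draws
```

In the paper's setting the closed-form score would be a network trained on encoder latents of the particle images; the sampling loop is unchanged.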
Related papers
- Diffusion Models Learn Low-Dimensional Distributions via Subspace Clustering [15.326641037243006]
diffusion models can effectively learn the image distribution and generate new samples.
We provide theoretical insights into this phenomenon by leveraging key empirical observations.
We show that the minimal number of samples required to learn the underlying distribution scales linearly with the intrinsic dimensions.
arXiv Detail & Related papers (2024-09-04T04:14:02Z)
- Deep Generative Sampling in the Dual Divergence Space: A Data-efficient & Interpretative Approach for Generative AI [29.13807697733638]

We build on the remarkable achievements in generative sampling of natural images.
We propose an innovative, potentially overly ambitious challenge: generating samples that resemble real images.
The statistical challenge lies in the small sample size, sometimes consisting of a few hundred subjects.
arXiv Detail & Related papers (2024-04-10T22:35:06Z)
- Synthetic location trajectory generation using categorical diffusion models [50.809683239937584]
Diffusion probabilistic models (DPMs) have rapidly evolved to be one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z)
- TC-DiffRecon: Texture coordination MRI reconstruction method based on diffusion model and modified MF-UNet method [2.626378252978696]
We propose a novel diffusion model-based MRI reconstruction method, named TC-DiffRecon, which does not rely on a specific acceleration factor for training.
We also suggest the incorporation of the MF-UNet module, designed to enhance the quality of MRI images generated by the model.
arXiv Detail & Related papers (2024-02-17T13:09:00Z)
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
arXiv Detail & Related papers (2024-02-16T16:47:21Z)
- Convergence Analysis of Discrete Diffusion Model: Exact Implementation through Uniformization [17.535229185525353]
We introduce an algorithm leveraging the uniformization of continuous Markov chains, implementing transitions on random time points.
Our results align with state-of-the-art achievements for diffusion models in $\mathbb{R}^d$ and further underscore the advantages of discrete diffusion models in comparison to the $\mathbb{R}^d$ setting.
arXiv Detail & Related papers (2024-02-12T22:26:52Z)
- Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z)
- Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance [95.12230117950232]
We show that a common latent space emerges from two diffusion models trained independently on related domains.
Applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors.
arXiv Detail & Related papers (2022-10-11T15:53:52Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present deep convolutional Gaussian mixture models (DCGMMs), a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
DCGMMs can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent PC and SPN models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- GeoDiff: a Geometric Diffusion Model for Molecular Conformation Generation [102.85440102147267]
We propose a novel generative model named GeoDiff for molecular conformation prediction.
We show that GeoDiff is superior or comparable to existing state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-06T09:47:01Z)
- Structured Denoising Diffusion Models in Discrete State-Spaces [15.488176444698404]
We introduce Discrete Denoising Diffusion Probabilistic Models (D3PMs) for discrete data.
The choice of transition matrix is an important design decision that leads to improved results in image and text domains.
For text, this model class achieves strong results on character-level text generation while scaling to large vocabularies on LM1B.
arXiv Detail & Related papers (2021-07-07T04:11:00Z)
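The transition-matrix design choice that the D3PM summary mentions can be illustrated with a minimal sketch (a toy, not the paper's code): a uniform-transition D3PM keeps a token with probability 1 - beta and resamples it uniformly otherwise, and the t-step corruption marginal is just a matrix power.

```python
import numpy as np

K = 5        # toy vocabulary size
beta = 0.1   # per-step corruption probability

# Uniform D3PM transition matrix: stay put w.p. 1 - beta, else jump to a
# uniformly random state. Each row is a valid categorical distribution.
Q = (1.0 - beta) * np.eye(K) + beta * np.ones((K, K)) / K

# t-step marginal transition Q_bar = Q^t; rows remain distributions and
# converge to uniform as t grows (the discrete analogue of pure noise).
Q_bar = np.linalg.matrix_power(Q, 50)

rng = np.random.default_rng(0)
x0 = 2                             # clean token
xt = rng.choice(K, p=Q_bar[x0])    # corrupted token after 50 steps
```

Other choices of Q (e.g. absorbing-state or discretized-Gaussian transitions) plug into the same forward-marginal computation, which is what makes the transition matrix the key design decision.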
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.