Geometry of Score Based Generative Models
- URL: http://arxiv.org/abs/2302.04411v1
- Date: Thu, 9 Feb 2023 02:39:11 GMT
- Title: Geometry of Score Based Generative Models
- Authors: Sandesh Ghimire, Jinyang Liu, Armand Comas, Davin Hill, Aria Masoomi,
Octavia Camps, Jennifer Dy
- Abstract summary: We look at score-based generative models (also called diffusion generative models) from a geometric perspective.
We prove that both the forward process of adding noise and the backward process of generating from noise are Wasserstein gradient flows in the space of probability measures.
- Score: 2.4078030278859113
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we look at score-based generative models (also called diffusion
generative models) from a geometric perspective. From this new viewpoint, we
prove that both the forward process of adding noise and the backward process of
generating from noise are Wasserstein gradient flows in the space of probability
measures. We are the first to prove this connection. Our understanding of
score-based (and diffusion) generative models has matured and become more
complete by drawing on ideas from different fields such as Bayesian inference,
control theory, stochastic differential equations, and the Schrödinger bridge.
However, many open questions and challenges remain; one problem, for example, is
how to decrease the sampling time. We demonstrate that the geometric perspective
enables us to answer many of these questions and provides new interpretations of
some known results. Furthermore, the geometric perspective enables us to devise
an intuitive geometric solution to the problem of faster sampling. By augmenting
traditional score-based generative models with a projection step, we show that
we can generate high-quality images with significantly fewer sampling steps.
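The abstract's idea of pairing score-driven sampling with a projection step can be illustrated with a toy sketch. This is not the paper's actual method: the target distribution here is a unit Gaussian, whose score `-x` is known in closed form, and `project` is a hypothetical stand-in for the paper's projection step (here, simply clipping samples to a ball).

```python
import numpy as np

def score(x):
    # Exact score (gradient of log-density) of a standard Gaussian N(0, I)
    return -x

def project(x, radius=3.0):
    # Hypothetical projection step: pull samples back into a plausible region
    norm = np.linalg.norm(x, axis=-1, keepdims=True)
    return np.where(norm > radius, x * (radius / norm), x)

def sample(n_steps=50, dim=2, step=0.1, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((16, dim)) * 5.0  # start from wide noise
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        # Langevin-style update driven by the score
        x = x + step * score(x) + np.sqrt(2 * step) * noise
        x = project(x)  # assumed projection after each update
    return x

samples = sample()
print(samples.shape)  # (16, 2)
```

The projection keeps iterates inside a region where the score estimate is trustworthy, which is the intuition behind needing fewer sampling steps; the actual projection in the paper is geometrically motivated rather than a simple norm clip.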
Related papers
- Disentangled Representation Learning with the Gromov-Monge Gap [65.73194652234848]
Learning disentangled representations from unlabelled data is a fundamental challenge in machine learning.
We introduce a novel approach to disentangled representation learning based on quadratic optimal transport.
We demonstrate the effectiveness of our approach for quantifying disentanglement across four standard benchmarks.
arXiv Detail & Related papers (2024-07-10T16:51:32Z)
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- SteinGen: Generating Fidelitous and Diverse Graph Samples [11.582357781579997]
We introduce SteinGen to generate graphs from only one observed graph.
We show that SteinGen yields high distributional similarity (high fidelity) to the original data, combined with high sample diversity.
arXiv Detail & Related papers (2024-03-27T13:59:05Z)
- GECCO: Geometrically-Conditioned Point Diffusion Models [60.28388617034254]
Diffusion models generating images conditionally on text have recently made a splash far beyond the computer vision community.
Here, we tackle the related problem of generating point clouds, both unconditionally, and conditionally with images.
For the latter, we introduce a novel geometrically-motivated conditioning scheme based on projecting sparse image features into the point cloud.
arXiv Detail & Related papers (2023-03-10T13:45:44Z)
- Deep Equilibrium Approaches to Diffusion Models [1.4275201654498746]
Diffusion-based generative models are extremely effective in generating high-quality images.
These models typically require long sampling chains to produce high-fidelity images.
We look at diffusion models from a different perspective, that of a deep equilibrium (DEQ) fixed-point model.
arXiv Detail & Related papers (2022-10-23T22:02:19Z)
- Unveiling the Sampling Density in Non-Uniform Geometric Graphs [69.93864101024639]
We consider graphs as geometric graphs: nodes are randomly sampled from an underlying metric space, and any pair of nodes is connected if their distance is less than a specified neighborhood radius.
In a social network, communities can be modeled as densely sampled areas, and hubs as nodes with a larger neighborhood radius.
We develop methods to estimate the unknown sampling density in a self-supervised fashion.
arXiv Detail & Related papers (2022-10-15T08:01:08Z)
- Community Recovery in the Geometric Block Model [38.77098549680883]
We show that a simple triangle-counting algorithm for detecting communities in the geometric block model is near-optimal.
We also show that our algorithm performs extremely well, both theoretically and practically.
arXiv Detail & Related papers (2022-06-22T18:10:49Z)
- Dynamic Dual-Output Diffusion Models [100.32273175423146]
Iterative denoising-based generation has been shown to be comparable in quality to other classes of generative models.
A major drawback of this method is that it requires hundreds of iterations to produce a competitive result.
Recent works have proposed solutions that allow for faster generation with fewer iterations, but the image quality gradually deteriorates.
arXiv Detail & Related papers (2022-03-08T11:20:40Z)
- Finding Geometric Models by Clustering in the Consensus Space [61.65661010039768]
We propose a new algorithm for finding an unknown number of geometric models, e.g., homographies.
We present a number of applications where the use of multiple geometric models improves accuracy.
These include pose estimation from multiple generalized homographies and trajectory estimation of fast-moving objects.
arXiv Detail & Related papers (2021-03-25T14:35:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.