Improving Generative Model-based Unfolding with Schrödinger Bridges
- URL: http://arxiv.org/abs/2308.12351v2
- Date: Fri, 22 Sep 2023 17:28:21 GMT
- Title: Improving Generative Model-based Unfolding with Schrödinger Bridges
- Authors: Sascha Diefenbacher, Guan-Horng Liu, Vinicius Mikuni, Benjamin
Nachman, and Weili Nie
- Abstract summary: Machine learning-based unfolding has enabled unbinned and high-dimensional differential cross section measurements.
We propose to use Schrödinger Bridges and diffusion models to create SBUnfold, an unfolding approach that combines the strengths of both discriminative and generative models.
We show that SBUnfold achieves excellent performance compared to state-of-the-art methods on a synthetic Z+jets dataset.
- Score: 14.989614554242229
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning-based unfolding has enabled unbinned and high-dimensional
differential cross section measurements. Two main approaches have emerged in
this research area: one based on discriminative models and one based on
generative models. The main advantage of discriminative models is that they
learn a small correction to a starting simulation, while generative models scale
better to regions of phase space with little data. We propose to use
Schrödinger Bridges and diffusion models to create SBUnfold, an unfolding
approach that combines the strengths of both discriminative and generative
models. The key feature of SBUnfold is that its generative model maps one set
of events into another without having to go through a known probability density
as is the case for normalizing flows and standard diffusion models. We show
that SBUnfold achieves excellent performance compared to state-of-the-art
methods on a synthetic Z+jets dataset.
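The event-to-event mapping at the heart of this idea can be pictured with a short bridge-matching sketch: simulation supplies paired particle-level and detector-level events, a network regresses the drift of a stochastic bridge between them, and sampling integrates the learned SDE starting from measured detector-level events, so no Gaussian prior is ever visited. This is a hypothetical illustration, not the authors' SBUnfold implementation; the network, feature dimension DIM, noise level SIGMA, and step count are all assumptions.

```python
# Hypothetical bridge-matching sketch of "mapping one set of events into
# another"; NOT the authors' SBUnfold code. DIM, SIGMA, the architecture,
# and the step count are all assumptions.
import torch
import torch.nn as nn

DIM, SIGMA = 6, 0.1  # event feature dimension and bridge noise level (assumed)

drift_net = nn.Sequential(  # b_theta(x_t, t): the learned transport drift
    nn.Linear(DIM + 1, 128), nn.SiLU(),
    nn.Linear(128, 128), nn.SiLU(),
    nn.Linear(128, DIM),
)

def bridge_matching_loss(x_det, x_part):
    """Regress the drift of a Brownian bridge from detector- to particle-level events."""
    t = torch.rand(x_det.shape[0], 1) * 0.99       # stay clear of the 1/(1-t) singularity
    eps = torch.randn_like(x_det)
    # A Brownian bridge pinned at the paired events: no Gaussian prior is visited.
    x_t = (1 - t) * x_det + t * x_part + SIGMA * (t * (1 - t)).sqrt() * eps
    target = (x_part - x_t) / (1 - t)              # conditional drift of the bridge
    pred = drift_net(torch.cat([x_t, t], dim=-1))
    return ((pred - target) ** 2).mean()

@torch.no_grad()
def unfold(x_det, n_steps=100):
    """Euler-Maruyama integration of the learned SDE: detector -> particle level."""
    x, dt = x_det.clone(), 0.99 / n_steps
    for i in range(n_steps):
        t = torch.full((x.shape[0], 1), i * dt)
        drift = drift_net(torch.cat([x, t], dim=-1))
        x = x + drift * dt + SIGMA * (dt ** 0.5) * torch.randn_like(x)
    return x
```

Starting the chain from detector-level events rather than from noise is what allows the mapping to act like a small correction to the simulation, the discriminative-model strength the abstract highlights.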
Related papers
- Continuous Diffusion Model for Language Modeling [57.396578974401734]
Existing continuous diffusion models for discrete data have limited performance compared to discrete approaches.
We propose a continuous diffusion model for language modeling that incorporates the geometry of the underlying categorical distribution.
arXiv Detail & Related papers (2025-02-17T08:54:29Z)
- Generative diffusion model with inverse renormalization group flows [0.0]
Diffusion models produce data by denoising a sample corrupted by white noise.
We introduce a renormalization group-based diffusion model that leverages the multiscale nature of data distributions.
We validate the versatility of the model through applications to protein structure prediction and image generation.
arXiv Detail & Related papers (2025-01-15T19:00:01Z)
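To make the first sentence of that summary concrete, here is a minimal DDPM-style sketch of the denoising mechanism: data are corrupted with white noise on a fixed schedule, a network is trained to predict the noise, and generation runs the corruption in reverse. The schedule, horizon T, and eps_model interface are illustrative assumptions and do not reflect the paper's renormalization-group construction.

```python
# Minimal DDPM-style sketch of "denoising a sample corrupted by white noise".
# Schematic only: the schedule, horizon T, and eps_model interface are assumed,
# and none of this reflects the paper's renormalization-group construction.
import torch

T = 1000
beta = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
alpha_bar = torch.cumprod(1.0 - beta, dim=0)  # cumulative signal fraction

def noising_loss(eps_model, x0):
    """Corrupt clean data with white noise and train the model to predict that noise."""
    t = torch.randint(0, T, (x0.shape[0],))
    eps = torch.randn_like(x0)                           # the white-noise corruption
    ab = alpha_bar[t].view(-1, *[1] * (x0.dim() - 1))
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * eps
    return ((eps_model(x_t, t) - eps) ** 2).mean()

@torch.no_grad()
def sample(eps_model, shape):
    """Generate by starting from pure noise and removing it step by step."""
    x = torch.randn(shape)
    for t in reversed(range(T)):
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        ab, a = alpha_bar[t], 1.0 - beta[t]
        eps_hat = eps_model(x, torch.full((shape[0],), t))
        x = (x - beta[t] / (1 - ab).sqrt() * eps_hat) / a.sqrt() + beta[t].sqrt() * z
    return x
```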
- Accelerated Diffusion Models via Speculative Sampling [89.43940130493233]
Speculative sampling is a popular technique for accelerating inference in Large Language Models.
We extend speculative sampling to diffusion models, which generate samples via continuous, vector-valued Markov chains.
We propose various drafting strategies, including a simple and effective approach that does not require training a draft model.
arXiv Detail & Related papers (2025-01-09T16:50:16Z)
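For context, the technique that entry extends is classic token-level speculative sampling for LLMs; the sketch below shows the standard draft-then-verify accept/reject rule, not the paper's continuous, vector-valued variant. draft_p and target_p are hypothetical callables returning a next-token probability vector for a given prefix.

```python
# Sketch of classic token-level speculative sampling, the LLM baseline this
# entry extends; NOT the paper's diffusion variant. draft_p and target_p are
# hypothetical callables returning a next-token probability vector for a prefix.
import torch

def speculative_step(target_p, draft_p, prefix, k=4):
    # 1) The cheap draft model proposes k tokens autoregressively.
    drafted, ctx = [], list(prefix)
    for _ in range(k):
        q = draft_p(ctx)                          # draft distribution q(. | ctx)
        tok = torch.multinomial(q, 1).item()
        drafted.append((tok, q))
        ctx.append(tok)
    # 2) The expensive target model verifies them (one batched pass in practice).
    out = list(prefix)
    for tok, q in drafted:
        p = target_p(out)                         # target distribution p(. | out)
        if torch.rand(()) < min(1.0, (p[tok] / q[tok]).item()):
            out.append(tok)                       # accept the drafted token
        else:
            residual = torch.clamp(p - q, min=0)  # resample from the leftover (p - q)+
            out.append(torch.multinomial(residual / residual.sum(), 1).item())
            return out                            # stop at the first rejection
    return out  # all k accepted (a full version also samples one bonus token from p)
```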
- Schrödinger Bridge Type Diffusion Models as an Extension of Variational Autoencoders [0.4499833362998489]
We propose a unified framework to construct diffusion models by reinterpreting Schrödinger bridge (SB)-type models as an extension of variational autoencoders.
We find that the objective function consists of a prior-loss part and a drift-matching part.
arXiv Detail & Related papers (2024-12-24T07:43:14Z)
- Generative Unfolding with Distribution Mapping [0.0837622912636323]
We show how to extend two morphing techniques, Schrödinger Bridges and Direct Diffusion, in order to ensure that the models learn the correct conditional probabilities.
Results are presented with a standard benchmark dataset of single jet substructure as well as for a new dataset describing a 22-dimensional phase space of Z + 2-jets.
arXiv Detail & Related papers (2024-11-04T19:00:01Z)
- Bridging Model-Based Optimization and Generative Modeling via Conservative Fine-Tuning of Diffusion Models [54.132297393662654]
We introduce a hybrid method that fine-tunes cutting-edge diffusion models by optimizing reward models through RL.
We demonstrate the capability of our approach to outperform the best designs in offline data, leveraging the extrapolation capabilities of reward models.
arXiv Detail & Related papers (2024-05-30T03:57:29Z)
- Discrete Diffusion Modeling by Estimating the Ratios of the Data Distribution [67.9215891673174]
We propose score entropy as a novel loss that naturally extends score matching to discrete spaces.
We test our Score Entropy Discrete Diffusion models on standard language modeling tasks.
arXiv Detail & Related papers (2023-10-25T17:59:12Z)
- On Distillation of Guided Diffusion Models [94.95228078141626]
We propose an approach to distilling classifier-free guided diffusion models into models that are fast to sample from.
For standard diffusion models trained in pixel space, our approach generates images visually comparable to those of the original model.
For diffusion models trained in latent space (e.g., Stable Diffusion), our approach is able to generate high-fidelity images using as few as 1 to 4 denoising steps.
arXiv Detail & Related papers (2022-10-06T18:03:56Z)
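The distillation idea in that entry can be sketched in a few lines: the two classifier-free-guided teacher evaluations are folded into a single student evaluation conditioned on the guidance weight w. The signatures, sampled weight range, and tensor shapes below are assumptions, not the paper's training code.

```python
# Sketch of the core distillation trick: fold the two classifier-free-guided
# teacher passes into one student pass conditioned on the guidance weight w.
# Shapes, the weight range, and model signatures are assumptions, not the
# paper's training code.
import torch

def distill_loss(student, teacher, x_t, t, cond, null_cond):
    w = torch.rand(x_t.shape[0]) * 4.0             # sample a guidance weight (assumed range)
    with torch.no_grad():
        eps_c = teacher(x_t, t, cond)              # conditional teacher pass
        eps_u = teacher(x_t, t, null_cond)         # unconditional teacher pass
        w_ = w.view(-1, *[1] * (x_t.dim() - 1))
        eps_guided = eps_u + w_ * (eps_c - eps_u)  # classifier-free guidance
    # One w-conditioned student call replaces both guided teacher calls.
    return ((student(x_t, t, cond, w) - eps_guided) ** 2).mean()
```

The few-step sampling the summary mentions would come from a further, progressive distillation stage, which is not shown here.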
- A Survey on Generative Diffusion Model [75.93774014861978]
Diffusion models are an emerging class of deep generative models.
They have certain limitations, including a time-consuming iterative generation process and confinement to high-dimensional Euclidean space.
This survey presents a plethora of advanced techniques aimed at enhancing diffusion models.
arXiv Detail & Related papers (2022-09-06T16:56:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.