Scaffolding Simulations with Deep Learning for High-dimensional
Deconvolution
- URL: http://arxiv.org/abs/2105.04448v1
- Date: Mon, 10 May 2021 15:16:18 GMT
- Authors: Anders Andreassen, Patrick T. Komiske, Eric M. Metodiev, Benjamin
Nachman, Adi Suresh, and Jesse Thaler
- Abstract summary: We propose a simulation-based maximum likelihood deconvolution approach in this setting called OmniFold.
Deep learning enables this approach to be naturally unbinned and (variable- and) high-dimensional.
We show how OmniFold can not only remove detector distortions, but it can also account for noise processes and acceptance effects.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A common setting for scientific inference is the ability to sample from a
high-fidelity forward model (simulation) without having an explicit probability
density of the data. We propose a simulation-based maximum likelihood
deconvolution approach in this setting called OmniFold. Deep learning enables
this approach to be naturally unbinned and (variable- and) high-dimensional.
In contrast to model parameter estimation, the goal of deconvolution is to
remove detector distortions in order to enable a variety of downstream
inference tasks. Our approach is the deep learning generalization of the common
Richardson-Lucy approach that is also called Iterative Bayesian Unfolding in
particle physics. We show how OmniFold can not only remove detector
distortions, but it can also account for noise processes and acceptance
effects.
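The abstract positions OmniFold as the deep-learning generalization of the Richardson-Lucy method, known as Iterative Bayesian Unfolding in particle physics. A minimal binned sketch of that classical iteration may help fix ideas; the function name, flat prior, and iteration count below are illustrative assumptions, not the paper's unbinned implementation.

```python
import numpy as np

def iterative_bayesian_unfolding(response, measured, n_iter=4):
    """Binned Richardson-Lucy / Iterative Bayesian Unfolding (sketch).

    response[j, i] -- P(measured bin j | true bin i)
    measured[j]    -- observed counts in measured bin j
    Returns an estimate of the true (pre-detector) spectrum.
    """
    n_true = response.shape[1]
    # Start from a flat prior over the true bins (an arbitrary choice here).
    truth = np.full(n_true, measured.sum() / n_true)
    for _ in range(n_iter):
        # Fold the current truth estimate through the detector response.
        folded = response @ truth
        # Bayes' theorem: P(true i | measured j) ∝ response[j, i] * truth[i].
        posterior = response * truth / folded[:, None]
        # Redistribute the measured counts back onto the true bins.
        truth = posterior.T @ measured
    return truth
```

OmniFold replaces the binned posterior ratios above with classifier-based reweighting of unbinned simulated events, which is what makes the procedure naturally high-dimensional.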
Related papers
- Provable Maximum Entropy Manifold Exploration via Diffusion Models [58.89696361871563]
Exploration is critical for solving real-world decision-making problems such as scientific discovery. We introduce a novel framework that casts exploration as entropy maximization over the approximate data manifold implicitly defined by a pre-trained diffusion model. We develop an algorithm based on mirror descent that solves the exploration problem as sequential fine-tuning of a pre-trained diffusion model.
arXiv Detail & Related papers (2025-06-18T11:59:15Z)
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z)
- Multidimensional Deconvolution with Profiling [0.28587848809639416]
In many experimental contexts, it is necessary to statistically remove the impact of instrumental effects in order to physically interpret measurements.
We propose a new algorithm called Profile OmniFold (POF), which works in a similar iterative manner as the OmniFold (OF) algorithm while being able to simultaneously profile the nuisance parameters.
arXiv Detail & Related papers (2024-09-16T15:52:28Z)
- Parallel and Limited Data Voice Conversion Using Stochastic Variational Deep Kernel Learning [2.5782420501870296]
This paper proposes a voice conversion method that works with limited data.
It is based on stochastic variational deep kernel learning (SVDKL), which makes it possible to estimate non-smooth and more complex functions.
arXiv Detail & Related papers (2023-09-08T16:32:47Z)
- Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that appropriate weight normalization that is reminiscent of batch normalization can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z)
- Learning to Simulate Tree-Branch Dynamics for Manipulation [26.808346972775368]
We propose a simulation-driven inverse inference approach to model the dynamics of tree branches under manipulation.
We show that our model can predict deformation trajectories, quantify the estimation uncertainty, and outperform baseline inference algorithms.
arXiv Detail & Related papers (2023-06-06T05:17:02Z)
- Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z)
- Likelihood-Free Inference in State-Space Models with Unknown Dynamics [71.94716503075645]
We introduce a method for inferring and predicting latent states in state-space models where observations can only be simulated, and transition dynamics are unknown.
We propose a way of doing likelihood-free inference (LFI) of states and state prediction with a limited number of simulations.
arXiv Detail & Related papers (2021-11-02T12:33:42Z)
- Arbitrary Marginal Neural Ratio Estimation for Simulation-based Inference [7.888755225607877]
We present a novel method that enables amortized inference over arbitrary subsets of the parameters, without resorting to numerical integration.
We demonstrate the applicability of the method on parameter inference of binary black hole systems from gravitational wave observations.
arXiv Detail & Related papers (2021-10-01T14:35:46Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for resolving such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Deep Magnification-Flexible Upsampling over 3D Point Clouds [103.09504572409449]
We propose a novel end-to-end learning-based framework to generate dense point clouds.
We first formulate the problem explicitly, which boils down to determining the weights and high-order approximation errors.
Then, we design a lightweight neural network to adaptively learn unified and sorted weights as well as the high-order refinements.
arXiv Detail & Related papers (2020-11-25T14:00:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.