Exhaustive Neural Importance Sampling applied to Monte Carlo event
generation
- URL: http://arxiv.org/abs/2005.12719v2
- Date: Tue, 21 Jul 2020 06:26:06 GMT
- Title: Exhaustive Neural Importance Sampling applied to Monte Carlo event
generation
- Authors: Sebastian Pina-Otey, Federico Sánchez, Thorsten Lux and Vicens
Gaitan
- Abstract summary: Exhaustive Neural Sampling (ENIS) is a method based on normalizing flows to find a suitable proposal density for rejection sampling automatically and efficiently.
We discuss how this technique solves common issues of the rejection algorithm.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The generation of accurate neutrino-nucleus cross-section models
needed for neutrino oscillation experiments requires the simultaneous
description of many degrees of freedom and precise calculations to model
nuclear responses. The
detailed calculation of complete models makes the Monte Carlo generators slow
and impractical. We present Exhaustive Neural Importance Sampling (ENIS), a
method based on normalizing flows to find a suitable proposal density for
rejection sampling automatically and efficiently, and discuss how this
technique solves common issues of the rejection algorithm.
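The core mechanism ENIS automates, rejection sampling against a learned proposal density, can be illustrated with a minimal sketch. Here a uniform density stands in for the trained normalizing-flow proposal; the toy target, the bound `m`, and the sample count are illustrative, not taken from the paper.

```python
import random

def rejection_sample(target_pdf, proposal_sample, proposal_pdf, m, n, rng):
    """Draw n samples from target_pdf by rejection sampling.

    m must satisfy target_pdf(x) <= m * proposal_pdf(x) for all x;
    the closer the proposal tracks the target, the smaller m can be
    and the higher the acceptance rate (the property ENIS optimizes
    by fitting the proposal with a normalizing flow).
    """
    samples = []
    while len(samples) < n:
        x = proposal_sample(rng)
        # Accept x with probability target(x) / (m * proposal(x)).
        if rng.random() * m * proposal_pdf(x) <= target_pdf(x):
            samples.append(x)
    return samples

# Toy 1D target on [0, 1]: p(x) = 3 x^2, with a uniform proposal
# standing in for the trained flow; m = 3 bounds p(x) / q(x).
rng = random.Random(0)
samples = rejection_sample(lambda x: 3.0 * x * x,
                           lambda r: r.random(),  # q = Uniform(0, 1)
                           lambda x: 1.0,         # q(x) = 1 on [0, 1]
                           m=3.0, n=20000, rng=rng)
mean = sum(samples) / len(samples)  # E[x] under p(x) = 3 x^2 is 3/4
```

With the flat proposal only about one draw in three is accepted; a flow-based proposal close to the target would push the acceptance rate toward one, which is the efficiency gain the abstract refers to.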
Related papers
- Simulation-based inference for Precision Neutrino Physics through Neural Monte Carlo tuning [0.0]
We propose a solution using neural likelihood estimation within the simulation-based inference framework. We develop two complementary neural density estimators that model likelihoods of calibration data. Our framework offers flexibility to choose the most appropriate method for specific needs.
arXiv Detail & Related papers (2025-07-31T07:33:05Z) - Numerical Generalized Randomized Hamiltonian Monte Carlo for piecewise smooth target densities [0.0]
Generalized randomized Hamiltonian Monte Carlo (GRHMC) processes are proposed for sampling continuous densities with discontinuous gradients and piecewise smooth targets.
It is argued that the techniques lead to GRHMC processes that admit the desired target distribution as the invariant distribution in both scenarios.
arXiv Detail & Related papers (2025-04-25T09:41:57Z) - Neural Network Approach to Stochastic Dynamics for Smooth Multimodal Density Estimation [0.0]
We extend the Metropolis-Adjusted Langevin Diffusion algorithm by modelling the eigenvalues of the preconditioning matrix as a random matrix.
The proposed method provides a fully adaptive mechanism that tunes proposal densities to exploit the geometry of local structures of statistical models.
arXiv Detail & Related papers (2025-03-22T16:17:12Z) - Neural Flow Samplers with Shortcut Models [19.81513273510523]
Continuous flow-based neural samplers offer a promising approach to generating samples from unnormalized densities. We introduce an improved estimator for these challenging quantities, employing a velocity-driven Sequential Monte Carlo method. Our proposed Neural Flow Shortcut Sampler empirically outperforms existing flow-based neural samplers on both synthetic datasets and complex n-body system targets.
arXiv Detail & Related papers (2025-02-11T07:55:41Z) - Machine learning-enabled velocity model building with uncertainty quantification [0.41942958779358674]
Accurately characterizing migration velocity models is crucial for a wide range of geophysical applications.
Traditional velocity model building methods are powerful but often struggle with the inherent complexities of the inverse problem.
We propose a scalable methodology that integrates generative modeling, in the form of Diffusion networks, with physics-informed summary statistics.
arXiv Detail & Related papers (2024-11-11T01:36:48Z) - Variational Tensor Network Simulation of Gaussian Boson Sampling and Beyond [0.0]
We propose a classical simulation tool for general continuous variable sampling problems.
We reformulate the sampling problem as that of finding the ground state of a simple few-body Hamiltonian.
We validate our method by simulating Gaussian Boson Sampling, where we achieve results comparable to the state of the art.
arXiv Detail & Related papers (2024-10-24T13:43:06Z) - Conditional Lagrangian Wasserstein Flow for Time Series Imputation [3.914746375834628]
We propose a novel method for time series imputation called Conditional Lagrangian Wasserstein Flow.
The proposed method leverages the (conditional) optimal transport theory to learn the probability flow in a simulation-free manner.
The experimental results on real-world datasets show that the proposed method achieves competitive performance on time series imputation.
arXiv Detail & Related papers (2024-10-10T02:46:28Z) - Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density as transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z) - Iterated Denoising Energy Matching for Sampling from Boltzmann Densities [109.23137009609519]
We propose Iterated Denoising Energy Matching (iDEM).
iDEM alternates between (I) sampling regions of high model density from a diffusion-based sampler and (II) using these samples in our matching objective.
We show that the proposed approach achieves state-of-the-art performance on all metrics and trains $2$-$5\times$ faster.
arXiv Detail & Related papers (2024-02-09T01:11:23Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Efficient Propagation of Uncertainty via Reordering Monte Carlo Samples [0.7087237546722617]
Uncertainty propagation is a technique to determine model output uncertainties based on the uncertainty in its input variables.
In this work, we investigate the hypothesis that while all samples are useful on average, some samples must be more useful than others.
We introduce a methodology to adaptively reorder MC samples and show how it results in reduction of computational expense of UP processes.
arXiv Detail & Related papers (2023-02-09T21:28:15Z) - Importance sampling for stochastic quantum simulations [68.8204255655161]
We introduce the qDrift protocol, which builds random product formulas by sampling from the Hamiltonian according to the coefficients.
We show that the simulation cost can be reduced while achieving the same accuracy, by considering the individual simulation cost during the sampling stage.
Results are confirmed by numerical simulations performed on a lattice nuclear effective field theory.
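The qDrift idea summarized above, building a random product formula by drawing Hamiltonian terms with probability proportional to their coefficients, can be sketched as follows. Only the term-selection stage is shown (applying the sampled unitaries is omitted), and the three-term Hamiltonian is a hypothetical example.

```python
import random

def qdrift_schedule(coeffs, n_steps, rng):
    """Sample a qDrift schedule for H = sum_j h_j H_j: each step picks
    one term index j with probability |h_j| / lam, lam = sum_j |h_j|.
    Returns the list of chosen term indices."""
    lam = sum(abs(h) for h in coeffs)
    weights = [abs(h) / lam for h in coeffs]
    return [rng.choices(range(len(coeffs)), weights=weights)[0]
            for _ in range(n_steps)]

# Hypothetical coefficients of a 3-term Hamiltonian.
rng = random.Random(0)
coeffs = [0.5, 0.3, 0.2]
schedule = qdrift_schedule(coeffs, 10000, rng)
# Empirical selection frequencies approach |h_j| / lam for long schedules.
freqs = [schedule.count(j) / len(schedule) for j in range(len(coeffs))]
```

The cited paper's refinement reweights this distribution by the per-term simulation cost rather than the coefficient alone; the proportional-to-coefficient sampling shown here is the baseline protocol.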
arXiv Detail & Related papers (2022-12-12T15:06:32Z) - Flow-based sampling in the lattice Schwinger model at criticality [54.48885403692739]
Flow-based algorithms may provide efficient sampling of field distributions for lattice field theory applications.
We provide a numerical demonstration of robust flow-based sampling in the Schwinger model at the critical value of the fermion mass.
arXiv Detail & Related papers (2022-02-23T19:00:00Z) - Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z) - Simulating the Time Projection Chamber responses at the MPD detector
using Generative Adversarial Networks [0.0]
In this work, we demonstrate a novel approach to speed up the simulation of the Time Projection Chamber tracker of the MPD experiment at the NICA accelerator complex.
Our method is based on a Generative Adversarial Network, a deep learning technique allowing for implicit non-parametric estimation of the population distribution for a given set of objects.
To evaluate the quality of the proposed model, we integrate it into the MPD software stack and demonstrate that it produces high-quality events similar to the detailed simulator.
arXiv Detail & Related papers (2020-12-08T17:57:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.