Full Event Particle-Level Unfolding with Variable-Length Latent Variational Diffusion
- URL: http://arxiv.org/abs/2404.14332v2
- Date: Wed, 30 Oct 2024 14:39:15 GMT
- Title: Full Event Particle-Level Unfolding with Variable-Length Latent Variational Diffusion
- Authors: Alexander Shmakov, Kevin Greif, Michael James Fenton, Aishik Ghosh, Pierre Baldi, Daniel Whiteson
- Abstract summary: Generative machine learning models have shown promise for performing unbinned unfolding in a high number of dimensions.
A novel modification to the variational latent diffusion model (VLD) approach to generative unfolding is presented.
The performance of this method is evaluated in the context of semi-leptonic top quark pair production at the Large Hadron Collider.
- Score: 44.6263403467016
- Abstract: The measurements performed by particle physics experiments must account for the imperfect response of the detectors used to observe the interactions. One approach, unfolding, statistically adjusts the experimental data for detector effects. Recently, generative machine learning models have shown promise for performing unbinned unfolding in a high number of dimensions. However, all current generative approaches are limited to unfolding a fixed set of observables, making them unable to perform full-event unfolding in the variable dimensional environment of collider data. A novel modification to the variational latent diffusion model (VLD) approach to generative unfolding is presented, which allows for unfolding of high- and variable-dimensional feature spaces. The performance of this method is evaluated in the context of semi-leptonic top quark pair production at the Large Hadron Collider.
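The abstract does not spell out the mechanism, but one common way for a diffusion model to handle variable-dimensional events is to pad each particle list to a fixed latent shape and mask the padding out of both the noising process and the loss. The sketch below (plain NumPy, with invented shapes and multiplicities; an illustration of the padding-and-masking pattern, not the paper's VLD architecture) shows the idea:

```python
import numpy as np

rng = np.random.default_rng(0)

def pad_and_mask(events, max_len, feat_dim):
    """Pad variable-length particle lists into one fixed-shape array plus a mask."""
    x = np.zeros((len(events), max_len, feat_dim))
    mask = np.zeros((len(events), max_len), dtype=bool)
    for i, ev in enumerate(events):
        x[i, : len(ev)] = ev
        mask[i, : len(ev)] = True
    return x, mask

def forward_diffuse(x0, alpha_bar, noise):
    """Standard DDPM forward step: x_t = sqrt(a_bar) * x0 + sqrt(1 - a_bar) * eps."""
    return np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise

# Two toy "events" with different particle multiplicities, 4 features each
events = [rng.normal(size=(3, 4)), rng.normal(size=(5, 4))]
x0, mask = pad_and_mask(events, max_len=6, feat_dim=4)

eps = rng.normal(size=x0.shape)
x_t = forward_diffuse(x0, alpha_bar=0.5, noise=eps)
x_t[~mask] = 0.0  # padded slots carry no physics

# A denoiser would be trained with a masked loss so padding never contributes
dummy_pred = np.zeros_like(eps)
masked_mse = ((dummy_pred - eps) ** 2)[mask].mean()
```

Because the loss is averaged only over unpadded slots, events of any multiplicity up to the chosen maximum share one latent shape without the padding biasing the training signal.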
Related papers
- A Comprehensive Evaluation of Generative Models in Calorimeter Shower Simulation [0.0]
"Fast Simulation" has been pivotal in overcoming computational bottlenecks.
The use of deep-generative models has sparked a surge of interest in surrogate modeling for detector simulations.
Our evaluation revealed that the CaloDiffusion and CaloScore generative models demonstrate the most accurate simulation of particle showers.
arXiv Detail & Related papers (2024-06-08T11:17:28Z)
- Variational Pseudo Marginal Methods for Jet Reconstruction in Particle Physics [2.223804777595989]
We introduce a Combinatorial Sequential Monte Carlo approach for inferring jet latent structures.
As a second contribution, we leverage the resulting estimator to develop a variational inference algorithm for parameter learning.
We illustrate our method's effectiveness through experiments using data generated with a collider physics generative model.
arXiv Detail & Related papers (2024-06-05T13:18:55Z)
- End-To-End Latent Variational Diffusion Models for Inverse Problems in High Energy Physics [61.44793171735013]
We introduce a novel unified architecture, termed latent variational diffusion models, which combines the latent learning of cutting-edge generative approaches with an end-to-end variational framework.
Our unified approach achieves a distribution-free distance to the truth over 20 times smaller than that of the non-latent state-of-the-art baseline.
arXiv Detail & Related papers (2023-05-17T17:43:10Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator implies that the latent space shows an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- Interpretable Joint Event-Particle Reconstruction for Neutrino Physics at NOvA with Sparse CNNs and Transformers [124.29621071934693]
We present a novel neural network architecture that combines the spatial learning enabled by convolutions with the contextual learning enabled by attention.
TransformerCVN simultaneously classifies each event and reconstructs every individual particle's identity.
This architecture enables us to perform several interpretability studies which provide insights into the network's predictions.
arXiv Detail & Related papers (2023-03-10T20:36:23Z)
- DIFFormer: Scalable (Graph) Transformers Induced by Energy Constrained Diffusion [66.21290235237808]
We introduce an energy constrained diffusion model which encodes a batch of instances from a dataset into evolutionary states.
We provide rigorous theory that implies closed-form optimal estimates for the pairwise diffusion strength among arbitrary instance pairs.
Experiments highlight the wide applicability of our model as a general-purpose encoder backbone with superior performance in various tasks.
arXiv Detail & Related papers (2023-01-23T15:18:54Z)
- Selectively increasing the diversity of GAN-generated samples [8.980453507536017]
We propose a novel method to selectively increase the diversity of GAN-generated samples.
We show the superiority of our method in a synthetic benchmark as well as a real-life scenario simulating data from the Zero Degree Calorimeter of the ALICE experiment at CERN.
arXiv Detail & Related papers (2022-07-04T16:27:06Z)
- Learning to discover: expressive Gaussian mixture models for multi-dimensional simulation and parameter inference in the physical sciences [0.0]
We show that density models describing multiple observables may be created using an auto-regressive Gaussian mixture model.
The model is designed to capture how observable spectra are deformed by hypothesis variations.
It may be used as a statistical model for scientific discovery in interpreting experimental observations.
arXiv Detail & Related papers (2021-08-25T21:27:29Z)
- Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv Detail & Related papers (2021-02-12T12:23:13Z)
- Embedded-physics machine learning for coarse-graining and collective variable discovery without data [3.222802562733787]
We present a novel learning framework that consistently embeds underlying physics.
We propose a novel objective based on reverse Kullback-Leibler divergence that fully incorporates the available physics in the form of the atomistic force field.
We demonstrate the algorithmic advances in terms of predictive ability and the physical meaning of the revealed CVs for a bimodal potential energy function and the alanine dipeptide.
arXiv Detail & Related papers (2020-02-24T10:28:41Z)
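The reverse Kullback-Leibler objective in the last entry has a simple toy analogue. With a hypothetical one-dimensional bimodal potential standing in for an atomistic force field, a Monte-Carlo estimate of KL(q‖p) (up to p's normalization constant) exhibits the characteristic mode-seeking behaviour of the reverse direction. This is an illustrative sketch, not the paper's coarse-graining method:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_p(x):
    """Unnormalized log-density of a bimodal target (modes at x = +/- 2),
    a stand-in for -beta * U(x) from an atomistic force field."""
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def reverse_kl(mu, sigma, n=20000):
    """Monte-Carlo estimate of KL(q || p) up to p's normalization,
    with q a Gaussian N(mu, sigma^2) that we can sample directly."""
    x = mu + sigma * rng.normal(size=n)
    log_q = (-0.5 * ((x - mu) / sigma) ** 2
             - np.log(sigma) - 0.5 * np.log(2.0 * np.pi))
    return np.mean(log_q - log_p(x))

# Reverse KL is mode-seeking: a q centred on one mode scores better
# than a q of the same width straddling the gap between both modes
kl_on_mode = reverse_kl(mu=2.0, sigma=1.0)
kl_between = reverse_kl(mu=0.0, sigma=1.0)
```

Because the expectation is taken under q, regions where q places no mass never penalize the objective, which is why reverse-KL training tends to collapse onto individual modes; the paper's contribution lies in exploiting this form so that only the force field, not simulation data, is needed.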
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.