Explaining machine-learned particle-flow reconstruction
- URL: http://arxiv.org/abs/2111.12840v1
- Date: Wed, 24 Nov 2021 23:20:03 GMT
- Title: Explaining machine-learned particle-flow reconstruction
- Authors: Farouk Mokhtar, Raghav Kansal, Daniel Diaz, Javier Duarte, Joosep
Pata, Maurizio Pierini, Jean-Roch Vlimant
- Abstract summary: The particle-flow (PF) algorithm is used in general-purpose particle detectors to reconstruct a comprehensive particle-level view of the collision.
A graph neural network (GNN) model, known as the machine-learned particle-flow (MLPF) algorithm, has been developed to substitute the rule-based PF algorithm.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The particle-flow (PF) algorithm is used in general-purpose particle
detectors to reconstruct a comprehensive particle-level view of the collision
by combining information from different subdetectors. A graph neural network
(GNN) model, known as the machine-learned particle-flow (MLPF) algorithm, has
been developed to substitute the rule-based PF algorithm. However,
understanding the model's decision making is not straightforward, especially
given the complexity of the set-to-set prediction task, dynamic graph building,
and message-passing steps. In this paper, we adapt the layerwise-relevance
propagation technique for GNNs and apply it to the MLPF algorithm to gauge the
relevant nodes and features for its predictions. Through this process, we gain
insight into the model's decision-making.
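The core of layerwise-relevance propagation (LRP) is a backward pass that redistributes a prediction's relevance onto the inputs, layer by layer, in proportion to each input's contribution. A minimal numerical sketch of the epsilon-rule for dense layers (the toy network and weights below are made up for illustration and are not the MLPF model):

```python
import numpy as np

def lrp_epsilon(W, a, R, eps=1e-9):
    """Epsilon-rule of LRP: redistribute the relevance R of a layer's
    outputs onto its inputs in proportion to each contribution a_j * w_jk."""
    z = a @ W                           # pre-activations of the upper layer
    s = R / (z + eps * np.sign(z))      # stabilized ratio R_k / z_k
    return a * (W @ s)                  # R_j = a_j * sum_k w_jk * s_k

# toy 3-2-1 network with ReLU hidden units (illustrative weights)
a0 = np.array([1.0, 2.0, 0.5])
W1 = np.array([[0.5, -0.2], [0.1, 0.4], [0.3, 0.2]])
W2 = np.array([[0.6], [0.3]])

a1 = np.maximum(a0 @ W1, 0.0)              # hidden activations
out = float(a1 @ W2)                       # network output to be explained

R1 = lrp_epsilon(W2, a1, np.array([out]))  # relevance of hidden neurons
R0 = lrp_epsilon(W1, a0, R1)               # relevance of the three inputs
```

The epsilon-rule approximately conserves total relevance from layer to layer, which is what lets the per-input scores be read as a decomposition of the prediction.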
Related papers
- Enhancing Data-Assimilation in CFD using Graph Neural Networks [0.0]
We present a novel machine learning approach for data assimilation applied in fluid mechanics, based on adjoint-optimization augmented by Graph Neural Networks (GNNs) models.
We obtain our results using direct numerical simulations based on a Finite Element Method (FEM) solver; a two-fold interface between the GNN model and the solver allows the GNN's predictions to be incorporated into post-processing steps of the FEM analysis.
arXiv Detail & Related papers (2023-11-29T19:11:40Z) - Deep Unrolling for Nonconvex Robust Principal Component Analysis [75.32013242448151]
We design algorithms for Robust Principal Component Analysis (RPCA).
It consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix.
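The low-rank-plus-sparse split behind RPCA can be sketched with a naive alternating scheme, shrinking singular values for the low-rank part and soft-thresholding entries for the sparse part (this is an illustrative heuristic, not the paper's unrolled network; the matrix and thresholds are made up):

```python
import numpy as np

def rpca_sketch(M, lam=None, mu=None, n_iter=100):
    """Alternating sketch of robust PCA (principal component pursuit):
    split M into a low-rank part L (singular-value thresholding)
    and a sparse part S (elementwise soft-thresholding)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * np.abs(M).sum() / (m * n)
    S = np.zeros_like(M)
    for _ in range(n_iter):
        U, sig, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U * np.maximum(sig - mu, 0.0)) @ Vt   # shrink singular values
        S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam * mu, 0.0)
    return L, S

# rank-1 matrix corrupted by two large sparse outliers (made-up data)
L0 = np.outer(np.arange(1.0, 6.0), np.arange(1.0, 5.0))
M = L0.copy()
M[0, 0] += 10.0
M[3, 2] -= 8.0
L, S = rpca_sketch(M)
```

By construction the final residual M - L - S is bounded elementwise by the soft-threshold level, so the two parts account for essentially all of M.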
arXiv Detail & Related papers (2023-07-12T03:48:26Z) - Generalizing Backpropagation for Gradient-Based Interpretability [103.2998254573497]
We show that the gradient of a model is a special case of a more general formulation using semirings.
This observation allows us to generalize the backpropagation algorithm to efficiently compute other interpretable statistics.
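The semiring view can be illustrated on path aggregation: the ordinary gradient is a sum over input-to-output paths of the product of local derivatives, and swapping the (+, ×) semiring for (max, ×) yields the weight of the most influential path instead (the paths and derivative values below are made up for illustration):

```python
def aggregate_paths(paths, add, mul, one):
    """Fold each path's local derivatives with `mul`, then combine
    paths with `add` -- the chain rule over an arbitrary semiring."""
    total = None
    for path in paths:
        v = one
        for d in path:
            v = mul(v, d)
        total = v if total is None else add(total, v)
    return total

# local derivatives along three hypothetical input-to-output paths
paths = [[0.5, 2.0], [1.5, -1.0], [0.25, 4.0]]

# (+, *) semiring: the ordinary backpropagated gradient
grad = aggregate_paths(paths, add=lambda a, b: a + b,
                       mul=lambda a, b: a * b, one=1.0)

# (max, *) semiring over magnitudes: the most influential path
top = aggregate_paths([[abs(d) for d in p] for p in paths],
                      add=max, mul=lambda a, b: a * b, one=1.0)
```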
arXiv Detail & Related papers (2023-07-06T15:19:53Z) - Progress towards an improved particle flow algorithm at CMS with machine
learning [8.3763093941108]
Particle-flow (PF) reconstruction is of central importance to event reconstruction in the CMS experiment at the CERN LHC.
In recent years, the machine learned particle-flow (MLPF) algorithm, a graph neural network that performs PF reconstruction, has been explored in CMS.
We discuss progress in CMS towards an improved implementation of MLPF reconstruction, now optimized using generator/simulation-level particle information.
This paves the way to potentially improving the detector response in terms of physical quantities of interest.
arXiv Detail & Related papers (2023-03-30T18:41:28Z) - Amortized Bayesian Inference of GISAXS Data with Normalizing Flows [0.10752246796855561]
We propose a simulation-based framework that combines variational auto-encoders and normalizing flows to estimate the posterior distribution of object parameters.
We demonstrate that our method reduces the inference cost by orders of magnitude while producing consistent results with ABC.
arXiv Detail & Related papers (2022-10-04T12:09:57Z) - Transformer with Implicit Edges for Particle-based Physics Simulation [135.77656965678196]
Transformer with Implicit Edges (TIE) captures the rich semantics of particle interactions in an edge-free manner.
We evaluate our model on diverse domains of varying complexity and materials.
arXiv Detail & Related papers (2022-07-22T03:45:29Z) - MLPF: Efficient machine-learned particle-flow reconstruction using graph
neural networks [0.0]
In general-purpose particle detectors, the particle-flow algorithm may be used to reconstruct a particle-level view of the event.
We introduce a novel, end-to-end trainable, machine-learned particle-flow algorithm based on parallelizable, scalable graph neural networks.
We report the physics and computational performance of the algorithm on a Monte Carlo dataset of top quark-antiquark pairs produced in proton-proton collisions.
arXiv Detail & Related papers (2021-01-21T12:47:54Z) - Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z) - A Lagrangian Approach to Information Propagation in Graph Neural
Networks [21.077268852378385]
In this paper, we propose a novel approach to the state computation and the learning algorithm for Graph Neural Network (GNN) models.
The state convergence procedure is implicitly expressed by the constraint satisfaction mechanism and does not require a separate iterative phase for each epoch of the learning procedure.
In fact, the computational structure is based on the search for saddle points of the Lagrangian in the adjoint space composed of weights, neural outputs (node states) and Lagrange multipliers.
arXiv Detail & Related papers (2020-02-18T16:13:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.