Sequence-based Machine Learning Models in Jet Physics
- URL: http://arxiv.org/abs/2102.06128v1
- Date: Tue, 9 Feb 2021 16:04:33 GMT
- Title: Sequence-based Machine Learning Models in Jet Physics
- Authors: Rafael Teixeira de Lima
- Abstract summary: Sequence-based modeling broadly refers to algorithms that act on data that is represented as an ordered set of input elements.
In particular, Machine Learning algorithms with sequences as inputs have seen successful applications to important problems, such as Natural Language Processing (NLP) and speech signal modeling.
We explore the application of Recurrent Neural Networks (RNNs) and other sequence-based neural network architectures to classify jets, regress jet-related quantities and to build a physics-inspired jet representation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sequence-based modeling broadly refers to algorithms that act on data that is
represented as an ordered set of input elements. In particular, Machine
Learning algorithms with sequences as inputs have seen successful applications
to important problems, such as Natural Language Processing (NLP) and speech
signal modeling. The use of this class of models in collider physics leverages
their ability to act on data with variable sequence lengths, such as
constituents inside a jet. In this document, we explore the application of
Recurrent Neural Networks (RNNs) and other sequence-based neural network
architectures to classify jets, regress jet-related quantities and to build a
physics-inspired jet representation, in connection to jet clustering
algorithms. In addition, alternatives to sequential data representations are
briefly discussed.
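To make the central idea concrete, the following is a minimal sketch (not any of the specific architectures studied in this document) of a vanilla recurrent network, written in plain NumPy, that scans a variable-length, pT-ordered list of jet constituents and returns a single jet classification score. The per-constituent features (pT, eta, phi), the network sizes, and the random weights are all illustrative placeholders.

```python
import numpy as np

# Minimal sketch: a vanilla RNN that reads a pT-ordered, variable-length list
# of jet constituents and outputs one jet classification score.
# Feature choice (pT, eta, phi) and the random weights are placeholders.

rng = np.random.default_rng(0)

N_FEATURES = 3    # per-constituent features, e.g. (pT, eta, phi)
HIDDEN_DIM = 16

W_xh = rng.normal(scale=0.1, size=(HIDDEN_DIM, N_FEATURES))   # input -> hidden
W_hh = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))   # hidden -> hidden
b_h = np.zeros(HIDDEN_DIM)
w_out = rng.normal(scale=0.1, size=HIDDEN_DIM)                # hidden -> score
b_out = 0.0


def jet_score(constituents):
    """constituents: array of shape (n_constituents, N_FEATURES); n varies per jet."""
    h = np.zeros(HIDDEN_DIM)
    for x in constituents:                      # one recurrent step per constituent
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return 1.0 / (1.0 + np.exp(-(w_out @ h + b_out)))   # sigmoid -> score in (0, 1)


# Two synthetic jets with different numbers of constituents.
jet_a = rng.normal(size=(5, N_FEATURES))
jet_b = rng.normal(size=(12, N_FEATURES))
print(jet_score(jet_a), jet_score(jet_b))
```

In practice the weights would be trained on labelled jets, and the ordering of the constituents (by decreasing pT, or following a jet clustering history) is itself a modelling choice, which is one of the points the document explores.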
Related papers
- Neural Modelling of Dynamic Systems with Time Delays Based on an
Adjusted NEAT Algorithm [0.0]
The proposed algorithm is based on the well-known NeuroEvolution of Augmenting Topologies (NEAT) algorithm.
The research involved an extended validation study based on data generated from a mathematical model of an exemplary system.
The obtained simulation results demonstrate the high effectiveness of the devised neural (black-box) models of dynamic systems with time delays.
arXiv Detail & Related papers (2023-09-21T15:04:42Z)
- Custom DNN using Reward Modulated Inverted STDP Learning for Temporal
Pattern Recognition [0.0]
Temporal spike recognition plays a crucial role in various domains, including anomaly detection, keyword spotting and neuroscience.
This paper presents a novel algorithm for efficient temporal spike pattern recognition on sparse event series data.
arXiv Detail & Related papers (2023-07-15T18:57:27Z)
- Compositional Learning of Dynamical System Models Using Port-Hamiltonian
Neural Networks [32.707730631343416]
We present a framework for learning composite models of dynamical systems from data.
Neural network submodels are trained on trajectory data generated by relatively simple subsystems.
We demonstrate the novel capabilities of the proposed framework through numerical examples.
arXiv Detail & Related papers (2022-12-01T22:22:38Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning
Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z)
- Explaining machine-learned particle-flow reconstruction [0.0]
The particle-flow (PF) algorithm is used in general-purpose particle detectors to reconstruct a comprehensive particle-level view of the collision.
A graph neural network (GNN) model, known as the machine-learned particle-flow (MLPF) algorithm, has been developed to substitute the rule-based PF algorithm.
arXiv Detail & Related papers (2021-11-24T23:20:03Z)
- Convolutional generative adversarial imputation networks for
spatio-temporal missing data in storm surge simulations [86.5302150777089]
Generative Adversarial Imputation Nets (GAIN) and GAN-based techniques have attracted attention as unsupervised machine learning methods.
We name our proposed method Convolutional Generative Adversarial Imputation Nets (Conv-GAIN).
arXiv Detail & Related papers (2021-11-03T03:50:48Z)
- Conditionally Parameterized, Discretization-Aware Neural Networks for
Mesh-Based Modeling of Physical Systems [0.0]
We generalize the idea of conditional parametrization -- using trainable functions of input parameters.
We show that conditionally parameterized networks provide superior performance compared to their traditional counterparts.
A network architecture named CP-GNet is also proposed as the first deep learning model capable of standalone prediction of reacting flows on meshes.
arXiv Detail & Related papers (2021-09-15T20:21:13Z)
- Rank-R FNN: A Tensor-Based Learning Model for High-Order Data
Classification [69.26747803963907]
The Rank-R Feedforward Neural Network (FNN) is a tensor-based nonlinear learning model that imposes the Canonical/Polyadic decomposition on its parameters.
First, it handles inputs as multilinear arrays, bypassing the need for vectorization, and can thus fully exploit the structural information along every data dimension.
We establish the universal approximation and learnability properties of Rank-R FNN, and we validate its performance on real-world hyperspectral datasets.
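As a rough illustration of the rank-R idea in the entry above (a sketch under assumed shapes and a softmax readout, not the authors' exact formulation), the layer below constrains each hidden unit's weight tensor to a rank-R CP form and contracts it directly with a third-order input:

```python
import numpy as np

# Sketch of a feedforward layer whose weights are constrained to a rank-R
# CP (Canonical/Polyadic) structure, so it acts directly on a third-order
# input tensor (e.g. a small hyperspectral patch) without vectorising it.
# Shapes, rank and the readout are illustrative choices.

rng = np.random.default_rng(3)

I1, I2, I3 = 5, 5, 8          # input tensor dimensions
RANK, HIDDEN, CLASSES = 3, 6, 4

# CP factors: hidden unit k has weight tensor W_k = sum_r a[k,r] o b[k,r] o c[k,r]
A = rng.normal(scale=0.1, size=(HIDDEN, RANK, I1))
B = rng.normal(scale=0.1, size=(HIDDEN, RANK, I2))
C = rng.normal(scale=0.1, size=(HIDDEN, RANK, I3))
bias = np.zeros(HIDDEN)
V = rng.normal(scale=0.1, size=(CLASSES, HIDDEN))   # linear classifier on top


def rank_r_layer(X):
    """X: (I1, I2, I3) input tensor -> class probabilities."""
    # <X, W_k> computed factor-by-factor, never forming the full weight tensors.
    pre = np.einsum('pqs,krp,krq,krs->k', X, A, B, C) + bias
    hidden = np.tanh(pre)
    logits = V @ hidden
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()


X = rng.normal(size=(I1, I2, I3))   # one synthetic high-order sample
print(rank_r_layer(X))
```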
arXiv Detail & Related papers (2021-04-11T16:37:32Z)
- Connecting Weighted Automata, Tensor Networks and Recurrent Neural
Networks through Spectral Learning [58.14930566993063]
We present connections between three models used in different research fields: weighted finite automata (WFA) from formal languages and linguistics, recurrent neural networks used in machine learning, and tensor networks.
We introduce the first provable learning algorithm for linear 2-RNNs defined over sequences of continuous input vectors.
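A linear second-order RNN (2-RNN) of the kind referenced above can be sketched in a few lines; the parameters here are random placeholders rather than the ones a spectral learning algorithm would recover:

```python
import numpy as np

# Minimal sketch of a linear second-order RNN (2-RNN) acting on a sequence of
# continuous input vectors. All parameters are random placeholders.

rng = np.random.default_rng(2)

HIDDEN_DIM, INPUT_DIM, OUTPUT_DIM = 4, 3, 2
alpha = rng.normal(size=HIDDEN_DIM)                                  # initial state
A = rng.normal(scale=0.3, size=(HIDDEN_DIM, INPUT_DIM, HIDDEN_DIM))  # bilinear map
Omega = rng.normal(size=(OUTPUT_DIM, HIDDEN_DIM))                    # linear readout


def run_2rnn(inputs):
    """inputs: (T, INPUT_DIM) sequence of vectors; returns the final output vector."""
    h = alpha
    for x in inputs:
        # Bilinear (hence "second-order") update: h_new[j] = sum_{i,k} h[i] x[k] A[i,k,j]
        h = np.einsum('i,k,ikj->j', h, x, A)
    return Omega @ h


sequence = rng.normal(size=(7, INPUT_DIM))   # a length-7 sequence of vectors
print(run_2rnn(sequence))
```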
arXiv Detail & Related papers (2020-10-19T15:28:00Z)
- Learned Factor Graphs for Inference from Stationary Time Sequences [107.63351413549992]
We propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences.
Neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence.
We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data.
arXiv Detail & Related papers (2020-06-05T07:06:19Z)
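To make the last entry's idea concrete, the following is a minimal sum-product (forward-backward) sketch on a chain-structured, stationary factor graph; a fixed transition table and toy observation factors stand in for the components that the paper learns with neural networks.

```python
import numpy as np

# Sum-product (forward-backward) message passing on a chain factor graph for a
# stationary time sequence. A hand-written transition factor and random
# observation factors stand in for learned neural-network factors.

rng = np.random.default_rng(1)

N_STATES = 2
T = 6
psi = np.array([[0.9, 0.1],                      # stationary pairwise factor psi(s_{t-1}, s_t)
                [0.2, 0.8]])
phi = rng.uniform(0.1, 1.0, size=(T, N_STATES))  # toy observation factors phi[t, s]

alpha = np.zeros((T, N_STATES))                  # forward messages
beta = np.ones((T, N_STATES))                    # backward messages
alpha[0] = phi[0] / N_STATES                     # uniform prior over the first state
for t in range(1, T):
    alpha[t] = phi[t] * (alpha[t - 1] @ psi)
    alpha[t] /= alpha[t].sum()                   # normalise for numerical stability
for t in range(T - 2, -1, -1):
    beta[t] = psi @ (phi[t + 1] * beta[t + 1])
    beta[t] /= beta[t].sum()

# Posterior marginals p(s_t | x_{1:T}), obtained by normalising alpha * beta.
posterior = alpha * beta
posterior /= posterior.sum(axis=1, keepdims=True)
print(posterior)
```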
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.