Advancing Physics Data Analysis through Machine Learning and Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2410.14760v1
- Date: Fri, 18 Oct 2024 11:05:52 GMT
- Title: Advancing Physics Data Analysis through Machine Learning and Physics-Informed Neural Networks
- Authors: Vasileios Vatellis
- Abstract summary: This project evaluates various machine learning (ML) algorithms for physics data analysis.
We apply these techniques to a binary classification task that distinguishes experimentally viable simulated scenarios from non-viable ones.
XGBoost emerged as the preferred choice among the evaluated machine learning algorithms for its speed and effectiveness.
- Abstract: In an era increasingly focused on green computing and explainable AI, revisiting traditional approaches in theoretical and phenomenological particle physics is paramount. This project evaluates various machine learning (ML) algorithms, including Nearest Neighbors, Decision Trees, Random Forest, AdaBoost, Naive Bayes, Quadratic Discriminant Analysis (QDA), and XGBoost, alongside standard neural networks and a novel Physics-Informed Neural Network (PINN) for physics data analysis. We apply these techniques to a binary classification task that distinguishes experimentally viable simulated scenarios from non-viable ones based on Higgs observables and essential parameters. Through this comprehensive analysis, we aim to showcase the capabilities and computational efficiency of each model in binary classification tasks, thereby contributing to the ongoing discourse on integrating ML and Deep Neural Networks (DNNs) into physics research. In this study, XGBoost emerged as the preferred choice among the evaluated machine learning algorithms for its speed and effectiveness, especially in the initial stages of computation with limited datasets. However, while standard Neural Networks and Physics-Informed Neural Networks (PINNs) demonstrated superior performance in terms of accuracy and adherence to physical laws, they require more computational time. These findings underscore the trade-offs between computational efficiency and model sophistication.
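As a sketch of the kind of comparison the abstract describes, the following trains an XGBoost binary classifier on synthetic data; the feature set and labeling rule are illustrative assumptions, not the paper's Higgs dataset.
```python
# Minimal sketch: XGBoost on a binary "viable vs. excluded" task.
# The features below are hypothetical stand-ins for Higgs observables
# and model parameters, not the paper's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 6))
# Toy labeling rule: "viable" when a quadratic combination stays in a band.
y = ((X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2]) < 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                    eval_metric="logloss")
clf.fit(X_tr, y_tr)
print("XGBoost test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```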
Related papers
- Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
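As an illustration of the cited reduction (not the paper's actual NeuRLP solver), the toy below casts the forward-Euler residuals of a linear ODE as a linear program:
```python
# Toy reduction of a linear ODE to a linear program: drive the
# forward-Euler residuals of y' = a*y to zero by minimizing a shared
# slack variable eps. Grid size and coefficients are illustrative.
import numpy as np
from scipy.optimize import linprog

a, h, N, y0 = -1.0, 0.05, 40, 1.0
# Variables: y_1..y_N plus one slack eps; objective: minimize eps.
c = np.zeros(N + 1); c[-1] = 1.0
A_ub, b_ub = [], []
for k in range(N):
    coef = np.zeros(N + 1)
    coef[k] = 1.0                       # coefficient of y_{k+1}
    if k > 0:
        coef[k - 1] = -(1 + h * a)      # coefficient of y_k
    rhs = (1 + h * a) * y0 if k == 0 else 0.0
    up = coef.copy(); up[-1] = -1.0     #  r_k - eps <= rhs
    lo = -coef;       lo[-1] = -1.0     # -r_k - eps <= -rhs
    A_ub += [up, lo]; b_ub += [rhs, -rhs]

bounds = [(None, None)] * N + [(0, None)]
sol = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=bounds)
y = np.concatenate([[y0], sol.x[:N]])
print("max |y - exp(a t)|:", np.max(np.abs(y - np.exp(a * h * np.arange(N + 1)))))
```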
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
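A minimal sketch of how such hard linear equality constraints can be enforced: the raw network output is orthogonally projected onto the constraint set, with the closed form following from the KKT conditions of the nearest-point problem. The constraint matrix and values here are illustrative.
```python
# Project a raw network output z onto {x : A x = b}, i.e. solve
# min ||x - z||^2 s.t. A x = b in closed form via its KKT conditions.
import numpy as np

A = np.array([[1.0, 1.0, 1.0]])   # e.g., a mass-balance constraint: sum(x) = b
b = np.array([1.0])
M = A.T @ np.linalg.inv(A @ A.T)  # precomputable; fixed during training

def project(z):
    """Orthogonal projection of z onto {x : A x = b}."""
    return z - M @ (A @ z - b)

z = np.array([0.7, 0.5, 0.1])     # hypothetical unconstrained NN output
x = project(z)
print(x, "constraint residual:", A @ x - b)  # residual is ~0 by construction
```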
arXiv Detail & Related papers (2024-02-11T17:40:26Z)
- A Comparison Between Invariant and Equivariant Classical and Quantum Graph Neural Networks [3.350407101925898]
Deep geometric methods, such as graph neural networks (GNNs), have been leveraged for various data analysis tasks in high-energy physics.
One typical task is jet tagging, where jets are viewed as point clouds with distinct features and edge connections between their constituent particles.
In this paper, we perform a fair and comprehensive comparison between classical graph neural networks (GNNs) and their quantum counterparts.
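A toy illustration of the invariance at stake: one round of message passing over a jet's particle graph followed by sum pooling, so relabeling the constituents leaves the jet embedding unchanged. Features, adjacency, and weights are random placeholders.
```python
# Permutation-invariant readout over a jet viewed as a particle graph.
import numpy as np

rng = np.random.default_rng(1)
n_particles, d = 8, 4
X = rng.normal(size=(n_particles, d))        # per-particle features (pt, eta, ...)
Adj = (rng.random((n_particles, n_particles)) < 0.3).astype(float)
Adj = np.maximum(Adj, Adj.T)                 # undirected edges
W = rng.normal(size=(d, d))

H = np.tanh((Adj @ X) @ W)                   # aggregate neighbors, transform
jet_embedding = H.sum(axis=0)                # sum pooling: permutation invariant

perm = rng.permutation(n_particles)          # relabel the constituents
H_p = np.tanh((Adj[perm][:, perm] @ X[perm]) @ W)
print(np.allclose(jet_embedding, H_p.sum(axis=0)))  # True
```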
arXiv Detail & Related papers (2023-11-30T16:19:13Z)
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
Neural Operators provide a principled framework for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
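A minimal sketch of one common neural-operator ingredient, the spectral convolution used in Fourier Neural Operators; the weights below stand in for trained parameters.
```python
# Spectral convolution: go to Fourier space, apply learned weights to the
# lowest modes, transform back. The map is independent of grid resolution.
import numpy as np

def spectral_conv_1d(u, weights, n_modes):
    """u: function sampled on a uniform grid; weights: complex (n_modes,)."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = weights * u_hat[:n_modes]  # act only on low modes
    return np.fft.irfft(out_hat, n=len(u))

x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(4 * x)
rng = np.random.default_rng(0)
w = rng.normal(size=16) + 1j * rng.normal(size=16)   # stand-in for trained weights
v = spectral_conv_1d(u, w, n_modes=16)
print(v.shape)
```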
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key to the performance of large-kernel convolutional neural network (LKCNN) models.
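A toy version of the task setup, assuming logistic-map data in a periodic and a chaotic regime (standard regimes, though not necessarily the paper's exact generator); the autocorrelation makes the input periodicity highlighted above directly visible.
```python
# Generate regular-versus-chaotic series and expose input periodicity.
import numpy as np

def logistic_series(r, n=500, burn=100, x0=0.5):
    x, out = x0, []
    for i in range(n + burn):
        x = r * x * (1 - x)
        if i >= burn:
            out.append(x)
    return np.array(out)

regular = logistic_series(3.5)   # settles onto a period-4 cycle
chaotic = logistic_series(3.9)   # chaotic regime

def autocorr(x, lag):
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

# Strong lag-4 peak for the regular series, decay for the chaotic one.
print("lag-4 autocorrelation:", autocorr(regular, 4), autocorr(chaotic, 4))
```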
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning tasks into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
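An illustrative sketch of the spatial half of such a decomposition: an H x W field is split into interleaved coarse subfields that lighter solvers can process in parallel, then re-interleaved losslessly. Strides and shapes are assumptions.
```python
# Spatial staggering: split a field into s*s interleaved coarse subfields.
import numpy as np

def stagger(field, s=2):
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def unstagger(subfields, shape, s=2):
    out = np.empty(shape)
    for idx, sub in enumerate(subfields):
        i, j = divmod(idx, s)
        out[i::s, j::s] = sub
    return out

u = np.random.default_rng(0).normal(size=(64, 64))   # e.g., a vorticity field
parts = stagger(u)                 # four 32x32 subtasks at coarser resolution
print(len(parts), parts[0].shape)
assert np.array_equal(unstagger(parts, u.shape), u)  # lossless reassembly
```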
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Physics-Guided, Physics-Informed, and Physics-Encoded Neural Networks in Scientific Computing [0.0]
Recent breakthroughs in computing power have made it feasible to use machine learning and deep learning to advance scientific computing.
Due to their intrinsic architecture, conventional neural networks cannot be successfully trained and scoped when data is sparse.
Neural networks offer a strong foundation to digest physics-driven or knowledge-based constraints.
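A minimal sketch of a network digesting a physics constraint, assuming the toy ODE u'(t) = -u(t): the training loss adds the equation's residual to the usual data-misfit term. The architecture and weighting are illustrative choices.
```python
# Physics-informed training loss: data term + ODE-residual term.
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))

t_data = torch.tensor([[0.0]]); u_data = torch.tensor([[1.0]])   # u(0) = 1
t_col = torch.linspace(0, 2, 50).reshape(-1, 1).requires_grad_(True)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    u = net(t_col)
    du = torch.autograd.grad(u, t_col, torch.ones_like(u), create_graph=True)[0]
    loss_phys = ((du + u) ** 2).mean()               # residual of u' = -u
    loss_data = ((net(t_data) - u_data) ** 2).mean() # fit the sparse data
    (loss_data + loss_phys).backward()
    opt.step()
```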
arXiv Detail & Related papers (2022-11-14T15:44:07Z)
- Physics-informed ConvNet: Learning Physical Field from a Shallow Neural Network [0.180476943513092]
Modelling and forecasting multi-physical systems remain a challenge due to unavoidable data scarcity and noise.
A new framework, named the physics-informed convolutional network (PICN), is proposed from a CNN perspective.
PICN may become an alternative neural network solver in physics-informed machine learning.
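A sketch of the CNN perspective, assuming a uniform grid: PDE operators become convolutions, so a physics residual for Poisson's equation can be evaluated with a single 5-point Laplacian stencil. The grid and field below are illustrative.
```python
# Evaluate a Poisson residual on a grid with one convolution.
import numpy as np
from scipy.signal import convolve2d

h = 1.0 / 64
x, y = np.meshgrid(np.linspace(0, 1, 65), np.linspace(0, 1, 65))
u = np.sin(np.pi * x) * np.sin(np.pi * y)      # candidate field (an NN output)

laplacian = np.array([[0, 1, 0],
                      [1, -4, 1],
                      [0, 1, 0]]) / h**2
lap_u = convolve2d(u, laplacian, mode="valid")  # interior points only

f = -2 * np.pi**2 * u[1:-1, 1:-1]               # exact source for this field
residual = lap_u - f                            # physics-informed loss term
print("mean |residual|:", np.abs(residual).mean())
```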
arXiv Detail & Related papers (2022-01-26T14:35:58Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
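A toy leaky integrate-and-fire (LIF) neuron, the basic event-driven unit whose hardware realizations such surveys cover; all parameters are arbitrary illustrative values.
```python
# Leaky integrate-and-fire neuron: integrate input, spike on threshold, reset.
import numpy as np

def lif(input_current, v_th=1.0, leak=0.95):
    """Return a spike train for a current trace (one value per time step)."""
    v, spikes = 0.0, []
    for i_t in input_current:
        v = leak * v + i_t          # leaky integration of input
        if v >= v_th:               # threshold crossing -> emit a spike
            spikes.append(1)
            v = 0.0                 # reset membrane potential
        else:
            spikes.append(0)
    return np.array(spikes)

rng = np.random.default_rng(0)
out = lif(rng.uniform(0, 0.3, size=100))
print("spike count:", out.sum())    # activity is sparse and event-driven
```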
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
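A sketch of the rectified linear PSP idea in the title, assuming the kernel w * max(t - t_j, 0): each input spike contributes a linearly growing potential, making the output spike time piecewise-linear in the input spike times. Weights, times, and threshold are illustrative.
```python
# Membrane potential under a rectified linear PSP kernel; the output spike
# fires at the first threshold crossing.
import numpy as np

def membrane(t, spike_times, weights):
    return np.sum(weights * np.maximum(t - spike_times, 0.0))

spike_times = np.array([1.0, 2.0, 3.0])
weights = np.array([0.8, 0.5, 0.4])
theta = 2.0                                  # firing threshold

ts = np.linspace(0, 6, 601)
v = np.array([membrane(t, spike_times, weights) for t in ts])
t_out = ts[np.argmax(v >= theta)]            # first threshold crossing
print("output spike time ~", t_out)
```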
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Understanding and mitigating gradient pathologies in physics-informed neural networks [2.1485350418225244]
This work focuses on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data.
We present a learning rate annealing algorithm that utilizes gradient statistics during model training to balance the interplay between different terms in composite loss functions.
We also propose a novel neural network architecture that is more resilient to such gradient pathologies.
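A sketch of gradient-statistics balancing in this spirit: the weight on one loss term is set from the ratio of gradient magnitudes of the two terms and smoothed with a moving average. The specific statistics, the smoothing factor, and the placeholder losses are assumptions here.
```python
# Balance two loss terms using gradient magnitude statistics.
import torch

def grad_mags(loss, params):
    grads = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.abs().flatten() for g in grads])

net = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                          torch.nn.Linear(16, 1))
params = list(net.parameters())
lam, alpha = 1.0, 0.1                        # weight and moving-average factor

x = torch.linspace(0, 1, 32).reshape(-1, 1)
loss_phys = (net(x) ** 2).mean()             # placeholder residual term
loss_data = ((net(x) - 1.0) ** 2).mean()     # placeholder data term

# Ratio of gradient statistics sets the new weight on the data term.
lam_hat = grad_mags(loss_phys, params).max() / grad_mags(loss_data, params).mean()
lam = (1 - alpha) * lam + alpha * lam_hat.item()
total = loss_phys + lam * loss_data          # balanced composite loss
```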
arXiv Detail & Related papers (2020-01-13T21:23:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.