Regression-based Physics Informed Neural Networks (Reg-PINNs) for
Magnetopause Tracking
- URL: http://arxiv.org/abs/2306.09621v3
- Date: Fri, 23 Jun 2023 04:33:05 GMT
- Title: Regression-based Physics Informed Neural Networks (Reg-PINNs) for
Magnetopause Tracking
- Authors: Po-Han Hou and Jih-Hong Shue
- Abstract summary: We propose Regression-based Physics-Informed Neural Networks (Reg-PINNs), which combine physics-based numerical computation with vanilla machine learning.
Compared to Shue et al. [1998], our model achieves a reduction of approximately 30% in root mean square error.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ultimate goal of studying the magnetopause position is to accurately
determine its location. Both traditional empirical computation methods and the
currently popular machine learning approaches have shown promising results. In
this study, we propose Regression-based Physics-Informed Neural Networks
(Reg-PINNs), which combine physics-based numerical computation with vanilla
machine learning. This new generation of Physics-Informed Neural Networks
overcomes the limitation of previous methods, which were restricted to solving
ordinary and partial differential equations, by incorporating conventional
empirical models to aid convergence and enhance the generalization capability of the
neural network. Compared to Shue et al. [1998], our model achieves a reduction
of approximately 30% in root mean square error. The methodology presented in
this study is not only applicable to space research but can also be referenced
in studies across various fields, particularly those involving empirical
models.
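For concreteness, the Shue et al. [1998] empirical model describes the magnetopause as r = r0 * (2 / (1 + cos(theta)))^alpha, where r0 and alpha depend on the solar wind dynamic pressure Dp and the IMF Bz component. The sketch below is not the authors' released code; it only illustrates, under assumptions, how such an empirical model can be folded into a neural-network objective in the Reg-PINN spirit. The MLP architecture, the weight lam, and the coefficient values in shue1998_radius are illustrative and should be checked against the original papers.

```python
import torch
import torch.nn as nn

def shue1998_radius(theta, dp, bz):
    # Shue et al. [1998] functional form: r = r0 * (2 / (1 + cos(theta)))**alpha.
    # Coefficient values are the commonly quoted fits; treat them as assumptions.
    r0 = (10.22 + 1.29 * torch.tanh(0.184 * (bz + 8.14))) * dp ** (-1.0 / 6.6)
    alpha = (0.58 - 0.007 * bz) * (1.0 + 0.024 * torch.log(dp))
    return r0 * (2.0 / (1.0 + torch.cos(theta))) ** alpha

class RegPINN(nn.Module):
    # Small MLP mapping (theta, Dp, Bz) to the magnetopause radial distance.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)

def reg_pinn_loss(model, x, r_obs, lam=0.1):
    # Composite objective: data misfit plus a "regression" penalty pulling the
    # network toward the empirical prediction (lam is an assumed hyperparameter).
    theta, dp, bz = x[:, 0], x[:, 1], x[:, 2]
    r_pred = model(x)
    r_emp = shue1998_radius(theta, dp, bz)
    return torch.mean((r_pred - r_obs) ** 2) + lam * torch.mean((r_pred - r_emp) ** 2)
```

Training would minimize reg_pinn_loss over observed magnetopause crossings; the empirical term plays the role that the governing-equation residual plays in a standard PINN, guiding convergence where data are sparse.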
Related papers
- Advancing Physics Data Analysis through Machine Learning and Physics-Informed Neural Networks [0.0]
This project evaluates various machine learning (ML) algorithms for physics data analysis.
We apply these techniques to a binary classification task that distinguishes the experimental viability of simulated scenarios.
XGBoost emerged as the preferred choice among the evaluated machine learning algorithms for its speed and effectiveness.
arXiv Detail & Related papers (2024-10-18T11:05:52Z) - A singular Riemannian Geometry Approach to Deep Neural Networks III. Piecewise Differentiable Layers and Random Walks on $n$-dimensional Classes [49.32130498861987]
We study the case of non-differentiable activation functions, such as ReLU.
Two recent works introduced a geometric framework to study neural networks.
We illustrate our findings with some numerical experiments on classification of images and thermodynamic problems.
arXiv Detail & Related papers (2024-04-09T08:11:46Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - A Survey on Statistical Theory of Deep Learning: Approximation, Training Dynamics, and Generative Models [13.283281356356161]
We review the literature on statistical theories of neural networks from three perspectives.
Results on excess risks for neural networks are reviewed.
Papers that attempt to answer "how the neural network finds the solution that can generalize well on unseen data" are reviewed.
arXiv Detail & Related papers (2024-01-14T02:30:19Z) - Approximating Numerical Fluxes Using Fourier Neural Operators for Hyperbolic Conservation Laws [7.438389089520601]
Neural network-based methods, such as physics-informed neural networks (PINNs) and neural operators, exhibit deficiencies in robustness and generalization.
In this study, we focus on hyperbolic conservation laws by replacing traditional numerical flux with neural operators.
Our approach combines the strengths of both traditional numerical schemes and FNOs, outperforming standard FNO methods in several respects.
arXiv Detail & Related papers (2024-01-03T15:16:25Z) - Implicit neural representation with physics-informed neural networks for
the reconstruction of the early part of room impulse responses [16.89505645696765]
We exploit physics-informed neural networks to reconstruct the early part of missing room impulse responses in a linear array.
The proposed model achieves accurate reconstruction, with performance in line with state-of-the-art deep-learning and compressed-sensing techniques.
arXiv Detail & Related papers (2023-06-20T13:01:00Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce EINNs, a new class of physics-informed neural networks crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z) - Developing Constrained Neural Units Over Time [81.19349325749037]
This paper focuses on an alternative way of defining neural networks that differs from the majority of existing approaches.
The structure of the neural architecture is defined by means of a special class of constraints that also extend to the interaction with data.
The proposed theory is cast into the time domain, in which data are presented to the network in an ordered manner.
arXiv Detail & Related papers (2020-09-01T09:07:25Z) - Understanding and mitigating gradient pathologies in physics-informed
neural networks [2.1485350418225244]
This work focuses on the effectiveness of physics-informed neural networks in predicting outcomes of physical systems and discovering hidden physics from noisy data.
We present a learning rate annealing algorithm that uses gradient statistics during training to balance the interplay between the different terms in composite loss functions (a minimal sketch of this balancing idea follows the list).
We also propose a novel neural network architecture that is more resilient to such gradient pathologies.
arXiv Detail & Related papers (2020-01-13T21:23:49Z)
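The loss-balancing idea in the last entry above can be made concrete with a short sketch. The helper below is one illustrative reading of "use gradient statistics to balance composite loss terms", not that paper's exact algorithm: the name balance_loss_weights, the moving-average factor alpha, and the max/mean gradient ratio are assumptions.

```python
import torch

def balance_loss_weights(model, loss_res, loss_data, lam, alpha=0.9):
    # One balancing step: rescale the data-term weight so its gradient
    # magnitude matches the residual term's (constants are illustrative).
    params = [p for p in model.parameters() if p.requires_grad]
    g_res = torch.autograd.grad(loss_res, params, retain_graph=True, allow_unused=True)
    g_dat = torch.autograd.grad(loss_data, params, retain_graph=True, allow_unused=True)
    g_res_max = max(g.abs().max() for g in g_res if g is not None)
    g_dat_mean = torch.mean(torch.cat([g.abs().flatten() for g in g_dat if g is not None]))
    lam_hat = g_res_max / (g_dat_mean + 1e-12)
    # Exponential moving average keeps the weight from oscillating step to step.
    return alpha * lam + (1.0 - alpha) * lam_hat.item()
```

A training loop would then minimize loss_res + lam * loss_data, refreshing lam with this helper every so many optimizer steps.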