Label Propagation Training Schemes for Physics-Informed Neural Networks and Gaussian Processes
- URL: http://arxiv.org/abs/2404.05817v1
- Date: Mon, 8 Apr 2024 18:41:55 GMT
- Title: Label Propagation Training Schemes for Physics-Informed Neural Networks and Gaussian Processes
- Authors: Ming Zhong, Dehao Liu, Raymundo Arroyave, Ulisses Braga-Neto
- Abstract summary: This paper proposes a semi-supervised methodology for training physics-informed machine learning methods.
We show how these methods can ameliorate the issue of propagating information forward in time.
- Score: 12.027710824379428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a semi-supervised methodology for training physics-informed machine learning methods. This includes self-training of physics-informed neural networks and physics-informed Gaussian processes in isolation, and the integration of the two via co-training. We demonstrate via extensive numerical experiments how these methods can ameliorate the issue of propagating information forward in time, which is a common failure mode of physics-informed machine learning.
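To make the self-training idea concrete, here is a minimal sketch on a toy ODE (u' = -u). A linear RBF expansion stands in for the PINN so that each "training" step is a least-squares solve; the window schedule, data weight, and ridge term are illustrative choices, not the paper's implementation. The loop propagates labels forward in time: after each fit, the model's own prediction at the window frontier is added as a pseudo-label for the next round.

```python
import numpy as np

# Toy problem: u' = -u on [0, 3], u(0) = 1; exact solution exp(-t).
# A linear RBF expansion stands in for the PINN, so each "training"
# step reduces to a ridge least-squares solve. The self-training loop
# grows the time window and pseudo-labels its frontier.

centers = np.linspace(0.0, 3.0, 40)
width = 0.15

def phi(t):   # RBF features, shape (len(t), len(centers))
    return np.exp(-(t[:, None] - centers) ** 2 / (2 * width ** 2))

def dphi(t):  # time derivative of the features
    return phi(t) * (-(t[:, None] - centers) / width ** 2)

labels_t = np.array([0.0])   # labeled inputs (initially just the IC)
labels_u = np.array([1.0])   # labeled values

for frontier in np.arange(0.5, 3.01, 0.5):     # grow the time window
    tc = np.linspace(0.0, frontier, 200)       # collocation points
    # Physics residual u' + u = (dPhi + Phi) @ c is linear in c here.
    A = np.vstack([dphi(tc) + phi(tc),             # residual rows -> 0
                   10.0 * phi(labels_t),           # weighted data rows
                   1e-4 * np.eye(len(centers))])   # ridge term
    b = np.concatenate([np.zeros(len(tc)),
                        10.0 * labels_u,
                        np.zeros(len(centers))])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    # Self-training: add the model's own prediction at the frontier
    # as a pseudo-label for the next round.
    labels_t = np.concatenate([labels_t, [frontier]])
    labels_u = np.concatenate([labels_u, phi(np.array([frontier])) @ coef])

t = np.linspace(0.0, 3.0, 7)
print(np.abs(phi(t) @ coef - np.exp(-t)).max())  # small if propagation worked
```

The same loop structure applies with an actual PINN or physics-informed Gaussian process in place of the least-squares fit; only the inner training step changes.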
Related papers
- Neural Incremental Data Assimilation [8.817223931520381]
We introduce a deep learning approach where the physical system is modeled as a sequence of coarse-to-fine Gaussian prior distributions parametrized by a neural network.
This allows us to define an assimilation operator, which is trained in an end-to-end fashion to minimize the reconstruction error.
We illustrate our approach on chaotic dynamical physical systems with sparse observations, and compare it to traditional variational data assimilation methods.
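As a rough illustration of the coarse-to-fine Gaussian construction: in the sketch below, fixed RBF covariances with decreasing length scales replace the neural-network-parametrized priors, and plain Gaussian conditioning on the remaining observation residual replaces the learned, end-to-end-trained assimilation operator. All kernel choices and scales are assumptions made for brevity.

```python
import numpy as np

# Coarse-to-fine Gaussian assimilation sketch. The paper parametrizes
# the prior sequence with a neural network and trains the assimilation
# operator end to end; here, purely for illustration, the priors are
# fixed RBF kernels with decreasing length scales, and each stage does
# plain Gaussian conditioning on the remaining observation residual.

rng = np.random.default_rng(0)

def rbf(a, b, ell):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

x = np.linspace(0.0, 1.0, 200)              # reconstruction grid
x_obs = np.array([0.1, 0.35, 0.6, 0.9])     # sparse observations
y = np.sin(2 * np.pi * x_obs) + 0.01 * rng.standard_normal(4)

mean = np.zeros_like(x)
for ell in [0.3, 0.1, 0.03]:                # coarse -> fine scales
    K = rbf(x_obs, x_obs, ell) + 1e-4 * np.eye(len(x_obs))
    k = rbf(x, x_obs, ell)
    resid = y - np.interp(x_obs, x, mean)   # what is still unexplained
    mean = mean + k @ np.linalg.solve(K, resid)

print(mean[::40])  # refined reconstruction at a few grid points
```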
arXiv Detail & Related papers (2024-06-21T11:42:55Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
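A bare-bones version of the underlying reduction, not the paper's NeuRLP solver: discretize a linear ODE, bound the residual at each grid point with a slack variable, and minimize the total slack as a linear program (here via scipy.optimize.linprog). Grid size and discretization are illustrative choices.

```python
import numpy as np
from scipy.optimize import linprog

# Sketch of the "linear ODE -> linear program" reduction (simplified;
# this is not the paper's actual NeuRLP solver). Solve u' = -u,
# u(0) = 1 on [0, 2] by minimizing the total slack bounding the
# discretized residual at every grid point.

n, T = 41, 2.0
h = T / (n - 1)
t = np.linspace(0.0, T, n)
m = n - 1                       # number of residual constraints

# Decision vector z = [u_0 .. u_{n-1}, s_0 .. s_{m-1}].
c = np.concatenate([np.zeros(n), np.ones(m)])   # minimize sum of slacks

# Residual r_i = (u_{i+1} - u_i)/h + u_i must satisfy |r_i| <= s_i.
R = np.zeros((m, n))
for i in range(m):
    R[i, i] = -1.0 / h + 1.0
    R[i, i + 1] = 1.0 / h
A_ub = np.block([[ R, -np.eye(m)],    #  r - s <= 0
                 [-R, -np.eye(m)]])   # -r - s <= 0
b_ub = np.zeros(2 * m)

A_eq = np.zeros((1, n + m)); A_eq[0, 0] = 1.0   # initial condition u_0 = 1
b_eq = np.array([1.0])

bounds = [(None, None)] * n + [(0, None)] * m   # u free, slacks nonnegative
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
u = res.x[:n]
print(np.abs(u - np.exp(-t)).max())   # close to forward-Euler accuracy
```

At the optimum all slack variables vanish, so the LP recovers exactly the forward-Euler trajectory of the discretized ODE.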
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Nature-Inspired Local Propagation [68.63385571967267]
Natural learning processes rely on mechanisms where data representation and learning are intertwined in such a way as to respect locality.
We show that the algorithmic interpretation of the derived "laws of learning", which takes the structure of Hamiltonian equations, reduces to Backpropagation when the speed of propagation goes to infinity.
This opens the door to machine learning based on fully online information, replacing Backpropagation with the proposed local algorithm.
arXiv Detail & Related papers (2024-02-04T21:43:37Z) - On the Relationships between Graph Neural Networks for the Simulation of
Physical Systems and Classical Numerical Methods [0.0]
Recent developments in Machine Learning approaches for modelling physical systems have begun to mirror the past development of numerical methods in the computational sciences.
We give an overview of simulation approaches, which have not yet found their way into state-of-the-art Machine Learning methods.
We conclude by presenting an outlook on the potential of these approaches for making Machine Learning models for science more efficient.
arXiv Detail & Related papers (2023-03-31T21:51:00Z) - Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
History and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions about the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z) - Neural Galerkin Schemes with Active Learning for High-Dimensional Evolution Equations [44.89798007370551]
This work proposes Neural Galerkin schemes based on deep learning that generate training data with active learning for numerically solving high-dimensional partial differential equations.
Neural Galerkin schemes build on the Dirac-Frenkel variational principle to train networks by minimizing the residual sequentially over time.
Our finding is that the active gathering of training data in the proposed Neural Galerkin schemes is key to numerically realizing the expressive power of networks in high dimensions.
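A minimal sketch of the sequential-in-time idea, under strong simplifying assumptions: a three-parameter Gaussian ansatz replaces the network, explicit Euler replaces the paper's time integration, and resampling collocation points around the moving pulse is a toy stand-in for active data gathering. Each step solves the Dirac-Frenkel least-squares problem J(theta) dtheta = f(u) and advances the parameters.

```python
import numpy as np

# Sequential-in-time sketch in the spirit of Neural Galerkin schemes,
# heavily simplified: a three-parameter Gaussian ansatz replaces the
# network, explicit Euler replaces the time integrator, and resampling
# collocation points around the moving pulse is a toy stand-in for
# active data gathering.

c = 1.0                                 # advection: u_t = -c * u_x
theta = np.array([1.0, -1.0, 0.3])      # amplitude a, center b, width w

def model_and_jac(x, th):
    a, b, w = th
    u = a * np.exp(-(x - b) ** 2 / (2 * w ** 2))
    J = np.stack([u / a,                        # du/da
                  u * (x - b) / w ** 2,         # du/db
                  u * (x - b) ** 2 / w ** 3],   # du/dw
                 axis=1)
    return u, J

rng = np.random.default_rng(0)
dt, steps = 1e-3, 2000
for _ in range(steps):
    # "Active" collocation: sample where the solution currently lives.
    x = theta[1] + theta[2] * rng.uniform(-4.0, 4.0, size=64)
    u, J = model_and_jac(x, theta)
    rhs = c * u * (x - theta[1]) / theta[2] ** 2   # -c * u_x for this ansatz
    # Dirac-Frenkel: project the PDE dynamics onto the ansatz manifold.
    dtheta, *_ = np.linalg.lstsq(J, rhs, rcond=None)
    theta = theta + dt * dtheta

print(theta)   # center should have advected from -1.0 to roughly +1.0
```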
arXiv Detail & Related papers (2022-03-02T19:09:52Z) - The Physics of Machine Learning: An Intuitive Introduction for the Physical Scientist [0.0]
This article is intended for physical scientists who wish to gain deeper insights into machine learning algorithms.
We begin with a review of two energy-based machine learning algorithms, Hopfield networks and Boltzmann machines, and their connection to the Ising model.
We then delve into additional, more "practical," machine learning architectures including feedforward neural networks, convolutional neural networks, and autoencoders.
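For readers who want something executable, a tiny Hopfield-network example of the energy-based view described above: Hebbian weights store +/-1 patterns, and asynchronous updates never increase the Ising-like energy E(s) = -1/2 * s^T W s. Pattern count and corruption level are arbitrary illustrative choices.

```python
import numpy as np

# Tiny Hopfield-network example of the energy-based view: Hebbian
# weights store +/-1 patterns, and asynchronous updates never increase
# the Ising-like energy E(s) = -1/2 * s^T W s.

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 64))       # three stored patterns

W = (patterns.T @ patterns) / patterns.shape[1]    # Hebbian learning rule
np.fill_diagonal(W, 0.0)                           # no self-coupling

def energy(s):
    return -0.5 * s @ W @ s

# Corrupt a stored pattern and let the dynamics settle.
s = patterns[0].copy()
flip = rng.choice(64, size=12, replace=False)
s[flip] *= -1

for sweep in range(5):                 # asynchronous update sweeps
    for i in rng.permutation(64):
        s[i] = 1 if W[i] @ s >= 0 else -1
    print(sweep, energy(s))            # monotonically non-increasing

print("recovered stored pattern:", np.array_equal(s, patterns[0]))
```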
arXiv Detail & Related papers (2021-11-27T15:12:42Z) - Physics informed neural networks for continuum micromechanics [68.8204255655161]
Recently, physics-informed neural networks have successfully been applied to a broad variety of problems in applied mathematics and engineering.
Due to their global approximation, physics-informed neural networks have difficulty resolving localized effects and strongly nonlinear solutions through optimization.
It is shown that the domain decomposition approach accurately resolves nonlinear stress, displacement, and energy fields in heterogeneous microstructures obtained from real-world $\mu$CT scans.
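A conceptual 1D illustration of why decomposing the domain helps with localized solutions: two local RBF expansions stand in for the per-subdomain networks (an assumption made for brevity) and are glued with interface continuity conditions to resolve a steep internal layer. The tanh-layer problem is a hypothetical stand-in for the micromechanical fields, not the paper's setup.

```python
import numpy as np

# 1D domain-decomposition sketch: solve u'' = f on [0, 1] with a steep
# internal layer at x = 0.5 by gluing two local RBF models with
# interface continuity conditions. The RBF patches stand in for the
# per-subdomain networks; everything here is linear least squares.

k, w, pen = 50.0, 0.025, 1e3
u_exact = lambda x: np.tanh(k * (x - 0.5))
f = lambda x: -2 * k**2 * np.tanh(k * (x - 0.5)) / np.cosh(k * (x - 0.5))**2

def basis(x, centers):   # returns (value, 2nd derivative, 1st derivative)
    d = x[:, None] - centers[None, :]
    p = np.exp(-d**2 / (2 * w**2))
    return p, (d**2 / w**4 - 1 / w**2) * p, -d / w**2 * p

cL, cR = np.linspace(0.0, 0.5, 40), np.linspace(0.5, 1.0, 40)
xL, xR = np.linspace(0.0, 0.5, 150), np.linspace(0.5, 1.0, 150)
pL, ddL, dL = basis(xL, cL)
pR, ddR, dR = basis(xR, cR)
iL, iR = basis(np.array([0.5]), cL), basis(np.array([0.5]), cR)
Z = np.zeros((150, 40))

rows = [np.hstack([ddL, Z]),                                 # PDE, left patch
        np.hstack([Z, ddR]),                                 # PDE, right patch
        pen * np.hstack([basis(np.array([0.0]), cL)[0], np.zeros((1, 40))]),
        pen * np.hstack([np.zeros((1, 40)), basis(np.array([1.0]), cR)[0]]),
        pen * np.hstack([iL[0], -iR[0]]),                    # continuity of u
        pen * np.hstack([iL[2], -iR[2]])]                    # continuity of u'
rhs = [f(xL), f(xR),
       pen * u_exact(np.array([0.0])), pen * u_exact(np.array([1.0])),
       np.zeros(1), np.zeros(1)]

coef, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)

xt = np.linspace(0.0, 1.0, 201)
left = xt <= 0.5
u = np.where(left, basis(xt, cL)[0] @ coef[:40], basis(xt, cR)[0] @ coef[40:])
print(np.abs(u - u_exact(xt)).max())   # small when the patches resolve the layer
```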
arXiv Detail & Related papers (2021-10-14T14:05:19Z) - Physical Gradients for Deep Learning [101.36788327318669]
We find that state-of-the-art training techniques are not well-suited to many problems that involve physical processes.
We propose a novel hybrid training approach that combines higher-order optimization methods with machine learning techniques.
arXiv Detail & Related papers (2021-09-30T12:14:31Z) - Quantum-tailored machine-learning characterization of a superconducting qubit [50.591267188664666]
We develop an approach to characterize the dynamics of a quantum device and learn device parameters.
This approach outperforms physics-agnostic recurrent neural networks trained on numerically generated and experimental data.
This demonstration shows how leveraging domain knowledge improves the accuracy and efficiency of this characterization task.
arXiv Detail & Related papers (2021-06-24T15:58:57Z) - Physics-based polynomial neural networks for one-shot learning of dynamical systems from one or a few samples [0.0]
The paper describes practical results on both a simple pendulum and one of the largest X-ray sources worldwide.
It is demonstrated in practice that the proposed approach allows recovering complex physics from noisy, limited, and partial observations.
arXiv Detail & Related papers (2020-05-24T09:27:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.