h-analysis and data-parallel physics-informed neural networks
- URL: http://arxiv.org/abs/2302.08835v3
- Date: Wed, 23 Aug 2023 12:11:02 GMT
- Title: h-analysis and data-parallel physics-informed neural networks
- Authors: Paul Escapil-Inchauspé and Gonzalo A. Ruz
- Abstract summary: We explore the data-parallel acceleration of physics-informed machine learning schemes, with a focus on physics-informed neural networks (PINNs).
We detail a novel protocol based on $h$-analysis and data-parallel acceleration through the Horovod training framework.
We show that the acceleration is straightforward to implement, does not compromise training, and proves to be highly efficient and controllable.
- Score: 0.7614628596146599
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the data-parallel acceleration of physics-informed machine
learning (PIML) schemes, with a focus on physics-informed neural networks
(PINNs), on multi-GPU (graphics processing unit) architectures. In order
to develop scale-robust and high-throughput PIML models for sophisticated
applications which may require a large number of training points (e.g.,
involving complex and high-dimensional domains, non-linear operators or
multi-physics), we detail a novel protocol based on $h$-analysis and
data-parallel acceleration through the Horovod training framework. The protocol
is backed by new convergence bounds for the generalization error and the
train-test gap. We show that the acceleration is straightforward to implement,
does not compromise training, and proves to be highly efficient and
controllable, paving the way towards generic scale-robust PIML. Extensive
numerical experiments with increasing complexity illustrate its robustness and
consistency, offering a wide range of possibilities for real-world simulations.
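
The protocol couples a standard PINN loss with Horovod's allreduce-averaged gradients, so each GPU trains on its own shard of collocation points. A minimal PyTorch sketch of this pattern follows; the toy ODE u'(x) = u(x), the network size, and the learning-rate scaling are illustrative assumptions, not the paper's actual benchmarks.

```python
# Minimal data-parallel PINN in PyTorch + Horovod
# (launch with: horovodrun -np 4 python train.py).
import torch
import horovod.torch as hvd

hvd.init()
if torch.cuda.is_available():
    torch.cuda.set_device(hvd.local_rank())
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1)
).to(device)

# Optional linear learning-rate scaling with the number of workers.
opt = torch.optim.Adam(model.parameters(), lr=1e-3 * hvd.size())
opt = hvd.DistributedOptimizer(opt, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(opt, root_rank=0)

# Each rank draws its own shard of collocation points; Horovod averages
# gradients across ranks, so the effective batch grows with the GPU count.
x = torch.rand(1024 // hvd.size(), 1, device=device, requires_grad=True)

for step in range(2000):
    opt.zero_grad()
    u = model(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    residual = du - u                                   # toy PDE: u'(x) = u(x)
    bc = model(torch.zeros(1, 1, device=device)) - 1.0  # boundary value u(0) = 1
    loss = residual.pow(2).mean() + bc.pow(2).mean()
    loss.backward()
    opt.step()
```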
Related papers
- Physics-informed MeshGraphNets (PI-MGNs): Neural finite element solvers for non-stationary and nonlinear simulations on arbitrary meshes [13.41003911618347]
This work introduces PI-MGNs, a hybrid approach that combines PINNs and MGNs to solve non-stationary and nonlinear partial differential equations (PDEs) on arbitrary meshes.
Results show that the model scales well to large and complex meshes, although it is trained on small generic meshes only.
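
As a rough illustration of the MGN half of the hybrid, here is one simplified message-passing step in PyTorch; the layer sizes and residual update are assumptions, and the physics-informed part would supervise such layers with a PDE residual loss rather than labelled solutions.

```python
import torch
import torch.nn as nn

def mlp(i, o):
    return nn.Sequential(nn.Linear(i, 64), nn.ReLU(), nn.Linear(64, o))

class MGNLayer(nn.Module):
    """One simplified MeshGraphNet-style message-passing step (sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.edge_mlp = mlp(3 * dim, dim)
        self.node_mlp = mlp(2 * dim, dim)

    def forward(self, x, edge_index, e):  # x: (N, dim), e: (E, dim)
        src, dst = edge_index             # edge_index: (2, E) long tensor
        e = self.edge_mlp(torch.cat([x[src], x[dst], e], dim=-1))
        agg = torch.zeros_like(x).index_add_(0, dst, e)     # sum messages per node
        x = x + self.node_mlp(torch.cat([x, agg], dim=-1))  # residual node update
        return x, e
```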
arXiv Detail & Related papers (2024-02-16T13:34:51Z)
- Reduced Simulations for High-Energy Physics, a Middle Ground for Data-Driven Physics Research [0.0]
Subatomic particle track reconstruction is a vital task in High-Energy Physics experiments.
We provide the REDuced VIrtual Detector (REDVID) as a complexity-reduced detector model and particle collision event simulator combo.
arXiv Detail & Related papers (2023-08-30T12:50:45Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
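
For context, a minimal snnTorch simulation loop looks like the following; this is the hardware-agnostic API, not the IPU-specific optimizations of the release, and the input shapes are illustrative.

```python
import torch
import snntorch as snn

# One layer of leaky integrate-and-fire (LIF) neurons.
lif = snn.Leaky(beta=0.9)      # beta: membrane potential decay per time step
mem = lif.init_leaky()         # initial membrane state

inputs = torch.rand(100, 8)    # 100 time steps of input current into 8 neurons
spikes = []
for t in range(inputs.size(0)):
    spk, mem = lif(inputs[t], mem)  # emits a binary spike when mem crosses threshold
    spikes.append(spk)
spike_train = torch.stack(spikes)   # (100, 8) spike raster
```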
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- On Fast Simulation of Dynamical System with Neural Vector Enhanced Numerical Solver [59.13397937903832]
We introduce a deep learning-based corrector called Neural Vector (NeurVec).
NeurVec can compensate for integration errors and enable larger time step sizes in simulations.
Our experiments on a variety of complex dynamical system benchmarks demonstrate that NeurVec exhibits remarkable generalization capability.
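
Conceptually, the corrector augments a coarse explicit step with a learned error-compensation term. A hedged PyTorch sketch, where the network architecture and the way the correction enters the update are assumptions:

```python
import torch
import torch.nn as nn

class NeurVecCorrector(nn.Module):
    """Hypothetical corrector: learns the local error of a coarse integrator."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim)
        )

    def forward(self, x):
        return self.net(x)

def corrected_euler_step(f, x, dt, corrector):
    # Coarse explicit Euler update plus a learned compensation term;
    # the correction is what allows a larger dt than plain Euler tolerates.
    return x + dt * f(x) + dt * corrector(x)
```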
arXiv Detail & Related papers (2022-08-07T09:02:18Z)
- Semi-Parametric Inducing Point Networks and Neural Processes [15.948270454686197]
Semi-parametric inducing point networks (SPIN) can query the training set at inference time in a compute-efficient manner.
SPIN attains linear complexity via a cross-attention mechanism between datapoints inspired by inducing point methods.
In our experiments, SPIN reduces memory requirements, improves accuracy across a range of meta-learning tasks, and improves state-of-the-art performance on an important practical problem, genotype imputation.
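
A sketch of the inducing-point idea behind that linear complexity: m learned queries cross-attend to the n datapoints, so cost grows as O(nm) rather than O(n^2). Layer sizes here are illustrative, not SPIN's actual architecture.

```python
import torch
import torch.nn as nn

class InducingCrossAttention(nn.Module):
    """m learned inducing points attend to n datapoints: O(n*m) cost."""
    def __init__(self, dim, num_inducing=32, heads=4):
        super().__init__()
        self.inducing = nn.Parameter(torch.randn(num_inducing, dim))
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, data):                   # data: (batch, n, dim)
        q = self.inducing.unsqueeze(0).expand(data.size(0), -1, -1)
        summary, _ = self.attn(q, data, data)  # (batch, m, dim)
        return summary                         # compact view of the training set
```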
arXiv Detail & Related papers (2022-05-24T01:42:46Z)
- A Graph Deep Learning Framework for High-Level Synthesis Design Space Exploration [11.154086943903696]
High-Level Synthesis (HLS) is a solution for fast prototyping of application-specific hardware.
We propose, for the first time in the literature, graph neural networks for HLS that jointly predict acceleration performance and hardware costs.
We show that our approach achieves prediction accuracy comparable with that of commonly used simulators.
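
The joint prediction can be pictured as a shared graph encoder feeding two regression heads. In this sketch a mean-pooled MLP stands in for the paper's actual message passing; all sizes are assumptions.

```python
import torch
import torch.nn as nn

class HLSPredictor(nn.Module):
    """Shared encoder, two heads: performance (e.g. latency) and hardware cost."""
    def __init__(self, node_dim, hidden=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(node_dim, hidden), nn.ReLU())
        self.perf_head = nn.Linear(hidden, 1)
        self.cost_head = nn.Linear(hidden, 1)

    def forward(self, node_feats):                  # (num_nodes, node_dim)
        h = self.encoder(node_feats).mean(dim=0)    # mean-pooling stands in
        return self.perf_head(h), self.cost_head(h) # for real message passing
```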
arXiv Detail & Related papers (2021-11-29T18:17:45Z)
- Adaptive Anomaly Detection for Internet of Things in Hierarchical Edge Computing: A Contextual-Bandit Approach [81.5261621619557]
We propose an adaptive anomaly detection scheme with hierarchical edge computing (HEC).
We first construct multiple anomaly detection DNN models with increasing complexity, and associate each of them to a corresponding HEC layer.
Then, we design an adaptive model selection scheme that is formulated as a contextual-bandit problem and solved by using a reinforcement learning policy network.
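
The selection step can be sketched as a small policy network scoring the candidate HEC layers from context features; the reward shape and all sizes below are assumptions.

```python
import torch
import torch.nn as nn

ctx_dim, num_layers = 16, 3  # illustrative sizes
policy = nn.Sequential(nn.Linear(ctx_dim, 64), nn.ReLU(), nn.Linear(64, num_layers))

context = torch.randn(1, ctx_dim)              # features of the incoming sample
probs = torch.softmax(policy(context), dim=-1)
choice = torch.multinomial(probs, 1).item()    # which layer/model handles it

# REINFORCE-style update with an assumed reward: detection accuracy
# minus a penalty for escalating to a heavier (slower) model.
reward = 1.0                                   # placeholder reward signal
loss = -torch.log(probs[0, choice]) * reward
loss.backward()
```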
arXiv Detail & Related papers (2021-08-09T08:45:47Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDEs) is an indispensable part of many branches of science, as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
We discuss the accuracy of GatedPINN with respect to analytical solutions -- as well as state-of-the-art numerical solvers, such as spectral solvers.
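
Reading "GatedPINN" as a gated mixture-of-experts variant of a PINN, a hedged sketch of that architecture follows; the expert count, sizes, and gating form are assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class GatedPINN(nn.Module):
    """Sketch: a gating net blends several expert MLPs over the domain."""
    def __init__(self, in_dim=2, experts=4, hidden=64):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))
            for _ in range(experts)
        )
        self.gate = nn.Sequential(nn.Linear(in_dim, experts), nn.Softmax(dim=-1))

    def forward(self, x):                                      # x: (batch, in_dim)
        g = self.gate(x)                                       # (batch, experts)
        outs = torch.stack([e(x) for e in self.experts], -1)   # (batch, 1, experts)
        return (outs * g.unsqueeze(1)).sum(-1)                 # (batch, 1)
```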
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
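
The sampling idea can be sketched as weighting devices by a score that proxies their expected contribution; the gradient-norm proxy below is an assumption, not FOLB's exact criterion.

```python
import numpy as np

# One federated round: pick k of n devices non-uniformly by a per-device score.
rng = np.random.default_rng(0)
num_devices, k = 100, 10
grad_norms = rng.random(num_devices)        # placeholder per-device scores
probs = grad_norms / grad_norms.sum()
selected = rng.choice(num_devices, size=k, replace=False, p=probs)
```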
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Model-Driven Beamforming Neural Networks [47.754731555563836]
This article introduces general data- and model-driven beamforming neural networks (BNNs).
It presents various possible learning strategies, and also discusses complexity reduction for the DL-based BNNs.
We also offer enhancement methods such as training-set augmentation and transfer learning in order to improve the generality of BNNs.
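
A data-driven BNN can be pictured as a network mapping channel coefficients to power-normalized beamforming weights. The real/imaginary stacking, sizes, and normalization below are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BeamformingNet(nn.Module):
    """Sketch: map stacked real/imag channel coefficients to beam weights."""
    def __init__(self, num_antennas, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * num_antennas, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * num_antennas),
        )

    def forward(self, h, power=1.0):                 # h: (batch, 2*num_antennas)
        w = self.net(h)
        norm = w.norm(dim=-1, keepdim=True).clamp_min(1e-9)
        return (power ** 0.5) * w / norm             # enforce ||w||^2 = power
```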
arXiv Detail & Related papers (2020-01-15T12:50:09Z)