MILP initialization for solving parabolic PDEs with PINNs
- URL: http://arxiv.org/abs/2501.16153v1
- Date: Mon, 27 Jan 2025 15:46:38 GMT
- Title: MILP initialization for solving parabolic PDEs with PINNs
- Authors: Sirui Li, Federica Bragone, Matthieu Barreau, Kateryna Morozovska
- Abstract summary: Physics-Informed Neural Networks (PINNs) are a powerful deep learning method capable of providing solutions and parameter estimates for physical systems.
Owing to the complexity of their neural network structure, their convergence is still slow compared to numerical methods.
- Score: 2.5932373010465364
- Abstract: Physics-Informed Neural Networks (PINNs) are a powerful deep learning method capable of providing solutions and parameter estimates for physical systems. Owing to the complexity of their neural network structure, however, their convergence is still slow compared to numerical methods, especially in applications that model realistic systems. As in traditional neural networks, the initial weights are drawn from a random distribution, which can lead to severe convergence bottlenecks. To overcome this problem, we build on recent work on optimal initial weights for traditional neural networks. In this paper, we use a convex optimization model to improve the initialization of the weights in PINNs and accelerate convergence. We investigate two optimization models as a first training step, referred to as pre-training: one involving only the boundary conditions and one that also includes the physics. The optimization targets the first layer of the neural network part of the PINN model, while the remaining weights are initialized randomly. We test the methods on a practical application of the heat diffusion equation: modeling the temperature distribution of power transformers. At the current stage, the PINN model with boundary pre-training is the fastest-converging method.
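To make the pre-training idea concrete, the following is a minimal sketch of what a boundary-only pre-training of the first layer could look like. Everything here is an illustrative assumption, not the paper's formulation: one hidden layer, a tanh activation linearized around zero so the boundary fit becomes linear least squares in the first-layer weights, and plain NumPy in place of the paper's convex optimization model.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 2, 32                     # hypothetical sizes; inputs (x, t)

# Weights we do NOT pre-train keep their random initialization.
W2 = rng.normal(size=(1, n_hidden)) / np.sqrt(n_hidden)

# Boundary/initial-condition samples X_b with toy target values u_b.
X_b = rng.uniform(0.0, 1.0, size=(200, n_in))
u_b = np.sin(np.pi * X_b[:, :1])

# With tanh(z) ~= z near zero, u(x) ~= W2 (W1 x) is linear in W1, so the
# boundary fit is ordinary least squares via vec(ABC) = (C^T kron A) vec(B).
A = np.kron(W2, X_b)                       # shape (200, n_hidden * n_in)
w1_vec, *_ = np.linalg.lstsq(A, u_b.ravel(), rcond=None)
W1 = w1_vec.reshape(n_hidden, n_in)        # pre-trained first layer

print("first-layer weights:", W1.shape)    # hand W1 to the PINN, then train
```

The paper's boundary-only and physics-including variants differ in detail; the sketch only shows how cheaply a first-layer fit can precede gradient training.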
Related papers
- Scalable Imaginary Time Evolution with Neural Network Quantum States [0.0]
The representation of a quantum wave function as a neural network quantum state (NQS) provides a powerful variational ansatz for finding the ground states of many-body quantum systems.
We introduce an approach that bypasses the computation of the metric tensor and instead relies exclusively on first-order descent with the Euclidean metric.
We make this method adaptive and stable by determining the optimal time step and keeping the target fixed until the energy of the NQS decreases.
arXiv Detail & Related papers (2023-07-28T12:26:43Z)
- Speed Limits for Deep Learning [67.69149326107103]
Recent advances in thermodynamics allow bounding the speed at which one can go from the initial weight distribution to the final distribution of the fully trained network.
We provide analytical expressions for these speed limits for linear and linearizable neural networks.
Remarkably, under some plausible scaling assumptions on the NTK spectra and the spectral decomposition of the labels, learning is optimal in a scaling sense.
arXiv Detail & Related papers (2023-07-27T06:59:46Z)
- Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many neural-network approaches learn an end-to-end model that implicitly captures both the governing PDE and the material model.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Transfer Learning with Physics-Informed Neural Networks for Efficient Simulation of Branched Flows [1.1470070927586016]
Physics-Informed Neural Networks (PINNs) offer a promising approach to solving differential equations.
We adopt a recently developed transfer learning approach for PINNs and introduce a multi-head model.
We show that our methods provide significant computational speedups in comparison to standard PINNs trained from scratch.
arXiv Detail & Related papers (2022-11-01T01:50:00Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced-order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced-order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
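A toy illustration (my construction, not the paper's code) of the reduced-order ingredient in the entry above: a POD basis is extracted from solution snapshots via an SVD, after which a network only has to regress a handful of POD coefficients instead of the full field.

```python
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.random((128, 50))          # 50 snapshots of a 128-dof field

# POD basis = leading left singular vectors; keep modes for 99% energy.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1
basis = U[:, :r]                           # (128, r) POD basis

coeffs = basis.T @ snapshots               # regression targets for the networks
recon = basis @ coeffs                     # reduced-order reconstruction
print(r, np.linalg.norm(snapshots - recon) / np.linalg.norm(snapshots))
```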
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution with a Gaussian-smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
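To unpack the entry above: for the Gaussian-smoothed model f_sigma(x) = E_{eps ~ N(0, I)}[f(x + sigma * eps)], Stein's identity gives the Hessian as an expectation over function values only, Hess f_sigma(x) = E[f(x + sigma * eps)(eps eps^T - I)] / sigma^2, so no back-propagation is needed. A self-contained Monte Carlo sketch (a toy check, not the paper's code):

```python
import numpy as np

def stein_hessian(f, x, sigma=0.1, n_samples=200_000, seed=0):
    """Estimate the Hessian of the Gaussian-smoothed f at x via Stein's identity."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    eps = rng.normal(size=(n_samples, d))
    fx = f(x + sigma * eps) - f(x[None, :])       # control variate cuts variance
    outer = np.einsum("ni,nj->nij", eps, eps) - np.eye(d)
    return np.einsum("n,nij->ij", fx, outer) / (n_samples * sigma**2)

# Sanity check on f(x) = x0^2 + 3*x0*x1, whose Hessian is [[2, 3], [3, 0]].
f = lambda X: X[:, 0] ** 2 + 3.0 * X[:, 0] * X[:, 1]
print(stein_hessian(f, np.array([0.5, -0.2])))
```

Subtracting f(x) is an unbiased control variate because E[eps eps^T - I] = 0; it only reduces the Monte Carlo variance.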
- On feedforward control using physics-guided neural networks: Training cost regularization and optimized initialization [0.0]
Performance of model-based feedforward controllers is typically limited by the accuracy of the inverse system dynamics model.
This paper proposes a regularization method via identified physical parameters.
It is validated on a real-life industrial linear motor, where it delivers better tracking accuracy and extrapolation.
arXiv Detail & Related papers (2022-01-28T12:51:25Z)
- Acceleration techniques for optimization over trained neural network ensembles [1.0323063834827415]
We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit activation.
We present a mixed-integer linear program based on existing popular big-$M$ formulations for optimizing over a single neural network.
arXiv Detail & Related papers (2021-12-13T20:50:54Z)
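Since the main paper's title also invokes MILP, it helps to spell out the big-M building block that formulations like the one above use to encode a trained ReLU unit y = max(0, w*x + b) exactly. A sketch with PuLP; the values of w, b, M and the input bounds are illustrative assumptions, not taken from either paper:

```python
import pulp

w, b, M = 2.0, 0.5, 10.0                  # M must bound |w*x + b| on the domain

prob = pulp.LpProblem("relu_big_M", pulp.LpMaximize)
x = pulp.LpVariable("x", lowBound=-1, upBound=1)
y = pulp.LpVariable("y", lowBound=0)
z = pulp.LpVariable("z", cat="Binary")    # z = 1 iff the ReLU is active

pre = w * x + b                           # pre-activation, affine in x
prob += y >= pre                          # y >= w*x + b
prob += y <= pre + M * (1 - z)            # tight when z = 1
prob += y <= M * z                        # forces y = 0 when z = 0
prob += y                                 # objective: maximize the ReLU output

prob.solve(pulp.PULP_CBC_CMD(msg=0))
print(pulp.value(x), pulp.value(y))       # expect x = 1.0, y = 2.5
```

At any feasible point these constraints pin y to max(0, w*x + b) provided M is a valid bound; stacking one such block per neuron yields a MILP over the whole trained network.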
- Physics-Informed Neural Network Method for Solving One-Dimensional Advection Equation Using PyTorch [0.0]
The PINN approach allows training neural networks while respecting the PDEs as a strong constraint in the optimization.
In standard small-scale circulation simulations, the conventional approach is shown to incorporate a pseudo-diffusive effect that is almost as large as the effect of the turbulent diffusion model.
Of all the schemes tested, only the PINNs approximation accurately predicted the outcome.
arXiv Detail & Related papers (2021-03-15T05:39:17Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with nonconvexity renders learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are initialized with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)