PRDP: Progressively Refined Differentiable Physics
- URL: http://arxiv.org/abs/2502.19611v1
- Date: Wed, 26 Feb 2025 22:56:56 GMT
- Title: PRDP: Progressively Refined Differentiable Physics
- Authors: Kanishk Bhatia, Felix Koehler, Nils Thuerey
- Abstract summary: We show that full accuracy of the network is achievable through physics significantly coarser than fully converged solvers. We propose Progressively Refined Differentiable Physics (PRDP), an approach that identifies the level of physics refinement sufficient for full training accuracy.
- Score: 18.076285588021868
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The physics solvers employed for neural network training are primarily iterative, and hence, differentiating through them introduces a severe computational burden as iterations grow large. Inspired by works in bilevel optimization, we show that full accuracy of the network is achievable through physics significantly coarser than fully converged solvers. We propose Progressively Refined Differentiable Physics (PRDP), an approach that identifies the level of physics refinement sufficient for full training accuracy. By beginning with coarse physics, adaptively refining it during training, and stopping refinement at the level adequate for training, it enables significant compute savings without sacrificing network accuracy. Our focus is on differentiating iterative linear solvers for sparsely discretized differential operators, which are fundamental to scientific computing. PRDP is applicable to both unrolled and implicit differentiation. We validate its performance on a variety of learning scenarios involving differentiable physics solvers such as inverse problems, autoregressive neural emulators, and correction-based neural-hybrid solvers. In the challenging example of emulating the Navier-Stokes equations, we reduce training time by 62%.
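As a minimal illustration of the idea (not the authors' implementation: the Jacobi solver, the theta-dependent operator, and the refinement criterion below are all assumptions of this sketch), one can unroll a fixed number of solver iterations, differentiate through them, and raise the iteration count, i.e. the physics refinement level, only when training progress at the current level stalls:

```python
# Hypothetical PRDP-style sketch in JAX: unrolled differentiation through a
# Jacobi iteration whose iteration count is progressively refined.
import jax
import jax.numpy as jnp

def jacobi_solve(theta, b, n_iters):
    # Unrolled Jacobi sweeps for the tridiagonal system A(theta) u = b,
    # A = tridiag(-1, 2 + theta, -1); gradients flow through every sweep.
    u = jnp.zeros_like(b)
    for _ in range(n_iters):
        left = jnp.roll(u, 1).at[0].set(0.0)     # u_{i-1}, zero Dirichlet boundary
        right = jnp.roll(u, -1).at[-1].set(0.0)  # u_{i+1}, zero Dirichlet boundary
        u = (b + left + right) / (2.0 + theta)
    return u

def loss(theta, b, u_target, n_iters):
    return jnp.mean((jacobi_solve(theta, b, n_iters) - u_target) ** 2)

b = jnp.ones(32)
u_target = jacobi_solve(0.5, b, 2000)      # reference from a well-converged solve

theta, n_iters, prev = 0.0, 2, jnp.inf     # begin with very coarse physics
for step in range(400):
    l, g = jax.value_and_grad(loss)(theta, b, u_target, n_iters)
    theta = theta - 0.2 * g
    # Progressive refinement (illustrative criterion): add solver iterations
    # only when the loss has stopped improving at the current coarseness.
    if step % 25 == 24 and l > 0.95 * prev:
        n_iters += 2
    prev = l
```

Because early training proceeds with only a handful of solver sweeps per gradient step, the compute saving comes from never paying for fully converged solves that the network does not yet need.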
Related papers
- Enabling Automatic Differentiation with Mollified Graph Neural Operators [75.3183193262225]
We propose the mollified graph neural operator (mGNO), the first method to leverage automatic differentiation and compute exact gradients on arbitrary geometries.
For a PDE example on regular grids, mGNO paired with autograd reduced the L2 relative data error by 20x compared to finite differences.
It can also solve PDEs on unstructured point clouds seamlessly, using physics losses only, at resolutions vastly lower than those needed for finite differences to be accurate enough.
arXiv Detail & Related papers (2025-04-11T06:16:30Z)
- Metamizer: a versatile neural optimizer for fast and accurate physics simulations [4.717325308876749]
We introduce Metamizer, a novel neural network that iteratively solves a wide range of physical systems with high accuracy.
We demonstrate that Metamizer achieves unprecedented accuracy for deep learning based approaches.
Our results suggest that Metamizer could have a profound impact on future numerical solvers.
arXiv Detail & Related papers (2024-10-10T11:54:31Z)
- Differentiability in Unrolled Training of Neural Physics Simulators on Transient Dynamics [22.40149186064481]
Unrolling training trajectories over time influences the inference accuracy of neural network-augmented physics simulators.
We present a study across physical systems, network sizes and architectures, training setups, and test scenarios.
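A minimal sketch of what such an unrolled training loss looks like (the toy dynamics and all names here are assumptions, not the paper's setup):

```python
# Hypothetical unrolled-training sketch in JAX: the loss is accumulated over
# an m-step trajectory, so gradients are backpropagated through repeated
# applications of a differentiable solver step.
import jax
import jax.numpy as jnp

def sim_step(u, theta):
    return u + 0.1 * theta * u           # stand-in for one differentiable solver step

def unrolled_loss(theta, u0, targets):
    u, total = u0, 0.0
    for target in targets:               # unrolling couples gradients across steps
        u = sim_step(u, theta)
        total = total + jnp.mean((u - target) ** 2)
    return total / len(targets)

u0 = jnp.ones(8)
true_theta = 0.7
targets = [u0 * (1.0 + 0.1 * true_theta) ** (k + 1) for k in range(5)]

theta = 0.0
for _ in range(200):
    theta = theta - 0.5 * jax.grad(unrolled_loss)(theta, u0, targets)
# theta now recovers true_theta; longer unrollings change the gradient's
# character, which is the effect the paper studies.
```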
arXiv Detail & Related papers (2024-02-20T12:40:31Z)
- Evolutionary Optimization of Physics-Informed Neural Networks: Advancing Generalizability by the Baldwin Effect [22.57730294475146]
Physics-informed neural networks (PINNs) are at the forefront of scientific machine learning.
This paper proposes a pioneering approach to advance the generalizability of PINNs through the framework of Baldwinian evolution.
arXiv Detail & Related papers (2023-12-06T02:31:12Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs are prone to training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
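A minimal sketch of the implicit update (assumption-laden; the paper's inner solver and PINN loss are stood in for here by a fixed-point iteration and a quadratic toy loss):

```python
# Hypothetical ISGD sketch in JAX: the implicit step solves
# theta_next = theta - lr * grad(L)(theta_next), here by fixed-point
# iteration, instead of the explicit update theta - lr * grad(L)(theta).
import jax
import jax.numpy as jnp

def loss(theta, x, y):
    return jnp.mean((theta * x - y) ** 2)    # stand-in for a PINN residual loss

grad_loss = jax.grad(loss)

def isgd_step(theta, lr, x, y, n_inner=10):
    theta_next = theta                       # warm-start at the current iterate
    for _ in range(n_inner):                 # fixed-point solve of the implicit update
        theta_next = theta - lr * grad_loss(theta_next, x, y)
    return theta_next

x = jnp.linspace(0.0, 1.0, 64)
y = 3.0 * x
theta = 0.0
for _ in range(60):
    theta = isgd_step(theta, lr=0.5, x=x, y=y)
# theta converges toward 3.0; the implicit step damps the update, which is
# the source of ISGD's improved training stability.
```

Note that the fixed-point iteration above only converges for moderate step sizes; a stronger inner solver would be needed to exploit the full stability of the implicit update.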
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Half-Inverse Gradients for Physical Deep Learning [25.013244956897832]
Integrating differentiable physics simulators into the training process can greatly improve the quality of results.
These physics solvers have a profound effect on the gradient flow, since manipulating scales in magnitude and direction is an inherent property of many physical processes.
In this work, we analyze the characteristics of both physical and neural network optimizations to derive a new method that does not suffer from this phenomenon.
arXiv Detail & Related papers (2022-03-18T19:11:04Z)
- Efficient training of physics-informed neural networks via importance sampling [2.9005223064604078]
Physics-informed neural networks (PINNs) are a class of deep neural networks that are trained to compute the response of systems governed by partial differential equations (PDEs).
We show that an importance sampling approach improves the convergence behavior of PINN training.
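A minimal sketch of residual-proportional sampling of collocation points (an assumption of this sketch, not necessarily the paper's exact scheme):

```python
# Hypothetical importance-sampling sketch in JAX: collocation points are drawn
# with probability proportional to the current residual magnitude, and the
# loss terms are reweighted so the estimate of the pooled loss stays unbiased.
import jax
import jax.numpy as jnp

def residual(theta, x):
    # Stand-in PDE residual for u'(x) = u(x) with the ansatz u(x) = exp(theta * x)
    u = lambda t, z: jnp.exp(t * z)
    du = jax.vmap(jax.grad(u, argnums=1), in_axes=(None, 0))(theta, x)
    return du - u(theta, x)

def weighted_loss(theta, x, w):
    return jnp.mean(w * residual(theta, x) ** 2)

pool = jnp.linspace(0.0, 1.0, 256)            # candidate collocation points
key, theta = jax.random.PRNGKey(0), 0.2
for step in range(200):
    key, sub = jax.random.split(key)
    weights = jnp.abs(residual(theta, pool)) + 1e-6
    probs = weights / weights.sum()
    idx = jax.random.choice(sub, pool.shape[0], shape=(32,), p=probs)
    w = 1.0 / (pool.shape[0] * probs[idx])    # importance weights keep the estimate unbiased
    theta = theta - 0.1 * jax.grad(weighted_loss)(theta, pool[idx], w)
# theta approaches 1.0, where u' = u holds exactly; high-residual regions are
# sampled more often, which is the claimed source of faster convergence.
```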
arXiv Detail & Related papers (2021-04-26T02:45:10Z)
- PlasticineLab: A Soft-Body Manipulation Benchmark with Differentiable Physics [89.81550748680245]
We introduce a new differentiable physics benchmark called PlasticineLab.
In each task, the agent uses manipulators to deform the plasticine into the desired configuration.
We evaluate several existing reinforcement learning (RL) methods and gradient-based methods on this benchmark.
arXiv Detail & Related papers (2021-04-07T17:59:23Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
arXiv Detail & Related papers (2021-02-20T23:26:58Z)