DOF: Accelerating High-order Differential Operators with Forward
Propagation
- URL: http://arxiv.org/abs/2402.09730v1
- Date: Thu, 15 Feb 2024 05:59:21 GMT
- Title: DOF: Accelerating High-order Differential Operators with Forward
Propagation
- Authors: Ruichen Li, Chuwei Wang, Haotian Ye, Di He, Liwei Wang
- Abstract summary: We propose an efficient framework, Differential Operator with Forward-propagation (DOF), for calculating general second-order differential operators without losing any precision.
We demonstrate a twofold improvement in efficiency and reduced memory consumption on any architecture.
Empirical results illustrate that our method surpasses traditional automatic differentiation (AutoDiff) techniques, achieving a 2x improvement on the MLP structure and nearly a 20x improvement on the MLP with Jacobian sparsity.
- Score: 40.71528485918067
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Solving partial differential equations (PDEs) efficiently is essential for
analyzing complex physical systems. Recent advancements in leveraging deep
learning for solving PDE have shown significant promise. However, machine
learning methods, such as Physics-Informed Neural Networks (PINN), face
challenges in handling high-order derivatives of neural network-parameterized
functions. Inspired by Forward Laplacian, a recent method of accelerating
Laplacian computation, we propose an efficient computational framework,
Differential Operator with Forward-propagation (DOF), for calculating general
second-order differential operators without losing any precision. We provide
rigorous proof of the advantages of our method over existing methods,
demonstrating a twofold improvement in efficiency and reduced memory
consumption on any architecture. Empirical results illustrate that our method
surpasses traditional automatic differentiation (AutoDiff) techniques,
achieving 2x improvement on the MLP structure and nearly 20x improvement on the
MLP with Jacobian sparsity.
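To make the setting concrete, the sketch below computes the Laplacian of a small MLP using only forward-mode differentiation (nested jvp calls in JAX), the kind of second-order computation that DOF accelerates. It is a naive forward-over-forward baseline for illustration only, not the DOF algorithm itself; the network, its sizes, and all names are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    # Toy two-layer MLP with tanh activation; stands in for the
    # network-parameterized function whose Laplacian we want.
    (W1, b1), (W2, b2) = params
    h = jnp.tanh(W1 @ x + b1)
    return (W2 @ h + b2).squeeze()

def laplacian_forward(f, x):
    # Purely forward-mode Laplacian: for each coordinate direction e_i,
    # take a jvp of a jvp to get the second directional derivative
    # d^2 f / dx_i^2, then sum over i. No reverse pass is used.
    d = x.shape[0]
    eye = jnp.eye(d)

    def second_dir(v):
        # Inner jvp gives the first directional derivative along v;
        # the outer jvp differentiates it once more along v.
        df = lambda y: jax.jvp(f, (y,), (v,))[1]
        return jax.jvp(df, (x,), (v,))[1]

    return jnp.sum(jax.vmap(second_dir)(eye))

key = jax.random.PRNGKey(0)
d, hidden = 4, 16
params = (
    (jax.random.normal(key, (hidden, d)), jnp.zeros(hidden)),
    (jax.random.normal(key, (1, hidden)), jnp.zeros(1)),
)
x = jnp.ones(d)
f = lambda y: mlp(params, y)

print(laplacian_forward(f, x))        # forward-mode result
print(jnp.trace(jax.hessian(f)(x)))   # reference: trace of the Hessian
```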
Related papers
- A Stochastic Approach to Bi-Level Optimization for Hyperparameter Optimization and Meta Learning [74.80956524812714]
We tackle the general differentiable meta learning problem that is ubiquitous in modern deep learning.
These problems are often formalized as Bi-Level optimizations (BLO)
We introduce a novel perspective by turning a given BLO problem into a stochastic optimization problem, where the inner loss function becomes a smooth distribution, and the outer loss becomes an expected loss over the inner distribution.
arXiv Detail & Related papers (2024-10-14T12:10:06Z) - A Physics-Informed Machine Learning Approach for Solving Distributed Order Fractional Differential Equations [0.0]
This paper introduces a novel methodology for solving distributed-order fractional differential equations using a physics-informed machine learning framework.
By embedding the distributed-order functional equation into the support vector regression (SVR) framework, we incorporate physical laws directly into the learning process.
The effectiveness of the proposed approach is validated through a series of numerical experiments on Caputo-based distributed-order fractional differential equations.
arXiv Detail & Related papers (2024-09-05T13:20:10Z) - Accelerating Fractional PINNs using Operational Matrices of Derivative [0.24578723416255746]
This paper presents a novel operational matrix method to accelerate the training of fractional Physics-Informed Neural Networks (fPINNs)
Our approach involves a non-uniform discretization of the fractional Caputo operator, facilitating swift computation of fractional derivatives within Caputo-type fractional differential problems with $0 < \alpha < 1$.
The effectiveness of our proposed method is validated across diverse differential equations, including Delay Differential Equations (DDEs) and Systems of Differential Algebraic Equations (DAEs)
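For reference, the Caputo derivative of order $0 < \alpha < 1$ that such methods discretize can be approximated on a uniform grid with the standard L1 finite-difference scheme, sketched below. This is a plain reference discretization, not the operational-matrix or non-uniform construction of the paper; the grid, test function, and names are illustrative assumptions.

```python
import math
import jax.numpy as jnp

def caputo_l1(u, dt, alpha):
    # Standard L1 scheme for the Caputo derivative of order 0 < alpha < 1
    # on a uniform grid t_k = k * dt, given samples u = [u(t_0), ..., u(t_N)].
    n = u.shape[0] - 1
    j = jnp.arange(n)                                   # 0, 1, ..., n-1
    b = (j + 1.0) ** (1.0 - alpha) - j ** (1.0 - alpha)
    diffs = u[1:] - u[:-1]                              # u_{k+1} - u_k
    # b_j multiplies the increment u_{n-j} - u_{n-j-1}, i.e. diffs reversed.
    coef = dt ** (-alpha) / math.gamma(2.0 - alpha)
    return coef * jnp.sum(b * diffs[::-1])

# Sanity check on u(t) = t: its Caputo derivative is t^{1-alpha} / Gamma(2-alpha).
alpha, dt, N = 0.5, 1e-3, 1000
t = jnp.arange(N + 1) * dt
approx = caputo_l1(t, dt, alpha)
exact = t[-1] ** (1.0 - alpha) / math.gamma(2.0 - alpha)
print(approx, exact)
```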
arXiv Detail & Related papers (2024-01-25T11:00:19Z) - Accelerated primal-dual methods with enlarged step sizes and operator
learning for nonsmooth optimal control problems [3.1006429989273063]
We focus on the application of a primal-dual method, with which different types of variables can be treated individually.
The accelerated primal-dual method with enlarged step sizes is proved to converge rigorously while numerically accelerating the original primal-dual method.
For the operator learning acceleration, we construct deep neural network surrogate models for the involved PDEs.
arXiv Detail & Related papers (2023-07-01T10:39:07Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
However, PINNs suffer from training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
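As a rough illustration of an implicit (backward-Euler) gradient step, the sketch below solves params_new = params - lr * grad(loss)(params_new) by fixed-point iteration on a toy quadratic loss; the paper's ISGD solver and its application to PINN losses may differ, so the update rule, loop count, and loss here are assumptions for demonstration.

```python
import jax
import jax.numpy as jnp

def implicit_sgd_step(loss_fn, params, lr, n_inner=20):
    # One implicit gradient step,
    #   params_new = params - lr * grad(loss)(params_new),
    # solved by simple fixed-point iteration on params_new.
    grad_fn = jax.grad(loss_fn)
    new = params
    for _ in range(n_inner):
        new = params - lr * grad_fn(new)
    return new

# Toy quadratic loss standing in for a PINN residual loss.
loss = lambda p: 0.5 * jnp.sum(p ** 2)
params = jnp.array([1.0, -2.0, 3.0])
print(implicit_sgd_step(loss, params, lr=0.5))   # an explicit step would give 0.5 * params
```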
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
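The sketch below illustrates the underlying idea with a Monte Carlo estimate of the Laplacian of a Gaussian-smoothed model via Stein's identity, using only forward evaluations of the network. The constants follow the standard Gaussian-smoothing identity, and the antithetic variance reduction is an assumption; this is not the paper's exact estimator.

```python
import jax
import jax.numpy as jnp

def smoothed_laplacian(f, x, sigma=0.1, n_samples=4096, key=jax.random.PRNGKey(0)):
    # Laplacian of the Gaussian-smoothed model
    #   u(x) = E_{delta ~ N(0, sigma^2 I)}[ f(x + delta) ]
    # via Stein's identity: no back-propagation, only evaluations of f.
    # Antithetic pairs (+delta, -delta) are used to reduce variance.
    d = x.shape[0]
    delta = sigma * jax.random.normal(key, (n_samples, d))
    f_plus = jax.vmap(lambda dl: f(x + dl))(delta)
    f_minus = jax.vmap(lambda dl: f(x - dl))(delta)
    weights = (jnp.sum(delta ** 2, axis=-1) - d * sigma ** 2) / sigma ** 4
    return jnp.mean(weights * (f_plus + f_minus - 2.0 * f(x)) / 2.0)

# Toy check on f(y) = ||y||^2, whose Laplacian is 2 * d.
f = lambda y: jnp.sum(y ** 2)
x = jnp.array([0.3, -0.7, 1.1])
print(smoothed_laplacian(f, x))   # should be close to 6 for d = 3
```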
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent developments in deep learning-based approaches have achieved sub-second runtimes for diffeomorphic image registration (DiffIR).
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-09-26T19:56:45Z) - Efficiently Solving High-Order and Nonlinear ODEs with Rational Fraction
Polynomial: the Ratio Net [3.155317790896023]
This study takes a different approach by introducing a neural network architecture for constructing trial functions, known as the ratio net.
Empirical trials demonstrate that the proposed method exhibits higher efficiency than existing approaches.
The ratio net holds promise for advancing the efficiency and effectiveness of solving differential equations.
arXiv Detail & Related papers (2021-05-18T16:59:52Z) - Efficient Learning of Generative Models via Finite-Difference Score
Matching [111.55998083406134]
We present a generic strategy to efficiently approximate any-order directional derivative with finite difference.
Our approximation only involves function evaluations, which can be executed in parallel, and no gradient computations.
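As a minimal illustration, the sketch below approximates first- and second-order directional derivatives with central finite differences using only three function evaluations; the paper's strategy generalizes this to arbitrary orders, so the stencil, step size, and test function here are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

jax.config.update("jax_enable_x64", True)  # double precision keeps the second difference accurate

def directional_derivs_fd(f, x, v, eps=1e-4):
    # Central finite-difference approximations of the first and second
    # directional derivatives of f along v; only function evaluations,
    # no gradient computations, and the evaluations could run in parallel.
    f_plus, f_0, f_minus = f(x + eps * v), f(x), f(x - eps * v)
    d1 = (f_plus - f_minus) / (2.0 * eps)              # ~ v . grad f(x)
    d2 = (f_plus - 2.0 * f_0 + f_minus) / eps ** 2     # ~ v^T Hess f(x) v
    return d1, d2

f = lambda y: jnp.sum(jnp.sin(y))
x = jnp.array([0.1, 0.2, 0.3])
v = jnp.array([1.0, 0.0, 0.0])
print(directional_derivs_fd(f, x, v))
# Exact values for comparison: cos(0.1) ~ 0.9950 and -sin(0.1) ~ -0.0998
```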
arXiv Detail & Related papers (2020-07-07T10:05:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.