Efficient Neural PDE-Solvers using Quantization Aware Training
- URL: http://arxiv.org/abs/2308.07350v1
- Date: Mon, 14 Aug 2023 09:21:19 GMT
- Title: Efficient Neural PDE-Solvers using Quantization Aware Training
- Authors: Winfried van den Dool, Tijmen Blankevoort, Max Welling, Yuki M. Asano
- Abstract summary: We show that quantization can successfully lower the computational cost of inference while maintaining performance.
Our results on four standard PDE datasets and three network architectures show that quantization-aware training works across settings and across three orders of magnitude in FLOPs.
- Score: 71.0934372968972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the past years, the application of neural networks as an alternative to
classical numerical methods to solve Partial Differential Equations has emerged
as a potential paradigm shift in this century-old mathematical field. However,
in terms of practical applicability, computational cost remains a substantial
bottleneck. Classical approaches try to mitigate this challenge by limiting the
spatial resolution on which the PDEs are defined. For neural PDE solvers, we
can do better: Here, we investigate the potential of state-of-the-art
quantization methods for reducing computational costs. We show that quantizing
the network weights and activations can successfully lower the computational
cost of inference while maintaining performance. Our results on four standard
PDE datasets and three network architectures show that quantization-aware
training works across settings and across three orders of magnitude in FLOPs. Finally,
we empirically demonstrate that Pareto-optimality of computational cost vs
performance is almost always achieved only by incorporating quantization.
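To make the mechanism concrete, here is a minimal sketch of quantization-aware training as it is commonly implemented: weights and activations pass through a fake-quantization op in the forward pass, while a straight-through estimator (STE) lets gradients bypass the non-differentiable rounding. The bit width, layer sizes, and usage below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class FakeQuant(torch.autograd.Function):
    """Uniform fake quantization with a straight-through estimator (STE)."""
    @staticmethod
    def forward(ctx, x, num_bits=8):
        qmax = 2 ** (num_bits - 1) - 1
        scale = x.abs().max().clamp(min=1e-8) / qmax
        # Round onto the integer grid, then map back to float ("fake" quantization).
        return torch.round(x / scale).clamp(-qmax - 1, qmax) * scale

    @staticmethod
    def backward(ctx, grad_output):
        # STE: pass the gradient through the rounding op unchanged.
        return grad_output, None

class QuantLinear(nn.Linear):
    """Linear layer whose weights and inputs are fake-quantized during training."""
    def __init__(self, in_features, out_features, num_bits=8):
        super().__init__(in_features, out_features)
        self.num_bits = num_bits

    def forward(self, x):
        w_q = FakeQuant.apply(self.weight, self.num_bits)
        x_q = FakeQuant.apply(x, self.num_bits)
        return nn.functional.linear(x_q, w_q, self.bias)

# Illustrative usage: an INT8-simulated layer inside an otherwise ordinary model.
layer = QuantLinear(64, 64, num_bits=8)
layer(torch.randn(16, 64)).sum().backward()  # gradients flow via the STE
```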
Related papers
- Quantifying Training Difficulty and Accelerating Convergence in Neural Network-Based PDE Solvers [9.936559796069844]
We investigate the training dynamics of neural network-based PDE solvers.
We find that two techniques, partition of unity (PoU) and variance scaling (VS), enhance the effective rank.
Experiments using popular PDE-solving frameworks, such as PINN, Deep Ritz, and the operator learning framework DeepONet, confirm that these techniques consistently speed up convergence.
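For a concrete handle on the quantity being improved, below is one standard definition of effective rank (Roy & Vetterli, 2007), the exponential of the entropy of the normalized singular values; the paper's exact diagnostic may differ.

```python
import numpy as np

def effective_rank(features: np.ndarray, eps: float = 1e-12) -> float:
    """Effective rank: exp of the Shannon entropy of the normalized spectrum."""
    s = np.linalg.svd(features, compute_uv=False)
    p = s / (s.sum() + eps)                 # normalize singular values
    return float(np.exp(-(p * np.log(p + eps)).sum()))

# Illustrative usage on features of shape (num_collocation_points, hidden_dim).
print(effective_rank(np.random.randn(1024, 128)))  # near 128 for well-spread features
```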
arXiv Detail & Related papers (2024-10-08T19:35:19Z)
- Predicting Probabilities of Error to Combine Quantization and Early Exiting: QuEE [68.6018458996143]
We propose QuEE, a more general dynamic network that can combine both quantization and early exiting.
Our algorithm can be seen as a form of soft early exiting or input-dependent compression.
The crucial factor of our approach is accurate prediction of the potential accuracy improvement achievable through further computation.
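As a loose illustration of the early-exiting half of this idea, the sketch below stops as soon as a confidence threshold is met; QuEE's gate instead predicts the accuracy improvement achievable through further computation, and all names and sizes here are hypothetical.

```python
import torch
import torch.nn as nn

class EarlyExitNet(nn.Module):
    """Input-dependent early exiting: an exit head after each block decides
    whether further computation is worth its cost (confidence stands in for
    QuEE's learned accuracy-improvement predictor)."""
    def __init__(self, dim=64, num_classes=10, threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(3))
        self.exits = nn.ModuleList(nn.Linear(dim, num_classes) for _ in range(3))
        self.threshold = threshold

    @torch.no_grad()
    def forward(self, x):                       # x: (1, dim), one sample at a time
        for block, exit_head in zip(self.blocks, self.exits):
            x = block(x)
            logits = exit_head(x)
            if logits.softmax(-1).max().item() >= self.threshold:
                return logits                   # confident enough: stop early
        return logits                           # deepest exit

print(EarlyExitNet()(torch.randn(1, 64)).shape)  # torch.Size([1, 10])
```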
arXiv Detail & Related papers (2024-06-20T15:25:13Z)
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
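For context, the classical Walk-on-Spheres estimator that NWoS accelerates can be sketched in a few lines; the domain, boundary data, and tolerances below are illustrative, and the paper replaces the Monte Carlo average with a trained network.

```python
import numpy as np

rng = np.random.default_rng(0)

def walk_on_spheres(x0, g, dist_to_boundary, eps=1e-4, n_walks=5000):
    """Estimate u(x0) for the Laplace equation with Dirichlet data g:
    repeatedly jump to a uniform point on the largest sphere inside the
    domain until within eps of the boundary, then read off g."""
    total = 0.0
    for _ in range(n_walks):
        x = np.array(x0, dtype=float)
        while (r := dist_to_boundary(x)) >= eps:
            theta = rng.uniform(0.0, 2.0 * np.pi)
            x = x + r * np.array([np.cos(theta), np.sin(theta)])
        total += g(x)
    return total / n_walks

# Unit disk with boundary data g(x, y) = x; the harmonic solution is u(x, y) = x.
dist = lambda x: 1.0 - np.linalg.norm(x)
print(walk_on_spheres([0.3, 0.2], lambda x: x[0], dist))  # near 0.3
```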
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
- Constrained or Unconstrained? Neural-Network-Based Equation Discovery from Data [0.0]
We represent the PDE as a neural network and use an intermediate state representation similar to a Physics-Informed Neural Network (PINN).
We present a penalty method and a widely used trust-region barrier method to solve this constrained optimization problem.
Our results on the Burgers' and the Korteweg-de Vries equations demonstrate that the trust-region barrier method outperforms the penalty method.
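A toy sketch of the penalty method named above: the equality constraint is folded into the objective as mu * ||c(theta)||^2 with mu increased between rounds. The actual paper couples a data-fitting loss with PDE-residual constraints; everything below is a simplified stand-in.

```python
import torch

def penalty_minimize(loss_fn, constraint_fn, theta, mu=1.0, rounds=5, steps=200):
    """Penalty method: solve a sequence of unconstrained problems with a
    growing weight mu on the squared constraint violation."""
    for _ in range(rounds):
        opt = torch.optim.Adam([theta], lr=1e-2)
        for _ in range(steps):
            opt.zero_grad()
            penalized = loss_fn(theta) + mu * constraint_fn(theta).pow(2).sum()
            penalized.backward()
            opt.step()
        mu *= 10.0  # tighten the constraint each round
    return theta

# Toy instance: minimize (theta - 2)^2 subject to theta - 1 = 0; optimum -> 1.
theta = torch.tensor([0.0], requires_grad=True)
print(penalty_minimize(lambda t: (t - 2).pow(2).sum(), lambda t: t - 1, theta))
```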
arXiv Detail & Related papers (2024-05-30T01:55:44Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
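A minimal sketch of the deep-equilibrium forward pass: one weight-tied block is iterated to a fixed point z* = f(z*, x). FNO-DEQ uses an FNO block as f and differentiates implicitly through the equilibrium; the plain MLP and naive fixed-point iteration below are simplifying assumptions.

```python
import torch
import torch.nn as nn

class DEQLayer(nn.Module):
    """Weight-tied block iterated until z converges to an equilibrium."""
    def __init__(self, dim=64, max_iter=50, tol=1e-4):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())
        self.max_iter, self.tol = max_iter, tol

    def forward(self, x):
        z = torch.zeros_like(x)
        for _ in range(self.max_iter):
            z_next = self.f(torch.cat([z, x], dim=-1))
            if (z_next - z).norm() < self.tol:  # reached the fixed point
                return z_next
            z = z_next
        return z

print(DEQLayer()(torch.randn(8, 64)).shape)  # torch.Size([8, 64])
```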
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Quantum Fourier Networks for Solving Parametric PDEs [4.409836695738518]
Recently, a deep learning architecture called the Fourier Neural Operator (FNO) proved capable of learning solutions of a given PDE family for arbitrary initial conditions given as input.
We propose quantum algorithms inspired by the classical FNO, which result in time complexity logarithmic in the number of evaluations.
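For reference, a sketch of the classical FNO spectral-convolution block that these quantum circuits emulate: FFT, a learned complex multiplication on the lowest Fourier modes, inverse FFT. Channel counts and mode truncation are illustrative.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Core FNO layer: mix channels with learned complex weights per mode."""
    def __init__(self, channels=16, modes=12):
        super().__init__()
        self.modes = modes
        self.weight = nn.Parameter(
            torch.randn(channels, channels, modes, dtype=torch.cfloat) / channels)

    def forward(self, x):                      # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)               # to the frequency domain
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :self.modes], self.weight)
        return torch.fft.irfft(out_ft, n=x.shape[-1])

print(SpectralConv1d()(torch.randn(4, 16, 64)).shape)  # torch.Size([4, 16, 64])
```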
arXiv Detail & Related papers (2023-06-27T12:21:02Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
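The spatial half of the decomposition can be sketched as interleaved subsampling: a fine grid is split into s*s staggered coarse grids that lightweight solvers can process in parallel and that recombine losslessly. The exact staggering in NeuralStagger (including its temporal part) may differ.

```python
import torch

def spatial_decompose(field, s=2):
    """Split a (batch, channels, H, W) field into s*s interleaved coarse fields."""
    return [field[..., i::s, j::s] for i in range(s) for j in range(s)]

def spatial_recompose(parts, s=2):
    """Invert spatial_decompose by writing each coarse field back in place."""
    b, c, h, w = parts[0].shape
    out = torch.empty(b, c, h * s, w * s, dtype=parts[0].dtype)
    for k, part in enumerate(parts):
        i, j = divmod(k, s)
        out[..., i::s, j::s] = part
    return out

x = torch.randn(1, 3, 64, 64)
parts = spatial_decompose(x)                     # four 32x32 coarse fields
assert torch.equal(spatial_recompose(parts), x)  # decomposition is lossless
```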
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Solving Coupled Differential Equation Groups Using PINO-CDE [42.363646159367946]
PINO-CDE is a deep learning framework for solving coupled differential equation groups (CDEs).
Based on the theory of the physics-informed neural operator (PINO), PINO-CDE uses a single network for all quantities in a CDE group.
This framework integrates engineering dynamics and deep learning technologies and may reveal a new concept for CDEs solving and uncertainty propagation.
arXiv Detail & Related papers (2022-10-01T08:39:24Z)
- Physics-constrained Unsupervised Learning of Partial Differential Equations using Meshes [1.066048003460524]
Graph neural networks show promise in accurately representing irregularly meshed objects and learning their dynamics.
In this work, we represent meshes naturally as graphs, process these using Graph Networks, and formulate our physics-based loss to provide an unsupervised learning framework for partial differential equations (PDEs).
Our framework will enable the application of PDE solvers in interactive settings, such as model-based control of soft-body deformations.
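A minimal message-passing step over a mesh represented as a graph, written without a graph library; in the paper's framework such updates would be trained against a physics-based (PDE-residual) loss rather than labels. All sizes are illustrative.

```python
import torch
import torch.nn as nn

class MeshMessagePassing(nn.Module):
    """One message-passing step: per-edge messages, summed onto receiving nodes."""
    def __init__(self, dim=32):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)
        self.upd = nn.Linear(2 * dim, dim)

    def forward(self, h, edge_index):          # h: (N, dim); edge_index: (2, E)
        src, dst = edge_index
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))  # per-edge messages
        agg = torch.zeros_like(h).index_add_(0, dst, m)    # aggregate at receivers
        return self.upd(torch.cat([h, agg], dim=-1))

# Tiny triangle mesh: 3 vertices, each edge stored in both directions.
h = torch.randn(3, 32)
edges = torch.tensor([[0, 1, 1, 2, 2, 0], [1, 0, 2, 1, 0, 2]])
print(MeshMessagePassing()(h, edges).shape)   # torch.Size([3, 32])
```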
arXiv Detail & Related papers (2022-03-30T19:22:56Z)
- Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task.
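A sketch of that combination: a fixed magnitude-based sparsity mask applied jointly with fake quantization during training. The pruning criterion, schedule, and bit width below are assumptions for illustration, not the paper's recipe.

```python
import torch
import torch.nn as nn

def fake_quant(x, num_bits=8):
    """Uniform fake quantization; the detach trick acts as a straight-through estimator."""
    qmax = 2 ** (num_bits - 1) - 1
    scale = x.abs().max().clamp(min=1e-8) / qmax
    q = torch.round(x / scale).clamp(-qmax - 1, qmax) * scale
    return x + (q - x).detach()

class PrunedQuantLinear(nn.Linear):
    """Linear layer trained with a magnitude-pruning mask plus fake quantization."""
    def __init__(self, in_f, out_f, sparsity=0.5, num_bits=8):
        super().__init__(in_f, out_f)
        self.num_bits = num_bits
        k = max(1, int(sparsity * self.weight.numel()))
        thresh = self.weight.abs().flatten().kthvalue(k).values
        self.register_buffer("mask", (self.weight.abs() > thresh).float())

    def forward(self, x):
        w = fake_quant(self.weight * self.mask, self.num_bits)
        return nn.functional.linear(x, w, self.bias)

layer = PrunedQuantLinear(64, 64)
print((layer.mask == 0).float().mean())  # roughly the target sparsity
```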
arXiv Detail & Related papers (2021-02-22T19:00:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.