A composable autoencoder-based iterative algorithm for accelerating
numerical simulations
- URL: http://arxiv.org/abs/2110.03780v1
- Date: Thu, 7 Oct 2021 20:22:37 GMT
- Title: A composable autoencoder-based iterative algorithm for accelerating
numerical simulations
- Authors: Rishikesh Ranade, Chris Hill, Haiyang He, Amir Maleki, Norman Chang
and Jay Pathak
- Abstract summary: CoAE-MLSim is an unsupervised, lower-dimensional, local method that is motivated by key ideas used in commercial PDE solvers.
It is tested for a variety of complex engineering cases to demonstrate its computational speed, accuracy, scalability, and generalization across different PDE conditions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Numerical simulations for engineering applications solve partial differential
equations (PDE) to model various physical processes. Traditional PDE solvers
are very accurate but computationally costly. On the other hand, Machine
Learning (ML) methods offer a significant computational speedup but face
challenges with accuracy and generalization to different PDE conditions, such
as geometry, boundary conditions, initial conditions and PDE source terms. In
this work, we propose a novel ML-based approach, CoAE-MLSim (Composable
AutoEncoder Machine Learning Simulation), which is an unsupervised,
lower-dimensional, local method motivated by key ideas used in commercial PDE
solvers. This allows our approach to learn effectively from relatively few
samples of PDE solutions. The proposed ML approach is compared against
commercial solvers, for stronger benchmarks, as well as against the latest
ML approaches for solving PDEs. It is tested on a variety of complex
engineering cases to demonstrate its computational speed, accuracy,
scalability, and generalization across different PDE conditions. The results
show that our approach captures physics accurately across all metrics of
comparison (including measures such as results on section cuts and lines).
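To make the high-level description above concrete, the following is a minimal sketch (not the authors' released implementation) of a composable autoencoder-based iterative update as the abstract describes it: an autoencoder is trained, unsupervised, on small local solution subdomains, and at inference the global field is refined by repeatedly projecting each local patch onto the learned low-dimensional manifold until the field stops changing. The class names, network sizes, patch size, and relaxation scheme are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the authors' code) of a composable
# autoencoder-based iterative update: an autoencoder learned on local solution
# patches is used to repeatedly project a global field onto a low-dimensional
# solution manifold until the field stops changing.
import torch
import torch.nn as nn

class SubdomainAE(nn.Module):
    """Autoencoder that compresses a patch x patch solution subdomain to a small latent code."""
    def __init__(self, patch: int = 8, latent: int = 16):
        super().__init__()
        d = patch * patch
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, d),
                                     nn.Unflatten(1, (patch, patch)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

@torch.no_grad()
def iterate_solution(field: torch.Tensor, ae: SubdomainAE, patch: int = 8,
                     max_iters: int = 100, relax: float = 0.5, tol: float = 1e-6) -> torch.Tensor:
    """Outer iteration: sweep over non-overlapping subdomains, relax each patch toward
    its projection on the learned manifold, and stop when the global field converges."""
    field = field.clone()
    for _ in range(max_iters):
        previous = field.clone()
        for i in range(0, field.shape[0], patch):
            for j in range(0, field.shape[1], patch):
                local = field[i:i + patch, j:j + patch]
                projected = ae(local.unsqueeze(0)).squeeze(0)
                field[i:i + patch, j:j + patch] = (1.0 - relax) * local + relax * projected
        if torch.norm(field - previous) < tol:
            break
    return field

# Usage (the autoencoder would first be trained, unsupervised, on patches of
# precomputed PDE solutions; a randomly initialized one is shown here only to
# illustrate the call signature):
# field = iterate_solution(torch.rand(64, 64), SubdomainAE())
```

The actual method additionally conditions on PDE inputs such as geometry, boundary conditions, and sources, and couples neighbouring subdomains during the iteration; the sketch keeps only the local, low-dimensional, iterative structure named in the abstract.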
Related papers
- Learning a Neural Solver for Parametric PDE to Enhance Physics-Informed Methods [14.791541465418263]
We propose learning a solver, i.e., solving partial differential equations (PDEs) using a physics-informed iterative algorithm trained on data.
Our method learns to condition a gradient descent algorithm that automatically adapts to each PDE instance.
We demonstrate the effectiveness of our method through empirical experiments on multiple datasets.
arXiv Detail & Related papers (2024-10-09T12:28:32Z)
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- Kolmogorov n-Widths for Multitask Physics-Informed Machine Learning (PIML) Methods: Towards Robust Metrics [8.90237460752114]
This topic encompasses a broad array of methods and models aimed at solving a single PDE problem or a collection of PDE problems, the latter known as multitask learning.
PIML is characterized by the incorporation of physical laws into the training process of machine learning models in lieu of large data when solving PDE problems.
arXiv Detail & Related papers (2024-02-16T23:21:40Z)
- Efficient Neural PDE-Solvers using Quantization Aware Training [71.0934372968972]
We show that quantization can successfully lower the computational cost of inference while maintaining performance.
Our results on four standard PDE datasets and three network architectures show that quantization-aware training works across settings and across three orders of magnitude in FLOPs.
arXiv Detail & Related papers (2023-08-14T09:21:19Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM consistently achieves state-of-the-art results and yields an average relative gain of 11.5% on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Meta-PDE: Learning to Solve PDEs Quickly Without a Mesh [24.572840023107574]
Partial differential equations (PDEs) are often computationally challenging to solve.
We present a meta-learning based method which learns to rapidly solve problems from a distribution of related PDEs.
arXiv Detail & Related papers (2022-11-03T06:17:52Z)
- A composable machine-learning approach for steady-state simulations on high-resolution grids [0.6554326244334866]
CoMLSim (Composable Machine Learning Simulator) can simulate PDEs on highly-resolved grids.
Our approach combines key principles of traditional PDE solvers with local-learning and low-dimensional manifold techniques.
arXiv Detail & Related papers (2022-10-11T23:50:16Z)
- MAgNet: Mesh Agnostic Neural PDE Solver [68.8204255655161]
Climate predictions require fine temporal resolutions to resolve all turbulent scales in the fluid simulations.
Current numerical models solve PDEs on grids that are too coarse (3 km to 200 km on each side).
We design a novel architecture that predicts the spatially continuous solution of a PDE given a spatial position query.
arXiv Detail & Related papers (2022-10-11T14:52:20Z)
- Meta-Auto-Decoder for Solving Parametric Partial Differential Equations [32.46080264991759]
Partial Differential Equations (PDEs) are ubiquitous in many disciplines of science and engineering and notoriously difficult to solve.
Our proposed approach, called Meta-Auto-Decoder (MAD), treats solving parametric PDEs as a meta-learning problem.
MAD exhibits faster convergence without losing accuracy compared with other deep learning methods.
arXiv Detail & Related papers (2021-11-15T02:51:42Z)
- Speeding up Computational Morphogenesis with Online Neural Synthetic Gradients [51.42959998304931]
A wide range of modern science and engineering applications are formulated as optimization problems with a system of partial differential equations (PDEs) as constraints.
These PDE-constrained optimization problems are typically solved in a standard discretize-then-optimize approach.
We propose a general framework to speed up PDE-constrained optimization using online neural synthetic gradients (ONSG) with a novel two-scale optimization scheme.
arXiv Detail & Related papers (2021-04-25T22:43:51Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs); a minimal PINN sketch is given after this list.
We discuss the accuracy of GatedPINN with respect to analytical solutions as well as state-of-the-art numerical solvers, such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
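Several of the related papers above build on physics-informed neural networks. As a point of reference (and not the GatedPINN architecture from the last entry), the following is a minimal sketch of the standard PINN training loop for a 1D Poisson problem, u''(x) = f(x) on (0, 1) with zero Dirichlet boundaries. The network size, collocation sampling, and optimizer settings are illustrative assumptions.

```python
# Minimal sketch of a standard physics-informed neural network (PINN), not the
# GatedPINN model referenced above: fit u(x) so that u''(x) = f(x) on (0, 1)
# with u(0) = u(1) = 0, forming the PDE residual with autograd.
import math
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)

def f(x: torch.Tensor) -> torch.Tensor:
    # Source term chosen so the exact solution is u(x) = sin(pi * x).
    return -(math.pi ** 2) * torch.sin(math.pi * x)

boundary = torch.tensor([[0.0], [1.0]])                     # Dirichlet points x = 0 and x = 1

for step in range(2000):
    x = torch.rand(128, 1, requires_grad=True)              # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    pde_loss = (d2u - f(x)).pow(2).mean()                    # residual of u'' = f
    bc_loss = net(boundary).pow(2).mean()                    # enforce u = 0 on the boundary
    loss = pde_loss + bc_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

The same residual-plus-boundary loss generalizes to other PDEs by swapping the differential operator and boundary terms; the mesh-free character comes from sampling collocation points rather than discretizing the domain.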
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.