Accelerating Legacy Numerical Solvers by Non-intrusive Gradient-based Meta-solving
- URL: http://arxiv.org/abs/2405.02952v1
- Date: Sun, 5 May 2024 14:39:43 GMT
- Title: Accelerating Legacy Numerical Solvers by Non-intrusive Gradient-based Meta-solving
- Authors: Sohei Arisaka, Qianxiao Li
- Abstract summary: We propose a non-intrusive methodology with a novel gradient estimation technique to combine machine learning and legacy numerical codes without any modification.
We show the advantage of the proposed method over other baselines and present applications of accelerating established non-automatic-differentiable numerical solvers implemented in PETSc.
- Score: 12.707050104493218
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Scientific computing is an essential tool for scientific discovery and engineering design, and its computational cost is always a main concern in practice. To accelerate scientific computing, a promising approach is to use machine learning (especially meta-learning) techniques to select hyperparameters of traditional numerical methods. There have been numerous proposals in this direction, but many of them require automatic-differentiable numerical methods. In reality, however, many practical applications still depend on well-established but non-automatic-differentiable legacy codes, which prevents practitioners from applying state-of-the-art research to their own problems. To resolve this problem, we propose a non-intrusive methodology with a novel gradient estimation technique to combine machine learning and legacy numerical codes without any modification. We theoretically and numerically show the advantage of the proposed method over other baselines and present applications of accelerating established non-automatic-differentiable numerical solvers implemented in PETSc, a widely used open-source numerical software library.
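The non-intrusive idea can be illustrated with a minimal sketch (this is not the paper's algorithm; `legacy_solver`, `estimated_grad`, and the damped-Jacobi toy are stand-ins chosen for self-containment). The point is that the legacy code is only ever *called* as a black box, and the gradient of a solver-quality cost with respect to a hyperparameter is *estimated* from its outputs rather than obtained by automatic differentiation:

```python
import numpy as np

def legacy_solver(omega, A, b, n_iter=30):
    """Stand-in for a non-differentiable legacy code (e.g. a PETSc solve):
    run n_iter damped-Jacobi sweeps and return the final residual norm."""
    d = np.diag(A)
    x = np.zeros_like(b)
    for _ in range(n_iter):
        x = x + omega * (b - A @ x) / d
    return float(np.linalg.norm(b - A @ x))

def estimated_grad(cost, theta, h=1e-2):
    """Central finite-difference gradient estimate from black-box calls only."""
    return (cost(theta + h) - cost(theta - h)) / (2.0 * h)

# Toy problem: tune the relaxation parameter omega of the black-box solver.
rng = np.random.default_rng(0)
A = 4.0 * np.eye(20) + 0.1 * rng.standard_normal((20, 20))
A = 0.5 * (A + A.T)                      # symmetric, diagonally dominant
b = rng.standard_normal(20)

cost = lambda omega: legacy_solver(omega, A, b)
omega, step = 0.3, 0.05
for _ in range(20):                      # signed steps with backtracking
    trial = omega - step * np.sign(estimated_grad(cost, omega))
    if cost(trial) < cost(omega):
        omega = trial                    # accept: residual decreased
    else:
        step *= 0.5                      # reject: shrink the step
```

The same pattern applies when the hyperparameter is produced by a neural network: the estimated gradient is simply backpropagated into the network, while the solver itself remains unmodified.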
Related papers
- Control of dynamical systems with neural networks [0.0]
Recent advances in deep learning and automatic differentiation have made applying these methods to control problems increasingly practical. We show the use of neural networks and modern machine-learning libraries to parameterize control inputs across discrete-time and continuous-time systems. We highlight applications in multiple domains, including biology, engineering, physics, and medicine.
arXiv Detail & Related papers (2025-10-06T19:33:00Z) - DaCe AD: Unifying High-Performance Automatic Differentiation for Machine Learning and Scientific Computing [54.73410106410609]
This work presents DaCe AD, a general, efficient automatic differentiation engine that requires no code modifications. DaCe AD uses a novel ILP-based algorithm to optimize the trade-off between storing and recomputing to achieve maximum performance within a given memory constraint.
arXiv Detail & Related papers (2025-09-02T11:09:45Z) - Machine Learning for predicting chaotic systems [0.0]
We show that well-tuned simple methods, as well as untuned baseline methods, often outperform state-of-the-art deep learning models.
These findings underscore the importance of matching prediction methods to data characteristics and available computational resources.
arXiv Detail & Related papers (2024-07-29T16:34:47Z) - PETScML: Second-order solvers for training regression problems in Scientific Machine Learning [0.22499166814992438]
In recent years, we have witnessed the emergence of scientific machine learning as a data-driven tool for the analysis of data produced by computational science and engineering applications.
We introduce a software built on top of the Portable and Extensible Toolkit for Scientific computation to bridge the gap between deep-learning software and conventional machine-learning techniques.
arXiv Detail & Related papers (2024-03-18T18:59:42Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances using generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - Design Space Exploration of Approximate Computing Techniques with a Reinforcement Learning Approach [49.42371633618761]
We propose an RL-based strategy to find approximate versions of an application that balance accuracy degradation and power and computation time reduction.
Our experimental results show a good trade-off between accuracy degradation and decreased power and computation time for some benchmarks.
arXiv Detail & Related papers (2023-12-29T09:10:40Z) - Recent Developments in Machine Learning Methods for Stochastic Control and Games [3.3993877661368757]
Recently, computational methods based on machine learning have been developed for solving control problems and games.
We focus on deep learning methods that have unlocked the possibility of solving such problems, even in high dimensions or when the structure is very complex.
This paper provides an introduction to these methods and summarizes the state-of-the-art works at the crossroad of machine learning and control and games.
arXiv Detail & Related papers (2023-03-17T21:53:07Z) - On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
arXiv Detail & Related papers (2023-02-05T01:39:21Z) - Neural Operator: Is data all you need to model the world? An insight into the impact of Physics Informed Machine Learning [13.050410285352605]
We provide an insight into how data-driven approaches can complement conventional techniques to solve engineering and physics problems.
We highlight a novel and fast machine learning-based approach, operator learning, to learning the solution operator of a PDE.
arXiv Detail & Related papers (2023-01-30T23:29:33Z) - Deep learning applied to computational mechanics: A comprehensive review, state of the art, and the classics [77.34726150561087]
Recent developments in artificial neural networks, particularly deep learning (DL), are reviewed in detail.
Both hybrid and pure machine learning (ML) methods are discussed.
The history and limitations of AI are recounted and discussed, with particular attention to pointing out misstatements or misconceptions of the classics.
arXiv Detail & Related papers (2022-12-18T02:03:00Z) - Advancing Reacting Flow Simulations with Data-Driven Models [50.9598607067535]
Key to effective use of machine learning tools in multi-physics problems is to couple them to physical and computer models.
The present chapter reviews some of the open opportunities for the application of data-driven reduced-order modeling of combustion systems.
arXiv Detail & Related papers (2022-09-05T16:48:34Z) - An Extensible Benchmark Suite for Learning to Simulate Physical Systems [60.249111272844374]
We introduce a set of benchmark problems to take a step towards unified benchmarks and evaluation protocols.
We propose four representative physical systems, as well as a collection of both widely used classical time-based and representative data-driven methods.
arXiv Detail & Related papers (2021-08-09T17:39:09Z) - Efficient time stepping for numerical integration using reinforcement learning [0.15393457051344295]
We propose a data-driven time stepping scheme based on machine learning and meta-learning.
First, one or several (in the case of non-smooth or hybrid systems) base learners are trained using RL.
Then, a meta-learner is trained which (depending on the system state) selects the base learner that appears to be optimal for the current situation.
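This two-stage scheme can be sketched in miniature (illustrative only, and not the cited paper's implementation: the RL-trained base learners are replaced here by fixed-step Euler integrators, and the meta-learner by a hand-written step-doubling rule, so the example stays self-contained):

```python
import numpy as np

def euler_step(f, t, y, h):
    """Explicit Euler; each fixed step size h plays the role of a base learner."""
    return y + h * f(t, y)

def meta_select(f, t, y, hs, tol=1e-3):
    """Meta rule: pick the largest base step whose step-doubling local-error
    estimate stays below tol (stands in for the trained meta-learner)."""
    for h in sorted(hs, reverse=True):
        full = euler_step(f, t, y, h)
        half = euler_step(f, t + h / 2, euler_step(f, t, y, h / 2), h / 2)
        if abs(full - half) < tol:
            return h
    return min(hs)

f = lambda t, y: -y              # toy ODE y' = -y, exact solution exp(-t)
t, y, n_steps = 0.0, 1.0, 0
while t < 1.0 - 1e-12:
    h = min(meta_select(f, t, y, hs=(0.1, 0.01)), 1.0 - t)
    y = euler_step(f, t, y, h)
    t += h
    n_steps += 1
```

The selector takes coarse steps once the solution is flat enough, so the integration finishes in fewer steps than the fine-only scheme while keeping the local error controlled.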
arXiv Detail & Related papers (2021-04-08T07:24:54Z) - Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task.
arXiv Detail & Related papers (2021-02-22T19:00:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides (including all listed content) and is not responsible for any consequences of its use.