Accelerating GMRES with Deep Learning in Real-Time
- URL: http://arxiv.org/abs/2103.10975v1
- Date: Fri, 19 Mar 2021 18:21:38 GMT
- Title: Accelerating GMRES with Deep Learning in Real-Time
- Authors: Kevin Luna, Katherine Klymko, Johannes P. Blaschke
- Abstract summary: We demonstrate a real-time machine learning algorithm that can be used to accelerate the time-to-solution for GMRES.
Our framework is novel in that it integrates the deep learning algorithm in an in situ fashion.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: GMRES is a powerful numerical solver used to find solutions to extremely
large systems of linear equations. These systems of equations appear in many
applications in science and engineering. Here we demonstrate a real-time
machine learning algorithm that can be used to accelerate the time-to-solution
for GMRES. Our framework is novel in that it integrates the deep learning
algorithm in an in situ fashion: the AI-accelerator gradually learns how to
optimize the time to solution without requiring user input (such as a
pre-trained data set). We describe how our algorithm collects data and
optimizes GMRES. We demonstrate our algorithm by implementing an accelerated
(MLGMRES) solver in Python. We then use MLGMRES to accelerate a solver for the
Poisson equation -- a class of linear problems that appears in many
applications.
Informed by the properties of formal solutions to the Poisson equation, we
test the performance of different neural networks. Our key takeaway is that
networks which are capable of learning non-local relationships perform well,
without needing to be scaled with the input problem size, making them good
candidates for the extremely large problems encountered in high-performance
computing. For the inputs studied, our method provides a roughly 2$\times$
acceleration.
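The call pattern described in the abstract can be sketched with off-the-shelf tools: run GMRES on a Poisson system, then run it again from a better initial guess and compare iteration counts. This is a minimal sketch, not the paper's MLGMRES implementation; the "learned" guess below is a hypothetical stand-in (a perturbed direct solve) in place of the in-situ-trained neural network, purely to illustrate why a good initial guess reduces time-to-solution.

```python
# Sketch: accelerating GMRES via a better initial guess, in the spirit of
# MLGMRES. The learned model is mocked out; the paper trains it in situ.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import gmres, spsolve

n = 64
# 1D Poisson matrix: standard second-difference stencil [-1, 2, -1].
A = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

def gmres_with_count(A, b, x0=None):
    """Run (full, unrestarted) GMRES and count inner iterations via callback."""
    iters = []
    x, info = gmres(A, b, x0=x0, restart=n, maxiter=1000,
                    callback=lambda pr_norm: iters.append(pr_norm),
                    callback_type="pr_norm")
    assert info == 0, "GMRES did not converge"
    return x, len(iters)

# Baseline: default (zero) initial guess.
x_base, it_base = gmres_with_count(A, b)

# Stand-in for a learned guess: a slightly perturbed exact solve. MLGMRES
# would instead obtain x0 from a neural network trained on prior solves.
rng = np.random.default_rng(0)
x0_guess = spsolve(A.tocsc(), b) + 1e-8 * rng.standard_normal(n)
x_ml, it_ml = gmres_with_count(A, b, x0=x0_guess)

print(f"iterations: baseline={it_base}, with guess={it_ml}")
```

A near-solution guess lets GMRES terminate almost immediately, while the zero guess pays the full Krylov iteration cost; the paper's contribution is producing such guesses automatically, without a pre-trained data set.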
Related papers
- Learning To Dive In Branch And Bound [95.13209326119153]
We propose L2Dive to learn specific diving heuristics with graph neural networks.
We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions.
arXiv Detail & Related papers (2023-01-24T12:01:45Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Accelerating the training of single-layer binary neural networks using the HHL quantum algorithm [58.720142291102135]
We show that useful information can be extracted from the quantum-mechanical implementation of Harrow-Hassidim-Lloyd (HHL) and used to reduce the complexity of finding the solution on the classical side.
arXiv Detail & Related papers (2022-10-23T11:58:05Z)
- Algorithms for perturbative analysis and simulation of quantum dynamics [0.0]
We develop general purpose algorithms for computing and utilizing both the Dyson series and Magnus expansion.
We demonstrate how to use these tools to approximate fidelity in a region of model parameter space.
We show how the pre-computation step can be phrased as a multivariable expansion problem with fewer terms than in the original method.
arXiv Detail & Related papers (2022-10-20T21:07:47Z)
- Minimizing Entropy to Discover Good Solutions to Recurrent Mixed Integer Programs [0.0]
Current solvers for mixed-integer programming (MIP) problems are designed to perform well on a wide range of problems.
Recent works have shown that machine learning (ML) can be integrated with an MIP solver to inject domain knowledge and efficiently close the optimality gap.
This paper proposes an online solver that uses the notion of entropy to efficiently build a model with minimal training data and tuning.
arXiv Detail & Related papers (2022-02-07T18:52:56Z)
- Fast Block Linear System Solver Using Q-Learning Scheduling for Unified Dynamic Power System Simulations [2.1509980377118767]
This solver uses a novel Q-learning based method for task scheduling.
The simulation on some large power systems shows that our solver is 2-6 times faster than KLU.
arXiv Detail & Related papers (2021-10-12T09:10:27Z)
- Neural Fixed-Point Acceleration for Convex Optimization [10.06435200305151]
We present neural fixed-point acceleration which combines ideas from meta-learning and classical acceleration methods.
We apply our framework to SCS, the state-of-the-art solver for convex cone programming.
arXiv Detail & Related papers (2021-07-21T17:59:34Z)
- Efficient time stepping for numerical integration using reinforcement learning [0.15393457051344295]
We propose a data-driven time stepping scheme based on machine learning and meta-learning.
First, one or several (in the case of non-smooth or hybrid systems) base learners are trained using RL.
Then, a meta-learner is trained which (depending on the system state) selects the base learner that appears to be optimal for the current situation.
arXiv Detail & Related papers (2021-04-08T07:24:54Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance over other classical control tasks, gridworld type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Physarum Powered Differentiable Linear Programming Layers and Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.