Neural network architectures using min-plus algebra for solving certain
high dimensional optimal control problems and Hamilton-Jacobi PDEs
- URL: http://arxiv.org/abs/2105.03336v2
- Date: Wed, 29 Mar 2023 19:46:59 GMT
- Title: Neural network architectures using min-plus algebra for solving certain
high dimensional optimal control problems and Hamilton-Jacobi PDEs
- Authors: Jérôme Darbon and Peter M. Dower and Tingwei Meng
- Abstract summary: We propose two abstract neural network architectures which are used to compute the value function and the optimal control, respectively.
A preliminary implementation of our proposed neural network architecture on FPGAs shows a promising speedup compared to CPUs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Solving high dimensional optimal control problems and the corresponding
Hamilton-Jacobi PDEs is an important but challenging task in control
engineering. In this paper, we propose two abstract neural network
architectures which are used to compute the value function and the optimal
control, respectively, for a certain class of high dimensional optimal control
problems.
We provide the mathematical analysis for the two abstract architectures. We
also show several numerical results computed using the deep neural network
implementations of these abstract architectures. A preliminary implementation
of our proposed neural network architecture on FPGAs shows a promising speedup
compared to CPUs. This work paves the way for leveraging efficient dedicated
hardware designed for neural networks to solve high dimensional optimal control
problems and Hamilton-Jacobi PDEs.
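To make the min-plus idea concrete, here is a minimal NumPy sketch, not the paper's exact construction: in the min-plus semiring, addition is the pointwise minimum, so a "linear" value-function layer computes a minimum over simpler basis value functions, and the control layer returns the control attached to the minimizing branch. The quadratic basis functions, centers, offsets, and feedback gains below are all hypothetical placeholders; the paper's abstract architectures are more general.

```python
import numpy as np

# Minimal sketch (assumed form, not the paper's construction): the value
# function is represented min-plus style as the pointwise minimum of m basis
# value functions, here hypothetical quadratics V_i(x) = 0.5*|x - c_i|^2 + b_i.
# The optimal control is read off from the argmin branch.

rng = np.random.default_rng(0)
dim, m = 4, 8                      # state dimension, number of "neurons"
C = rng.normal(size=(m, dim))      # hypothetical centers c_i
b = rng.normal(size=m)             # hypothetical offsets b_i
K = rng.normal(size=(m, dim))      # hypothetical feedback gains, u_i(x) = -K_i . x

def value_and_control(x):
    """Min-plus 'forward pass': V(x) = min_i V_i(x), u(x) from the argmin branch."""
    diffs = x - C                                  # (m, dim)
    vals = 0.5 * np.sum(diffs**2, axis=1) + b      # V_i(x), with Q_i = I for simplicity
    i_star = np.argmin(vals)                       # min-plus "linear" combination
    return vals[i_star], -K[i_star] @ x            # value and associated (scalar) control

V, u = value_and_control(np.ones(dim))
print(f"V(x) = {V:.3f}, u(x) = {u:.3f}")
```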
Related papers
- Neuromorphic quadratic programming for efficient and scalable model predictive control [0.31457219084519]
Event-based and memory-integrated neuromorphic architectures promise to solve large optimization problems.
We present a method to solve convex continuous optimization problems with quadratic cost functions and linear constraints on Intel's scalable neuromorphic research chip Loihi 2.
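The problem class here, convex QP with linear constraints, can be sketched with a plain projected-gradient loop. This illustrates only the mathematical problem, not Loihi 2 or the paper's event-based method, and the constraints are simplified to a box so the projection is exact:

```python
import numpy as np

# Hedged sketch of the problem class only: minimize 0.5*x'Qx + c'x subject to
# box constraints l <= x <= u, solved by projected gradient descent. It does
# not model the neuromorphic hardware or the paper's algorithm.

rng = np.random.default_rng(1)
n = 10
A = rng.normal(size=(n, n))
Q = A @ A.T + n * np.eye(n)        # symmetric positive definite cost matrix
c = rng.normal(size=n)
lo, hi = -np.ones(n), np.ones(n)   # box constraints

x = np.zeros(n)
step = 1.0 / np.linalg.norm(Q, 2)  # step size from the largest eigenvalue of Q
for _ in range(500):
    x = np.clip(x - step * (Q @ x + c), lo, hi)   # gradient step, then projection

print("objective:", 0.5 * x @ Q @ x + c @ x)
```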
arXiv Detail & Related papers (2024-01-26T14:12:35Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
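A minimal sketch of the spatial part of such a decomposition, assuming the fine grid is split into interleaved coarse sub-grids that could be handled in parallel by cheaper coarse-resolution solvers and then re-interleaved; the details are illustrative, not the paper's exact scheme:

```python
import numpy as np

# Split an (H, W) field into s*s interleaved coarse sub-fields and interleave
# them back. Each sub-field has 1/s^2 of the points, so a solver operating on
# it is correspondingly cheaper.

def stagger(field, s):
    """Split an (H, W) field into s*s coarse fields of shape (H//s, W//s)."""
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def unstagger(subfields, s):
    """Inverse of stagger: interleave coarse fields back to full resolution."""
    h, w = subfields[0].shape
    out = np.empty((h * s, w * s), dtype=subfields[0].dtype)
    for k, sub in enumerate(subfields):
        out[k // s::s, k % s::s] = sub
    return out

u = np.arange(64, dtype=float).reshape(8, 8)
assert np.allclose(unstagger(stagger(u, 2), 2), u)   # round trip is lossless
```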
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under norm constraint.
Generalized from the sample-wise analysis to the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm automatically finds a better initialization at negligible cost.
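A minimal sketch of such a quantity, assuming GradCosine is the mean pairwise cosine similarity of per-sample gradients at an initialization; a linear least-squares model keeps the gradients analytic, whereas the paper's definition applies to general networks:

```python
import numpy as np

# Toy GradCosine (assumed definition): average cosine similarity between
# per-sample gradients of the loss 0.5*(w.x_i - y_i)^2 at a candidate
# initialization w. Higher agreement suggests a more trainable starting point.

rng = np.random.default_rng(2)
n, d = 32, 5
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
w = rng.normal(size=d)                 # candidate initialization

G = (X @ w - y)[:, None] * X           # analytic per-sample gradients, shape (n, d)
Gn = G / np.linalg.norm(G, axis=1, keepdims=True)
cos = Gn @ Gn.T                        # pairwise cosine similarities
grad_cosine = (cos.sum() - n) / (n * (n - 1))   # mean over distinct pairs
print("GradCosine:", grad_cosine)
```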
arXiv Detail & Related papers (2022-10-12T06:49:16Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
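A minimal sketch of the reduced-order ingredient, assuming the POD basis is obtained from an SVD of snapshot solutions; a plain projection stands in for the neural networks the paper regresses onto the basis, and the snapshot data is synthetic:

```python
import numpy as np

# POD basis from snapshots via SVD, then a reduced-order reconstruction of a
# new solution. Synthetic low-rank snapshot data; purely illustrative.

rng = np.random.default_rng(3)
n_grid, n_snap, r = 200, 40, 5
modes = rng.normal(size=(n_grid, r))                 # hidden low-rank structure
snapshots = modes @ rng.normal(size=(r, n_snap))     # columns = solution snapshots

U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
basis = U[:, :r]                                     # first r POD modes

u_new = modes @ rng.normal(size=r)                   # new solution from the same family
coeffs = basis.T @ u_new                             # reduced coordinates
u_rom = basis @ coeffs                               # reduced-order reconstruction
print("relative error:", np.linalg.norm(u_new - u_rom) / np.linalg.norm(u_new))
```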
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z)
- Connections between Numerical Algorithms for PDEs and Neural Networks [8.660429288575369]
We investigate numerous structural connections between numerical algorithms for partial differential equations (PDEs) and neural networks.
Our goal is to transfer the rich set of mathematical foundations from the world of PDEs to neural networks.
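One concrete connection of this kind, made explicit in this literature, is that an explicit finite-difference step u <- u + tau * A u has exactly the structure of a residual block with a fixed linear layer. A 1D diffusion sketch:

```python
import numpy as np

# An explicit diffusion step as a "residual block": identity skip connection
# plus a fixed 3-point linear stencil, iterated like a stack of ResNet layers.

def diffusion_step(u, tau=0.2):
    """Residual update u + tau * Laplacian(u) with reflecting boundaries."""
    lap = np.empty_like(u)
    lap[1:-1] = u[:-2] - 2.0 * u[1:-1] + u[2:]
    lap[0] = u[1] - u[0]
    lap[-1] = u[-2] - u[-1]
    return u + tau * lap              # skip connection + fixed linear layer

u = np.zeros(50)
u[25] = 1.0                           # initial spike
for _ in range(100):                  # 100 "residual blocks"
    u = diffusion_step(u)
print("mass conserved:", np.isclose(u.sum(), 1.0))
```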
arXiv Detail & Related papers (2021-07-30T16:42:45Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which uses dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
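A minimal sketch of a dual-basis ansatz of this flavor, assuming one hidden branch with sine activations for oscillatory content and one with sigmoid activations for non-oscillatory content; all weights are untrained random placeholders, and the PDE-residual training loop is omitted:

```python
import numpy as np

# Dual single-hidden-layer ansatz: u(t) = sum_i a_i sin(w_i t + b_i)
#                                       + sum_j c_j sigmoid(v_j t + d_j).
# Only the representation is sketched; fitting to a PDE is not shown.

rng = np.random.default_rng(4)
m = 16
w_s, b_s, a_s = rng.normal(size=m), rng.normal(size=m), rng.normal(size=m)
w_g, b_g, a_g = rng.normal(size=m), rng.normal(size=m), rng.normal(size=m)

def u(t):
    """Evaluate the dual-basis ansatz on an array of times t."""
    sine_branch = a_s @ np.sin(np.outer(w_s, t) + b_s[:, None])
    sig_branch = a_g @ (1.0 / (1.0 + np.exp(-(np.outer(w_g, t) + b_g[:, None]))))
    return sine_branch + sig_branch

print(u(np.linspace(0.0, 1.0, 5)))
```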
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Differentiable Neural Architecture Learning for Efficient Neural Network Design [31.23038136038325]
We introduce a novel architecture parameterisation based on a scaled sigmoid function.
We then propose a general Differentiable Neural Architecture Learning (DNAL) method to optimize the neural architecture without the need to evaluate candidate neural networks.
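A minimal sketch of a scaled-sigmoid architecture parameter, assuming a gate g = sigmoid(beta * alpha) weights a candidate component (e.g. a channel) and beta is annealed so the continuous gate approaches a hard 0/1 choice; the schedule and wiring are illustrative assumptions:

```python
import numpy as np

# Scaled sigmoid gate: differentiable in alpha for any finite beta, and
# increasingly binary as beta grows, turning a discrete keep/drop decision
# into a continuous, learnable one.

def gate(alpha, beta):
    return 1.0 / (1.0 + np.exp(-beta * alpha))

alpha = np.array([-1.2, 0.3, 2.0])   # learnable architecture parameters
for beta in (1.0, 10.0, 100.0):      # annealed scaling factor
    print(f"beta={beta:6.1f} -> gates {np.round(gate(alpha, beta), 3)}")
```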
arXiv Detail & Related papers (2021-03-03T02:03:08Z)
- On the performance of deep learning for numerical optimization: an application to protein structure prediction [0.0]
We present a study of the performance of deep learning models on global optimization problems.
The proposed approach adopts the idea of the neural architecture search (NAS) to generate efficient neural networks.
Experiments reveal that the generated learning models can achieve competitive results when compared to hand-designed algorithms.
arXiv Detail & Related papers (2020-12-17T17:01:30Z)
- Neural Architecture Search of SPD Manifold Networks [79.45110063435617]
We propose a new neural architecture search (NAS) problem of Symmetric Positive Definite (SPD) manifold networks.
We first introduce a geometrically rich and diverse SPD neural architecture search space for an efficient SPD cell design.
We exploit a differentiable NAS algorithm on our relaxed continuous search space for SPD neural architecture search.
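One standard building block such an SPD search space can draw on is the bilinear map X -> W X W^T, which preserves symmetric positive definiteness when W has full row rank; a small NumPy check follows (the paper's cell structure and differentiable relaxation are not reproduced here):

```python
import numpy as np

# Bilinear SPD layer: for SPD input X and full-row-rank W, the output
# Y = W X W^T is again SPD (of lower dimension), so stacking such maps
# keeps data on the SPD manifold.

rng = np.random.default_rng(5)
n, k = 8, 3
A = rng.normal(size=(n, n))
X = A @ A.T + n * np.eye(n)        # an SPD input matrix
W = rng.normal(size=(k, n))        # full row rank almost surely for Gaussian W

Y = W @ X @ W.T                    # bilinear layer output
print("output eigenvalues (all > 0):", np.linalg.eigvalsh(Y))
```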
arXiv Detail & Related papers (2020-10-27T18:08:57Z) - Graph Neural Networks for Scalable Radio Resource Management:
Architecture Design and Theoretical Analysis [31.372548374969387]
We propose to apply graph neural networks (GNNs) to solve large-scale radio resource management problems.
The proposed method is highly scalable and can solve the beamforming problem in an interference channel with 1,000 transceiver pairs within 6 milliseconds on a single GPU.
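A minimal sketch of the graph formulation, assuming transceiver pairs are nodes, interference links are edges, and one message-passing layer mixes neighbor features through shared weights, which is what lets the same network scale to any number of pairs; the weights and features are random stand-ins, not the paper's trained model:

```python
import numpy as np

# One GNN layer over an interference graph: each node combines its own
# features with a sum of neighbor features through weight matrices shared
# across all nodes, so the layer is size-independent.

rng = np.random.default_rng(6)
n_pairs, f = 6, 4
H = rng.normal(size=(n_pairs, f))              # node features (e.g. channel state)
adj = rng.random((n_pairs, n_pairs)) < 0.5     # random interference graph
adj = np.triu(adj, 1)
adj = (adj | adj.T).astype(float)              # symmetric, no self-loops

W_self = rng.normal(size=(f, f))
W_nbr = rng.normal(size=(f, f))
msgs = adj @ H                                 # sum of neighbor features
H_next = np.maximum(H @ W_self + msgs @ W_nbr, 0.0)   # shared weights + ReLU
print("updated node features:", H_next.shape)
```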
arXiv Detail & Related papers (2020-07-15T11:43:32Z)