Neuromorphic quadratic programming for efficient and scalable model predictive control
- URL: http://arxiv.org/abs/2401.14885v3
- Date: Wed, 19 Jun 2024 09:17:05 GMT
- Title: Neuromorphic quadratic programming for efficient and scalable model predictive control
- Authors: Ashish Rao Mangalore, Gabriel Andres Fonseca Guerra, Sumedh R. Risbud, Philipp Stratmann, Andreas Wild
- Abstract summary: Event-based and memory-integrated neuromorphic architectures promise to solve large optimization problems.
We present a method to solve convex continuous optimization problems with quadratic cost functions and linear constraints on Intel's scalable neuromorphic research chip Loihi 2.
- Score: 0.31457219084519
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Applications in robotics or other size-, weight- and power-constrained autonomous systems at the edge often require real-time and low-energy solutions to large optimization problems. Event-based and memory-integrated neuromorphic architectures promise to solve such optimization problems with superior energy efficiency and performance compared to conventional von Neumann architectures. Here, we present a method to solve convex continuous optimization problems with quadratic cost functions and linear constraints on Intel's scalable neuromorphic research chip Loihi 2. When applied to model predictive control (MPC) problems for the quadruped robotic platform ANYmal, this method achieves over two orders of magnitude reduction in combined energy-delay product compared to the state-of-the-art solver, OSQP, on (edge) CPUs and GPUs with solution times under ten milliseconds for various problem sizes. These results demonstrate the benefit of non-von-Neumann architectures for robotic control applications.
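The problem class described in the abstract is a standard convex QP. As a point of reference, the sketch below solves a toy instance of that class with OSQP, the baseline solver named above; the matrices are illustrative stand-ins, not the ANYmal MPC formulation, which would be far larger and built from the robot's dynamics.

```python
import numpy as np
import scipy.sparse as sp
import osqp

# Toy instance of the problem class from the abstract:
#   minimize 0.5 z^T P z + q^T z   subject to   l <= A z <= u
# Solved with OSQP, the CPU/GPU baseline the paper compares against.
# All matrices below are illustrative, not the ANYmal MPC data.
P = sp.csc_matrix([[4.0, 1.0],
                   [1.0, 2.0]])
q = np.array([1.0, 1.0])
A = sp.csc_matrix([[1.0, 1.0],    # coupling constraint z1 + z2 = 1
                   [1.0, 0.0],    # box constraint on z1
                   [0.0, 1.0]])   # box constraint on z2
l = np.array([1.0, 0.0, 0.0])
u = np.array([1.0, 0.7, 0.7])

solver = osqp.OSQP()
solver.setup(P, q, A, l, u, verbose=False)
result = solver.solve()
print(result.x)  # optimal z for the toy instance
```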
Related papers
- Solving QUBO on the Loihi 2 Neuromorphic Processor [36.40764406612833]
We describe an algorithm for solving Quadratic Unconstrained Binary Optimization problems on the Intel Loihi 2 neuromorphic processor.
Preliminary results show that our approach can generate feasible solutions in as little as 1 ms and is up to 37x more energy efficient than two baseline solvers running on a CPU.
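For context, a QUBO asks for a binary vector x minimizing x^T Q x. The sketch below brute-forces a three-variable toy instance purely to illustrate the objective; the paper's Loihi 2 approach is an event-based dynamical solver, not enumeration, and the Q matrix here is made up.

```python
import numpy as np
from itertools import product

# Illustrative QUBO: minimize x^T Q x over binary x (Q is a made-up example).
Q = np.array([[-1.0,  2.0,  0.0],
              [ 2.0, -1.0,  2.0],
              [ 0.0,  2.0, -1.0]])

best = min(product((0, 1), repeat=Q.shape[0]),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best, np.array(best) @ Q @ np.array(best))  # (1, 0, 1) with cost -2.0
```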
arXiv Detail & Related papers (2024-08-06T10:07:43Z)
- Optimized QUBO formulation methods for quantum computing [0.4999814847776097]
We show how to apply our techniques in case of an NP-hard optimization problem inspired by a real-world financial scenario.
We then submit instances of this problem to two D-Wave quantum annealers, comparing the performance of our novel approach with the standard methods used in these scenarios.
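The generic recipe behind most QUBO formulations is to fold constraints into the objective as quadratic penalties; a minimal sketch of that step is given below, with an illustrative objective and a single equality constraint rather than the paper's financial model.

```python
import numpy as np

# Fold a linear equality constraint A x = b into a QUBO objective x^T Q x
# as a quadratic penalty rho * ||A x - b||^2. For binary x, x_i^2 = x_i, so
# the linear part of the penalty can be absorbed into the diagonal.
rho = 5.0
Q = np.diag([-1.0, -2.0, -3.0])        # illustrative objective
A = np.array([[1.0, 1.0, 1.0]])        # constraint: x1 + x2 + x3 = 1
b = np.array([1.0])

Q_total = Q + rho * (A.T @ A) + np.diag(-2.0 * rho * (b @ A))
# Minimizing x^T Q_total x now trades off the original cost against
# constraint violation; larger rho enforces the constraint more strictly.
```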
arXiv Detail & Related papers (2024-06-11T19:59:05Z)
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
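In the Deep Equilibrium framework, the output is defined implicitly as the fixed point of an iteration map rather than as the result of a fixed number of layers. The sketch below finds such a fixed point by plain iteration on a toy contraction; it stands in for, and is much simpler than, the paper's learned-regularizer solver.

```python
import numpy as np

# Toy Deep-Equilibrium-style computation: output z* satisfies z* = f(z*, y).
# f here is a simple contraction (alpha < 1), not the paper's learned solver.
def f(z, y, alpha=0.5):
    return y + alpha * np.tanh(z)

def fixed_point(y, iters=200, tol=1e-10):
    z = np.zeros_like(y)
    for _ in range(iters):
        z_next = f(z, y)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z

y = np.array([0.3, -0.7])
z_star = fixed_point(y)
print(np.allclose(z_star, f(z_star, y), atol=1e-8))  # True: z* is a fixed point
```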
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
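The core idea, as summarized, is to trade one fine-resolution task for several coarser ones. One simple reading of the spatial part of that decomposition is interleaved (staggered) sub-sampling of the simulation grid, sketched below with illustrative helper names; the paper's actual scheme also covers the temporal axis and the learning setup.

```python
import numpy as np

# Split a fine (H, W) field into s*s staggered coarse sub-fields and re-merge.
def stagger_split(field, s=2):
    return [field[i::s, j::s] for i in range(s) for j in range(s)]

def stagger_merge(subfields, s=2):
    h, w = subfields[0].shape
    merged = np.empty((h * s, w * s), dtype=subfields[0].dtype)
    for k, sub in enumerate(subfields):
        i, j = divmod(k, s)
        merged[i::s, j::s] = sub
    return merged

field = np.random.rand(64, 64)
parts = stagger_split(field)                     # four 32x32 coarse subtasks
assert np.allclose(stagger_merge(parts), field)  # decomposition is lossless
```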
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Neural Stochastic Dual Dynamic Programming [99.80617899593526]
We introduce a trainable neural model that learns to map problem instances to a piece-wise linear value function.
$\nu$-SDDP can significantly reduce problem solving cost without sacrificing solution quality.
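SDDP-type methods approximate a convex value function by the maximum of affine cuts, which is exactly the piecewise-linear form mentioned above; the sketch below evaluates such an approximation with made-up cuts, independent of how $\nu$-SDDP learns them.

```python
import numpy as np

# Piecewise-linear value function approximation: V(x) ~ max_k (g_k @ x + c_k).
# The cuts (g_k, c_k) below are made-up numbers for illustration only.
cuts = [
    (np.array([ 1.0,  0.0]), 0.0),
    (np.array([-0.5,  2.0]), 1.0),
    (np.array([ 0.0, -1.0]), 3.0),
]

def value_lower_bound(x):
    return max(g @ x + c for g, c in cuts)

print(value_lower_bound(np.array([3.0, 1.0])))  # 3.0 for this toy set of cuts
```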
arXiv Detail & Related papers (2021-12-01T22:55:23Z)
- Transformer-based Machine Learning for Fast SAT Solvers and Logic Synthesis [63.53283025435107]
CNF-based SAT and MaxSAT solvers are central to logic synthesis and verification systems.
In this work, we propose a one-shot model derived from the Transformer architecture to solve the MaxSAT problem.
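For reference, the MaxSAT objective being predicted is simply the number of CNF clauses an assignment satisfies; a minimal evaluator is sketched below with an illustrative formula, leaving the Transformer model itself aside.

```python
# Each clause is a tuple of literals: positive int = variable, negative = negation.
clauses = [(1, -2), (2, 3), (-1, -3)]   # illustrative 3-variable CNF

def satisfied(assignment, clauses):
    """assignment: dict var -> bool; returns the number of satisfied clauses."""
    return sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

print(satisfied({1: True, 2: False, 3: True}, clauses))  # 2 of 3 clauses
```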
arXiv Detail & Related papers (2021-07-15T04:47:35Z)
- Neural network architectures using min-plus algebra for solving certain high dimensional optimal control problems and Hamilton-Jacobi PDEs [0.0]
We propose two abstract neural network architectures which are respectively used to compute the value function and the optimal control.
A preliminary implementation of our proposed neural network architecture on FPGAs shows promising speed up compared to CPUs.
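In min-plus (tropical) algebra, addition is replaced by minimum and multiplication by addition, so the analogue of a linear layer computes y_i = min_j (W[i, j] + x[j]); value functions built from minima of affine or quadratic terms fit this form. The layer below is a generic illustration, not the paper's specific architecture.

```python
import numpy as np

# Min-plus "matrix-vector product": y_i = min_j (W[i, j] + x[j]).
def min_plus_layer(W, x):
    return np.min(W + x[None, :], axis=1)

W = np.array([[0.0, 2.0, 5.0],
              [1.0, 0.5, 3.0]])
x = np.array([1.0, 4.0, 0.0])
print(min_plus_layer(W, x))  # [1.0, 2.0]
```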
arXiv Detail & Related papers (2021-05-07T15:40:23Z)
- Machine Learning Framework for Quantum Sampling of Highly-Constrained, Continuous Optimization Problems [101.18253437732933]
We develop a generic, machine learning-based framework for mapping continuous-space inverse design problems into surrogate unconstrained binary optimization problems.
We showcase the framework's performance on two inverse design problems, optimizing (i) thermal emitter topologies for thermophotovoltaic applications and (ii) diffractive meta-gratings for highly efficient beam steering.
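Mapping a continuous design space to a binary surrogate requires some encoding of continuous variables into bits; a common fixed-point encoding is sketched below as one plausible choice, without claiming it is the one used in the paper.

```python
# Fixed-point encoding of a continuous variable: x = lo + (hi - lo) * sum_k b_k * 2^-(k+1).
def decode(bits, lo=0.0, hi=1.0):
    frac = sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits))
    return lo + (hi - lo) * frac

print(decode([1, 0, 1]))  # 0.625 with this 3-bit encoding
```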
arXiv Detail & Related papers (2021-05-06T02:22:23Z)
- Speeding up Computational Morphogenesis with Online Neural Synthetic Gradients [51.42959998304931]
A wide range of modern science and engineering applications are formulated as optimization problems with a system of partial differential equations (PDEs) as constraints.
These PDE-constrained optimization problems are typically solved in a standard discretize-then-optimize approach.
We propose a general framework to speed up PDE-constrained optimization using online neural synthetic gradients (ONSG) with a novel two-scale optimization scheme.
arXiv Detail & Related papers (2021-04-25T22:43:51Z)
- Computational Overhead of Locality Reduction in Binary Optimization Problems [0.0]
We discuss the effects of locality reduction needed for the majority of solvers that can only accommodate 2-local (quadratic) cost functions.
Using a parallel tempering Monte Carlo solver on Microsoft Azure Quantum, we show that, after reduction to a corresponding 2-local representation, the problems become considerably harder to solve.
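The locality reduction in question replaces higher-order terms by quadratic ones using auxiliary binary variables; the standard Rosenberg construction for a cubic term is sketched below, with an illustrative penalty weight, and checked by enumeration.

```python
from itertools import product

# Reduce x1*x2*x3 to the 2-local term y*x3 plus a penalty forcing y = x1*x2.
P = 10.0  # penalty weight (illustrative; must exceed the cubic coefficient)

def cubic(x1, x2, x3):
    return x1 * x2 * x3

def quadratic(x1, x2, x3, y):
    penalty = x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y   # 0 iff y = x1*x2
    return y * x3 + P * penalty

# Minimizing over the auxiliary bit y reproduces the cubic term exactly.
for x1, x2, x3 in product((0, 1), repeat=3):
    assert cubic(x1, x2, x3) == min(quadratic(x1, x2, x3, y) for y in (0, 1))
```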
arXiv Detail & Related papers (2020-12-17T15:49:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.