Neuromorphic Bayesian Optimization in Lava
- URL: http://arxiv.org/abs/2305.11060v1
- Date: Thu, 18 May 2023 15:54:23 GMT
- Title: Neuromorphic Bayesian Optimization in Lava
- Authors: Shay Snyder (1), Sumedh R. Risbud (2), and Maryam Parsa (1) ((1) George Mason University, (2) Intel Labs)
- Abstract summary: We introduce Lava Bayesian Optimization (LavaBO) as a contribution to the open-source Lava Software Framework.
LavaBO is the first step towards developing a BO system compatible with heterogeneous, fine-grained parallel, in-memory neuromorphic computing architectures.
We evaluate the algorithmic performance of the LavaBO system on multiple problems, such as training state-of-the-art spiking neural networks through backpropagation and evolutionary learning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ever-increasing demands of computationally expensive and high-dimensional
problems require novel optimization methods to find near-optimal solutions in a
reasonable amount of time. Bayesian Optimization (BO) stands as one of the best
methodologies for learning the underlying relationships within multi-variate
problems. This allows users to optimize time-consuming and computationally
expensive black-box functions in feasible time frames. Existing BO
implementations use traditional von Neumann architectures, in which
processing and memory are separate. In this work, we introduce Lava Bayesian Optimization
(LavaBO) as a contribution to the open-source Lava Software Framework. LavaBO
is the first step towards developing a BO system compatible with heterogeneous,
fine-grained parallel, in-memory neuromorphic computing architectures (e.g.,
Intel's Loihi platform). We evaluate the algorithmic performance of the LavaBO
system on multiple problems, such as training state-of-the-art spiking neural
networks through backpropagation and evolutionary learning. Compared to
traditional algorithms (such as grid and random search), we highlight the
ability of LavaBO to explore the parameter search space with fewer expensive
function evaluations while still discovering optimal solutions.
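The LavaBO API itself is not reproduced in this summary, so the following is a minimal, framework-agnostic sketch of the surrogate-plus-acquisition loop the abstract describes: a Gaussian process surrogate with an expected-improvement rule spends each expensive evaluation deliberately, unlike grid or random search. All names here are illustrative, not the Lava interface.

```python
import numpy as np
from scipy.stats import norm

def f(x):
    # expensive black-box function (toy stand-in)
    return np.sin(3 * x) + 0.5 * x**2

def gp_posterior(X, y, Xs, ls=0.5, noise=1e-6):
    # GP posterior mean/std under an RBF kernel with unit variance
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :])**2 / ls**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    sol = np.linalg.solve(K, Ks)
    mu = sol.T @ y
    var = 1.0 - np.sum(Ks * sol, axis=0)
    return mu, np.sqrt(np.clip(var, 1e-12, None))

def expected_improvement(mu, sd, best):
    # improvement over the incumbent best, for minimization
    z = (best - mu) / sd
    return (best - mu) * norm.cdf(z) + sd * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 3)            # small initial random design
y = f(X)
grid = np.linspace(-2, 2, 200)       # candidate pool
for _ in range(10):                  # each iteration costs one evaluation
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(expected_improvement(mu, sd, y.min()))]
    X, y = np.append(X, x_next), np.append(y, f(x_next))
print("best found:", X[np.argmin(y)], y.min())
```

With only 13 total evaluations this loop typically lands near the global minimum, whereas a 13-point grid or random design over the same interval usually does not.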
Related papers
- EARL-BO: Reinforcement Learning for Multi-Step Lookahead, High-Dimensional Bayesian Optimization [1.8655559150764562]
This paper presents a novel reinforcement learning (RL)-based framework for multi-step lookahead BO in high-dimensional black-box optimization problems.
We first introduce an Attention-DeepSets encoder to represent the state of knowledge to the RL agent and employ off-policy learning to accelerate its initial training.
We then evaluate a multi-task fine-tuning procedure based on end-to-end (encoder + RL) on-policy learning.
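The summary gives no architectural details, so below is a toy, permutation-invariant DeepSets-style encoding of a BO history; the attention component is omitted and all weights and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 4, 16                          # input dim, hidden width (assumed)
W_phi = rng.normal(size=(h, d + 1))   # per-point encoder weights (toy)
W_rho = rng.normal(size=(h, h))       # post-pooling weights (toy)

def deepsets_state(X, y):
    """Fixed-size, permutation-invariant summary of the history {(x_i, y_i)}.

    DeepSets recipe: apply phi to each observation, sum-pool across
    observations, then apply rho. An RL agent can consume this state
    no matter how many points have been evaluated so far.
    """
    pairs = np.hstack([X, y[:, None]])     # (n, d+1) observation rows
    phi = np.tanh(pairs @ W_phi.T)         # (n, h) per-point features
    pooled = phi.sum(axis=0)               # (h,) order-independent pool
    return np.tanh(W_rho @ pooled)         # (h,) state vector

X = rng.uniform(size=(5, d))    # five evaluated points
y = rng.normal(size=5)          # their objective values
print(deepsets_state(X, y))     # identical under any row permutation
```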
arXiv Detail & Related papers (2024-10-31T19:33:21Z)
- SCORE: A 1D Reparameterization Technique to Break Bayesian Optimization's Curse of Dimensionality [0.0]
A 1D reparameterization trick is proposed to break this curse and sustain linear time complexity for BO in high-dimensional landscapes.
This fast and scalable approach named SCORE can successfully find the global minimum of needle-in-a-haystack optimization functions.
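The exact SCORE construction is not given in this summary; the sketch below only illustrates the general idea of steering a D-dimensional search through a single scalar, using an inverse Morton (Z-order) map as a stand-in.

```python
def z_order_point(t, dims=5, bits=10):
    """Map a scalar t in [0, 1) to a point in the unit hypercube.

    De-interleaves the binary digits of t across `dims` coordinates
    (inverse Morton/Z-order), so a 1D sweep over t traces a
    space-filling path through the D-dimensional box.
    """
    coords = [0.0] * dims
    for i in range(dims * bits):
        t *= 2.0
        digit = int(t)                 # next binary digit of t
        t -= digit
        coords[i % dims] += digit * 2.0 ** (-(i // dims) - 1)
    return coords

# a 1D sweep visits well-spread points in 5 dimensions
for t in [0.0, 0.2, 0.4, 0.6, 0.8]:
    print([round(c, 3) for c in z_order_point(t)])
```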
arXiv Detail & Related papers (2024-06-18T14:28:29Z)
- Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to learn a black-box optimization (BBO) algorithm from offline data with reinforcement learning in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is to augment the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories.
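One plausible reading of the regret-to-go token (not necessarily the paper's exact formula) is a suffix sum of instantaneous regrets over the remainder of the history, as sketched here.

```python
import numpy as np

def regret_to_go(y_history, y_star):
    """Regret-to-go token for each step of an optimization history.

    Token i is the cumulative regret accumulated from step i to the
    end: sum over j >= i of (y_j - y_star), for minimization.
    This token definition is an assumption for illustration.
    """
    inst_regret = np.asarray(y_history, dtype=float) - y_star
    return inst_regret[::-1].cumsum()[::-1]   # suffix sums

# history of objective values and the known optimum of the task
history = [3.0, 1.5, 1.2, 1.0]
print(regret_to_go(history, y_star=1.0))      # [2.7, 0.7, 0.2, 0.0]
```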
arXiv Detail & Related papers (2024-02-27T11:32:14Z)
- Decreasing the Computing Time of Bayesian Optimization using Generalizable Memory Pruning [56.334116591082896]
Running BO on high-dimensional or massive data sets becomes intractable due to the time complexity of fitting the surrogate model.
We show a wrapper of memory pruning and bounded optimization that can be used with any surrogate model and acquisition function.
All model implementations are run on MIT SuperCloud's state-of-the-art computing hardware.
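As a hedged illustration of the wrapper idea, the sketch below bounds the surrogate's training set before each fit; the pruning criterion here (keep the incumbent and its neighbors) is a simple stand-in, not the paper's rule.

```python
import numpy as np

def prune_memory(X, y, budget):
    """Shrink the BO memory to `budget` points before surrogate fitting.

    Keeps the incumbent best plus the observations nearest to it.
    Since fitting a GP surrogate costs O(n^3) in the number of
    observations n, bounding n bounds each iteration's cost.
    """
    if len(X) <= budget:
        return X, y
    best = np.argmin(y)
    dist = np.linalg.norm(X - X[best], axis=1)
    keep = np.argsort(dist)[:budget]      # incumbent has distance 0
    return X[keep], y[keep]

rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 8)), rng.normal(size=500)
Xp, yp = prune_memory(X, y, budget=100)
print(Xp.shape, yp.min() == y.min())      # (100, 8) True
```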
arXiv Detail & Related papers (2023-09-08T14:05:56Z)
- Non-Convex Bilevel Optimization with Time-Varying Objective Functions [57.299128109226025]
We propose an online bilevel optimization setting in which the objective functions can be time-varying and the agent continuously updates its decisions with online data.
Compared to existing algorithms, the proposed SOBOW method is computationally efficient and does not need to know previous functions.
We show that SOBOW can achieve a sublinear bilevel local regret under mild conditions.
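A generic single-loop flavor of online bilevel updates can be sketched as follows; the quadratic losses and the simplification of ignoring the inner-solution Jacobian are assumptions for illustration, not SOBOW itself.

```python
import numpy as np

def online_bilevel(rounds, lr_x=0.05, lr_y=0.1):
    """Illustrative single-loop online bilevel updates.

    At round t new losses arrive; the agent takes one lower-level
    step in y and one upper-level step in x, never revisiting past
    functions. Drifting targets stand in for streaming data.
    """
    x, y = 0.0, 0.0
    for t in range(rounds):
        a_t = np.sin(0.1 * t)             # drifting lower-level target
        b_t = np.cos(0.1 * t)             # drifting upper-level target
        y -= lr_y * 2 * (y - (x + a_t))   # inner: min_y (y - x - a_t)^2
        x -= lr_x * 2 * (x + y - b_t)     # outer: min_x (x + y - b_t)^2
        # the outer step ignores dy/dx (a common single-loop
        # simplification); hypergradient estimates differ here
    return x, y

print(online_bilevel(200))
```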
arXiv Detail & Related papers (2023-08-07T06:27:57Z)
- Large-Batch, Iteration-Efficient Neural Bayesian Design Optimization [37.339567743948955]
We present a novel Bayesian optimization framework specifically tailored to address the limitations of BO in large-batch, iteration-limited regimes.
Our key contribution is a highly scalable, sample-based acquisition function that performs a non-dominated sorting of objectives.
We show that our acquisition function in combination with different Bayesian neural network surrogates is effective in data-intensive environments with a minimal number of iterations.
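The core selection step, non-dominated sorting of sampled objective values, can be sketched independently of the Bayesian neural network surrogate, which is only stubbed by random draws here.

```python
import numpy as np

def non_dominated(Y):
    """Boolean mask of non-dominated rows of Y (minimize all columns)."""
    n = len(Y)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # row j dominates row i if it is <= everywhere and < somewhere
        dominated = (Y <= Y[i]).all(axis=1) & (Y < Y[i]).any(axis=1)
        if dominated.any():
            mask[i] = False
    return mask

# stand-in for posterior samples of 2 objectives over 8 candidates,
# as a Bayesian neural network surrogate might produce
rng = np.random.default_rng(0)
Y = rng.normal(size=(8, 2))
batch = np.where(non_dominated(Y))[0]
print("next evaluation batch:", batch)
```

Selecting the whole non-dominated front at once is what makes the acquisition naturally large-batch: many designs are queued per iteration instead of one.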
arXiv Detail & Related papers (2023-06-01T19:10:57Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence with greater stability and accuracy.
Compared with the state-of-the-art baseline, our model uses only 37% as many network parameters, and its average solution gap to the expert solutions decreases from 6.8% to 1.3%.
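For context, the objective a PFSS solver minimizes is the makespan of a job permutation, computed by the classic completion-time recurrence sketched below.

```python
def makespan(order, proc):
    """Makespan of a permutation flow shop schedule.

    `proc[j][m]` is the processing time of job j on machine m; all
    jobs visit machines 0..M-1 in the same order. Completion times
    follow C[j][m] = max(C[j-1][m], C[j][m-1]) + proc[j][m].
    """
    m_count = len(proc[0])
    finish = [0.0] * m_count              # completion time per machine
    for j in order:
        for m in range(m_count):
            prev = finish[m - 1] if m > 0 else 0.0
            finish[m] = max(finish[m], prev) + proc[j][m]
    return finish[-1]

proc = [[3, 2, 2], [1, 4, 2], [3, 1, 3]]  # 3 jobs x 3 machines
print(makespan([0, 1, 2], proc))          # one candidate permutation: 14
print(makespan([1, 0, 2], proc))          # a better permutation: 12
```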
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm, which adopts Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
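The ES idea can be illustrated on a toy bilevel problem: estimate the derivative of the inner solution y*(x) by random perturbations of x (with antithetic sampling) instead of differentiating through the inner optimizer. Everything below is a toy stand-in, not the paper's algorithm.

```python
import numpy as np

def inner_solution(x, steps=50, lr=0.1):
    # toy lower-level solver: min_y (y - x**2)**2  =>  y*(x) = x**2
    y = 0.0
    for _ in range(steps):
        y -= lr * 2.0 * (y - x**2)
    return y

def es_response_grad(x, sigma=0.1, n=64, seed=0):
    """ES estimate of d y*(x) / dx via antithetic perturbations.

    Avoids Jacobian- or Hessian-vector products entirely: the
    derivative is approximated by
    E[eps * (y*(x + sigma*eps) - y*(x - sigma*eps))] / (2*sigma).
    """
    rng = np.random.default_rng(seed)
    eps = rng.normal(size=n)
    diffs = np.array([inner_solution(x + sigma * e) -
                      inner_solution(x - sigma * e) for e in eps])
    return float((eps * diffs).mean() / (2.0 * sigma))

x0 = 1.5
print(es_response_grad(x0), "vs analytic", 2.0 * x0)  # both close to 3.0
```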
arXiv Detail & Related papers (2021-10-13T19:36:50Z)
- Computationally Efficient High-Dimensional Bayesian Optimization via Variable Selection [0.5439020425818999]
We develop a new computationally efficient high-dimensional BO method that exploits variable selection.
Our method is able to automatically learn axis-aligned sub-spaces, i.e., spaces containing only the selected variables.
We empirically show the efficacy of our method on several synthetic and real problems.
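The summary does not specify the selection rule, so the sketch below uses a deliberately simple stand-in (correlation ranking) to show how an axis-aligned subspace could be chosen for the inner BO loop.

```python
import numpy as np

def select_variables(X, y, k=3):
    """Pick the k dimensions most correlated with the objective.

    A simple stand-in for the paper's selection mechanism: rank axes
    by absolute Pearson correlation with y and keep the top k,
    yielding an axis-aligned subspace for the inner BO loop.
    """
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    corr = (Xc * yc[:, None]).sum(axis=0) / (
        np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc) + 1e-12)
    return np.argsort(-np.abs(corr))[:k]

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 20))            # 20-D inputs
y = 5 * X[:, 3] - 2 * X[:, 7] + 0.1 * rng.normal(size=200)
print(select_variables(X, y))              # surfaces dims 3 and 7 first
```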
arXiv Detail & Related papers (2021-09-20T01:55:43Z)
- Scalable Combinatorial Bayesian Optimization with Tractable Statistical Models [44.25245545568633]
We study the problem of optimizing black-box functions over combinatorial spaces (e.g., sets, sequences, trees, and graphs).
Based on recent advances in submodular relaxation, we study a Parametrized Submodular Relaxation (PSR) approach towards the goal of improving the scalability and accuracy of solving acquisition function optimization (AFO) problems for the BOCS model.
Experiments on diverse benchmark problems show significant improvements with PSR for the BOCS model.
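As a rough illustration of the inner AFO problem over binary structures, the sketch below pairs a BOCS-style quadratic surrogate with a plain bit-flip hill climb; PSR's contribution is replacing such brute combinatorial search with a tractable relaxation.

```python
import numpy as np

def acquisition(z, a, B):
    # BOCS-style second-order surrogate over binary z: a.z + z'Bz
    return a @ z + z @ B @ z

def greedy_flip_search(a, B, iters=50, rng=None):
    """Bit-flip hill climb for the acquisition minimum over {0,1}^d.

    A simple stand-in for the inner combinatorial solve that PSR
    addresses with a parametrized submodular relaxation instead.
    """
    rng = rng or np.random.default_rng(0)
    d = len(a)
    z = rng.integers(0, 2, d)
    for _ in range(iters):
        improved = False
        for i in range(d):
            z2 = z.copy()
            z2[i] ^= 1                       # flip one bit
            if acquisition(z2, a, B) < acquisition(z, a, B):
                z, improved = z2, True
        if not improved:
            break
    return z

rng = np.random.default_rng(1)
a = rng.normal(size=8)
B = rng.normal(size=(8, 8)) * 0.3
print(greedy_flip_search(a, B))
```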
arXiv Detail & Related papers (2020-08-18T22:56:46Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by two to five orders of magnitude compared with directly using heuristic optimization methods, and it outperformed all state-of-the-art algorithms tested in our experiments.
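The overall loop can be sketched as follows, with a toy analytic function standing in for the FEM solve and a polynomial least-squares fit standing in for the DNN surrogate; both are assumptions for illustration only.

```python
import numpy as np

def fem_evaluate(x):
    # stand-in for an expensive FEM computation (toy analytic objective)
    return (x - 0.3) ** 2 + 0.05 * np.sin(20 * x)

def fit_surrogate(X, y, degree=5):
    # polynomial least squares stands in for the DNN surrogate
    return np.polyfit(X, y, degree)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 6)                   # initial FEM samples
y = fem_evaluate(X)
for _ in range(8):
    coeffs = fit_surrogate(X, y)           # retrain surrogate on all data
    grid = np.linspace(0, 1, 400)
    x_new = grid[np.argmin(np.polyval(coeffs, grid))]  # surrogate optimum
    X = np.append(X, x_new)                # self-directed: query FEM there
    y = np.append(y, fem_evaluate(x_new))
print("best design:", X[np.argmin(y)], "objective:", y.min())
```

The "self-directed" part is that each new expensive evaluation is placed at the surrogate's current optimum rather than on a fixed sampling plan.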
arXiv Detail & Related papers (2020-02-04T20:00:28Z)