PyBADS: Fast and robust black-box optimization in Python
- URL: http://arxiv.org/abs/2306.15576v1
- Date: Tue, 27 Jun 2023 15:54:44 GMT
- Title: PyBADS: Fast and robust black-box optimization in Python
- Authors: Gurjeet Sangra Singh, Luigi Acerbi
- Abstract summary: PyBADS is an implementation of the Bayesian Adaptive Direct Search (BADS) algorithm for fast and robust black-box optimization.
It comes with an easy-to-use Python interface for running the algorithm and inspecting its results.
- Score: 11.4219428942199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: PyBADS is a Python implementation of the Bayesian Adaptive Direct Search
(BADS) algorithm for fast and robust black-box optimization (Acerbi and Ma
2017). BADS is an optimization algorithm designed to efficiently solve
difficult optimization problems where the objective function is rough
(non-convex, non-smooth), mildly expensive (e.g., the function evaluation
requires more than 0.1 seconds), possibly noisy, and gradient information is
unavailable. With BADS, these issues are well addressed, making it an excellent
choice for fitting computational models using methods such as
maximum-likelihood estimation. The algorithm scales efficiently to black-box
functions with up to $D \approx 20$ continuous input parameters and supports
bounds or no constraints. PyBADS comes with an easy-to-use Pythonic
interface for running the algorithm and inspecting its results. PyBADS only
requires the user to provide a Python function for evaluating the target
function, and optionally other constraints.
Extensive benchmarks on both artificial test problems and large real
model-fitting problems drawn from cognitive, behavioral, and
computational neuroscience show that BADS performs on par with or better than
many other common and state-of-the-art optimizers (Acerbi and Ma 2017), making
it a general model-fitting tool which provides fast and robust solutions.
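The abstract describes BADS as a mesh-based direct search augmented with a Bayesian surrogate. As a rough illustration of the direct-search backbone only (not the Bayesian surrogate, and not the actual PyBADS API, which per its documentation is invoked through a `BADS` class), here is a minimal coordinate pattern-search sketch in plain Python:

```python
def direct_search(f, x0, mesh=1.0, tol=1e-6, max_iter=1000):
    """Minimal coordinate pattern search: poll +/- mesh along each axis,
    expand the mesh after a successful poll and contract it after a
    failed one. This is only the direct-search skeleton that BADS
    builds on; BADS itself adds a Bayesian surrogate model."""
    x, fx = list(x0), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for step in (mesh, -mesh):
                cand = list(x)
                cand[i] += step
                fc = f(cand)
                if fc < fx:
                    x, fx, improved = cand, fc, True
        if improved:
            mesh *= 2.0      # successful poll: expand the mesh
        else:
            mesh *= 0.5      # failed poll: contract the mesh
            if mesh < tol:
                break
    return x, fx

# Example: a smooth 2-D quadratic with its minimum at (1, -2).
quad = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 2.0) ** 2
xbest, fbest = direct_search(quad, [0.0, 0.0])
```

The expand/contract mesh schedule is what lets such methods cope with rough objectives without gradient information; BADS replaces the blind polling above with surrogate-guided search steps.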
Related papers
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z) - PyVBMC: Efficient Bayesian inference in Python [8.924669503280333]
PyVBMC is a Python implementation of the Variational Bayesian Monte Carlo (VBMC) algorithm for posterior and model inference.
VBMC is designed for efficient parameter estimation and model assessment when model evaluations are mildly-to-very expensive.
arXiv Detail & Related papers (2023-03-16T17:37:22Z) - PyEPO: A PyTorch-based End-to-End Predict-then-Optimize Library for
Linear and Integer Programming [9.764407462807588]
We present the PyEPO package, a PyTorch-based end-to-end predict-then-optimize library in Python.
PyEPO is the first such generic tool for linear and integer programming with predicted objective function coefficients.
arXiv Detail & Related papers (2022-06-28T18:33:55Z) - Speeding Up OPFython with Numba [0.0]
Optimum-Path Forest (OPF) has proven to be a state-of-the-art algorithm comparable to Logistic Regression and Support Vector Machines.
Recently, its Python-based version, denoted as OPFython, has been proposed to provide a more friendly framework and a faster prototyping environment.
This paper proposes a simple yet highly efficient speed up using the Numba package, which accelerates Numpy-based calculations and attempts to increase the algorithm's overall performance.
arXiv Detail & Related papers (2021-06-22T14:39:32Z) - Bayesian Optimistic Optimisation with Exponentially Decaying Regret [58.02542541410322]
The current practical BO algorithms have regret bounds ranging from $\mathcal{O}(\frac{\log N}{\sqrt{N}})$ to $\mathcal{O}(e^{-\sqrt{N}})$, where $N$ is the number of evaluations.
This paper explores the possibility of improving the regret bound in the noiseless setting by intertwining concepts from BO and tree-based optimistic optimisation.
We propose the BOO algorithm, a first practical approach which can achieve an exponential regret bound with order $\mathcal{O}(N^{-\sqrt{N}})$.
arXiv Detail & Related papers (2021-05-10T13:07:44Z) - Bayesian Algorithm Execution: Estimating Computable Properties of
Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Scalable Combinatorial Bayesian Optimization with Tractable Statistical
models [44.25245545568633]
We study the problem of optimizing black-box functions over combinatorial spaces (e.g., sets, sequences, trees, and graphs).
Based on recent advances in submodular relaxation, we study a Parametrized Submodular Relaxation (PSR) approach toward improving the scalability and accuracy of solving AFO problems for the BOCS model.
Experiments on diverse benchmark problems show significant improvements with PSR for BOCS model.
arXiv Detail & Related papers (2020-08-18T22:56:46Z) - A Two-Timescale Framework for Bilevel Optimization: Complexity Analysis
and Application to Actor-Critic [142.1492359556374]
Bilevel optimization is a class of problems which exhibit a two-level structure.
We propose a two-timescale approximation (TTSA) algorithm for tackling such a bilevel problem.
We show that a two-timescale natural actor-critic policy optimization algorithm can be viewed as a special case of our TTSA framework.
arXiv Detail & Related papers (2020-07-10T05:20:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.