Flexible numerical optimization with ensmallen
- URL: http://arxiv.org/abs/2003.04103v4
- Date: Wed, 15 Nov 2023 14:51:17 GMT
- Title: Flexible numerical optimization with ensmallen
- Authors: Ryan R. Curtin, Marcus Edel, Rahul Ganesh Prabhu, Suryoday Basak,
Zhihao Lou, Conrad Sanderson
- Abstract summary: This report provides an introduction to the ensmallen numerical optimization library.
The library provides a fast and flexible C++ framework for mathematical optimization of arbitrary user-supplied functions.
- Score: 15.78308411537254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This report provides an introduction to the ensmallen numerical optimization
library, as well as a deep dive into the technical details of how it works. The
library provides a fast and flexible C++ framework for mathematical
optimization of arbitrary user-supplied functions. A large set of pre-built
optimizers is provided, including many variants of Stochastic Gradient Descent
and Quasi-Newton optimizers. Several types of objective functions are
supported, including differentiable, separable, constrained, and categorical
objective functions. Implementation of a new optimizer requires only one
method, while a new objective function typically requires only one or two C++
methods. Through internal use of C++ template metaprogramming, ensmallen
provides support for arbitrary user-supplied callbacks and automatic inference
of unsupplied methods without any runtime overhead. Empirical comparisons show
that ensmallen outperforms other optimization frameworks (such as Julia and
SciPy), sometimes by large margins. The library is available at
https://ensmallen.org and is distributed under the permissive BSD license.
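To make the interface described in the abstract concrete, the following is a minimal sketch (illustrative only, not taken from the report itself) of a user-supplied differentiable objective using the Evaluate()/Gradient() pair and the built-in L-BFGS optimizer documented at ensmallen.org; the class and variable names are placeholders.

```cpp
// Minimal sketch: a differentiable objective f(x) = x^T x exposed through the
// Evaluate()/Gradient() pair that ensmallen expects, minimized with the
// built-in L-BFGS optimizer.
#include <ensmallen.hpp>

class SquaredNormFunction
{
 public:
  // Objective value at the given coordinates.
  double Evaluate(const arma::mat& x) { return arma::dot(x, x); }

  // Gradient of the objective, written into 'g'.
  void Gradient(const arma::mat& x, arma::mat& g) { g = 2 * x; }
};

int main()
{
  SquaredNormFunction f;
  arma::mat coordinates = arma::randu<arma::mat>(10, 1);  // starting point

  ens::L_BFGS optimizer;
  // User-supplied callbacks, when desired, are passed as extra trailing
  // arguments to Optimize().
  optimizer.Optimize(f, coordinates);

  coordinates.print("solution (should be near zero):");
  return 0;
}
```

Because the objective supplies only these two methods, the same class can be handed unchanged to any of the library's optimizers for differentiable functions.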
Related papers
- LibMOON: A Gradient-based MultiObjective OptimizatioN Library in PyTorch [19.499639344055275]
Multiobjective optimization problems (MOPs) are prevalent in machine learning.
This paper introduces LibMOON, the first multiobjective optimization library that supports state-of-the-art gradient-based methods.
arXiv Detail & Related papers (2024-09-04T07:44:43Z)
- How to Boost Any Loss Function [63.573324901948716]
We show that loss functions can be efficiently optimized with boosting.
We also show that boosting can achieve a feat not yet known to be possible in the classical zeroth-order setting.
arXiv Detail & Related papers (2024-07-02T14:08:23Z)
- CompilerDream: Learning a Compiler World Model for General Code Optimization [58.87557583347996]
We introduce CompilerDream, a model-based reinforcement learning approach to general code optimization.
It comprises a compiler world model that accurately simulates the intrinsic properties of optimization passes and an agent trained on this model to produce effective optimization strategies.
It excels across diverse datasets, surpassing LLVM's built-in optimizations and other state-of-the-art methods in both value prediction and end-to-end code optimization.
arXiv Detail & Related papers (2024-04-24T09:20:33Z)
- NUBO: A Transparent Python Package for Bayesian Optimization [0.0]
NUBO is a framework for optimizing black-box functions, such as physical experiments and computer simulators.
It focuses on transparency and user experience to make Bayesian optimization accessible to researchers from all disciplines.
NUBO is written in Python but does not require expert knowledge of Python to optimize simulators and experiments.
arXiv Detail & Related papers (2023-05-11T10:34:27Z)
- pysamoo: Surrogate-Assisted Multi-Objective Optimization in Python [7.8140593450932965]
pysamoo is a proposed framework for solving computationally expensive optimization problems.
pysamoo provides multiple optimization methods for handling problems involving time-consuming evaluation functions.
For more information about pysamoo, readers are encouraged to visit: anyoptimization.com/projects/pysamoo.
arXiv Detail & Related papers (2022-04-12T14:55:57Z)
- Submodlib: A Submodular Optimization Library [17.596860081700115]
Submodlib is an open-source, easy-to-use, efficient and scalable Python library for submodular optimization.
Submodlib finds application in summarization, data subset selection, hyperparameter tuning, efficient training, and more.
arXiv Detail & Related papers (2022-02-22T05:48:12Z)
- The ensmallen library for flexible numerical optimization [15.78308411537254]
We overview the ensmallen numerical optimization library, which provides a flexible C++ framework for mathematical optimization of user-supplied objective functions.
Many types of objective functions are supported, including general, differentiable, separable, constrained, and categorical.
arXiv Detail & Related papers (2021-08-30T03:49:21Z)
- Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real-world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z)
- Why Do Local Methods Solve Nonconvex Problems? [54.284687261929115]
Nonconvex optimization is ubiquitous in modern machine learning.
We hypothesize a unified explanation for why local methods nonetheless succeed on these problems.
We rigorously formalize this hypothesis for concrete instances of machine learning problems.
arXiv Detail & Related papers (2021-03-24T19:34:11Z)
- Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces convergence time by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.