Naive Automated Machine Learning
- URL: http://arxiv.org/abs/2111.14514v1
- Date: Mon, 29 Nov 2021 13:12:54 GMT
- Title: Naive Automated Machine Learning
- Authors: Felix Mohr, Marcel Wever
- Abstract summary: We present Naive AutoML, an approach that does precisely this: it optimizes the different algorithms of a pre-defined pipeline scheme in isolation.
The isolated optimization leads to substantially reduced search spaces.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An essential task of Automated Machine Learning (AutoML) is the problem of
automatically finding the pipeline with the best generalization performance on
a given dataset. This problem has been addressed with sophisticated black-box
optimization techniques such as Bayesian Optimization, Grammar-Based Genetic
Algorithms, and tree search algorithms. Most of the current approaches are
motivated by the assumption that optimizing the components of a pipeline in
isolation may yield sub-optimal results. We present Naive AutoML, an approach
that does precisely this: It optimizes the different algorithms of a
pre-defined pipeline scheme in isolation. The finally returned pipeline is
obtained by just taking the best algorithm of each slot. The isolated
optimization leads to substantially reduced search spaces, and, surprisingly,
this approach yields comparable and sometimes even better performance than
current state-of-the-art optimizers.
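To make the slot-wise procedure concrete, here is a minimal sketch assuming a scikit-learn-style three-slot scheme (scaler, feature selector, learner) scored by cross-validated accuracy; the candidate lists and defaults are illustrative, not the paper's exact search space.

```python
# Minimal sketch of slot-wise ("naive") pipeline optimization.
# The slot scheme, candidates, and defaults are illustrative assumptions.
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.feature_selection import VarianceThreshold, SelectKBest
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression

def naive_automl(X, y, cv=5):
    # Candidate algorithms for each slot of a fixed pipeline scheme.
    slots = {
        "scaler":   ["passthrough", StandardScaler(), MinMaxScaler()],
        "selector": ["passthrough", VarianceThreshold(), SelectKBest(k=10)],
        "learner":  [DecisionTreeClassifier(), LogisticRegression(max_iter=1000)],
    }
    # Trivial defaults used for the slots that are *not* currently optimized.
    defaults = {"scaler": "passthrough", "selector": "passthrough",
                "learner": DecisionTreeClassifier()}

    def score(cfg):
        pipe = Pipeline([(name, cfg[name]) for name in ("scaler", "selector", "learner")])
        return cross_val_score(pipe, X, y, cv=cv).mean()

    # Optimize every slot in isolation, holding all other slots at their defaults.
    chosen = {}
    for slot, candidates in slots.items():
        scores = [score({**defaults, slot: cand}) for cand in candidates]
        chosen[slot] = candidates[scores.index(max(scores))]

    # The returned pipeline is simply the best algorithm of each slot.
    return Pipeline([(name, chosen[name]) for name in ("scaler", "selector", "learner")])
```

Because every candidate is scored with the other slots held at a trivial default, the number of evaluations grows with the sum of the candidate counts per slot rather than with their product, which is where the reduction of the search space comes from.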
Related papers
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
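The "adaptive blend of logistic and exponential losses" can be illustrated with a generic sketch on a DPO-style preference margin; the sigmoid gate and the temperature below are assumptions for illustration, not the exact loss discovered by DiscoPOP.

```python
import torch
import torch.nn.functional as F

def blended_preference_loss(margin, tau=1.0):
    """Illustrative adaptive blend of a logistic and an exponential preference loss.

    `margin` is the scaled log-probability margin between the chosen and the
    rejected response, as in DPO-style objectives. The sigmoid gate and the
    temperature `tau` are assumptions, not the exact DiscoPOP formulation.
    """
    logistic_loss = -F.logsigmoid(margin)   # DPO-style logistic loss
    exponential_loss = torch.exp(-margin)   # exponential (boosting-style) loss
    gate = torch.sigmoid(margin / tau)      # data-dependent mixing weight
    return gate * logistic_loss + (1.0 - gate) * exponential_loss
```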
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Analyzing and Enhancing the Backward-Pass Convergence of Unrolled Optimization [50.38518771642365]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
A central challenge in this setting is backpropagation through the solution of an optimization problem, which often lacks a closed form.
This paper provides theoretical insights into the backward pass of unrolled optimization, showing that it is equivalent to the solution of a linear system by a particular iterative method.
A system called Folded Optimization is proposed to construct more efficient backpropagation rules from unrolled solver implementations.
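The claimed equivalence can be checked on a toy problem: for a parametrized quadratic argmin, the gradient obtained by hand-unrolling gradient descent converges to the gradient given by a single linear solve via the implicit function theorem. This is a generic illustration under those assumptions, not the paper's Folded Optimization system.

```python
# Toy check: backprop through unrolled gradient descent vs. implicit differentiation.
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.normal(size=(n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
theta = rng.normal(size=n)
w = rng.normal(size=n)               # outer loss L(x) = w @ x

# Inner problem: x*(theta) = argmin_x 0.5 x^T A x - theta^T x, so x* = A^{-1} theta.
# Implicit differentiation: dL/dtheta solves the linear system A^T g = w.
implicit_grad = np.linalg.solve(A.T, w)

# Unrolled gradient descent x_{k+1} = x_k - alpha (A x_k - theta), differentiated by hand:
# dx_{k+1}/dtheta = (I - alpha A) dx_k/dtheta + alpha I  -- an iterative linear solve.
alpha = 1.0 / np.linalg.norm(A, 2)
J = np.zeros((n, n))                 # dx_0/dtheta = 0 (x_0 does not depend on theta)
for _ in range(200):
    J = (np.eye(n) - alpha * A) @ J + alpha * np.eye(n)
unrolled_grad = J.T @ w

print(np.allclose(unrolled_grad, implicit_grad, atol=1e-6))  # True
```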
arXiv Detail & Related papers (2023-12-28T23:15:18Z)
- Performance Evaluation of Evolutionary Algorithms for Analog Integrated Circuit Design Optimisation [0.0]
An automated sizing approach for analog circuits is presented in this paper.
A targeted search of the search space has been implemented using a particle generation function and a repair-bounds function.
The algorithms are tuned and modified to converge to a better solution.
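As a generic illustration of the repair-bounds idea mentioned above (not the paper's specific algorithm or circuit benchmark), a particle-swarm step might repair out-of-range design parameters by clamping them back into their bounds:

```python
import numpy as np

def repair_bounds(position, lower, upper):
    # Pull any out-of-range design parameter (e.g., a transistor width) back
    # onto the nearest bound; a hypothetical repair function, for illustration.
    return np.clip(position, lower, upper)

def pso_step(pos, vel, pbest, gbest, lower, upper, w=0.7, c1=1.5, c2=1.5, rng=None):
    # One standard particle-swarm update followed by bound repair.
    rng = rng or np.random.default_rng()
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = repair_bounds(pos + vel, lower, upper)
    return pos, vel
```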
arXiv Detail & Related papers (2023-10-19T03:26:36Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
- Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
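The velocity-level view of constraints can be sketched for a single linear inequality: at each step the descent direction is projected so that the velocity points back into the feasible set. This is a minimal illustration of that property only; the accelerated (momentum) machinery of the paper is omitted, and the constant alpha is an assumption.

```python
import numpy as np

def velocity_constrained_step(x, grad_f, a, b, alpha=1.0, eta=0.1):
    # Impose the constraint a @ x - b <= 0 on the velocity v rather than on the
    # position, by requiring a @ v <= -alpha * g(x). For a single linear
    # inequality the projection has a closed form (illustrative only).
    g = a @ x - b                    # constraint value g(x)
    v = -grad_f(x)                   # unconstrained descent velocity
    violation = a @ v + alpha * g
    if violation > 0:                # velocity would leave (or stay outside) the set
        v = v - (violation / (a @ a)) * a
    return x + eta * v
```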
arXiv Detail & Related papers (2023-02-01T08:50:48Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the search spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- Per-run Algorithm Selection with Warm-starting using Trajectory-based Features [5.073358743426584]
Per-instance algorithm selection seeks to recommend, for a given problem instance, one or several suitable algorithms.
We propose an online algorithm selection scheme which we coin per-run algorithm selection.
We show that our approach outperforms static per-instance algorithm selection.
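A minimal sketch of what such a per-run (online) selection loop could look like, assuming simple best-so-far statistics of the observed trajectory feed a pre-trained selector and the chosen algorithm is warm-started from the incumbent; the feature set and the `selector_model` interface are hypothetical placeholders, not the paper's feature design.

```python
import numpy as np

def per_run_selection(problem, portfolio, default_algo, selector_model,
                      warmup_evals=50, total_evals=500):
    # Phase 1: run a default algorithm and record its trajectory of (x, f(x)) pairs.
    trajectory = default_algo.run(problem, budget=warmup_evals)
    best_x, best_f = min(trajectory, key=lambda t: t[1])

    # Hypothetical trajectory features: statistics of the best-so-far curve.
    best_so_far = np.minimum.accumulate([f for _, f in trajectory])
    features = [best_so_far[-1],                              # current incumbent value
                best_so_far[0] - best_so_far[-1],             # total improvement
                float(np.mean(np.diff(best_so_far) < 0))]     # fraction of improving steps

    # Phase 2: select an algorithm for *this run* and warm-start it at the incumbent.
    chosen = portfolio[selector_model.predict([features])[0]]
    return chosen.run(problem, budget=total_evals - warmup_evals, x0=best_x)
```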
arXiv Detail & Related papers (2022-04-20T14:30:42Z)
- Trajectory-based Algorithm Selection with Warm-starting [2.3823600586675724]
We study the quality and accuracy of performance regression and algorithm selection models in the scenario of predicting different algorithm performances.
We show promising performance of the trajectory-based per-run algorithm selection with warm-starting.
arXiv Detail & Related papers (2022-04-13T14:00:55Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Towards Automatic Bayesian Optimization: A first step involving acquisition functions [0.0]
Bayesian optimization is the state-of-the-art technique for optimizing black boxes, i.e., functions whose analytical expression we cannot access.
We propose a first attempt at automatic Bayesian optimization by exploring several techniques that automatically tune the acquisition function.
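A toy sketch of automatically tuning the acquisition function, assuming a GP surrogate over a 1-D grid and a simple rule that switches between expected improvement and a confidence-bound criterion whenever the current choice fails to improve the incumbent; the switching rule is an assumption for illustration, not the paper's method.

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def auto_bo(f, grid, n_init=3, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    X = list(rng.choice(grid, size=n_init, replace=False))
    y = [f(x) for x in X]
    current = "ei"                                   # start with expected improvement
    for _ in range(n_iter):
        gp = GaussianProcessRegressor(alpha=1e-6, normalize_y=True)
        gp.fit(np.array(X)[:, None], y)
        mu, sigma = gp.predict(grid[:, None], return_std=True)
        best = min(y)
        if current == "ei":
            z = (best - mu) / np.maximum(sigma, 1e-9)
            acq = (best - mu) * norm.cdf(z) + sigma * norm.pdf(z)   # expected improvement
        else:
            acq = -(mu - 2.0 * sigma)                # negated lower confidence bound
        x_next = grid[int(np.argmax(acq))]
        y_next = f(x_next)
        if y_next >= best:                           # no improvement: switch acquisition
            current = "ucb" if current == "ei" else "ei"
        X.append(x_next)
        y.append(y_next)
    return X[int(np.argmin(y))], min(y)
```

For example, `auto_bo(lambda x: (x - 0.3) ** 2, np.linspace(0.0, 1.0, 200))` returns a point close to 0.3 under these assumptions.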
arXiv Detail & Related papers (2020-03-21T12:22:45Z)