An Adaptive and Near Parameter-free Evolutionary Computation Approach Towards True Automation in AutoML
- URL: http://arxiv.org/abs/2001.10178v1
- Date: Tue, 28 Jan 2020 05:44:53 GMT
- Title: An Adaptive and Near Parameter-free Evolutionary Computation Approach Towards True Automation in AutoML
- Authors: Benjamin Patrick Evans, Bing Xue, Mengjie Zhang
- Abstract summary: A common claim of evolutionary computation methods is that they can achieve good results without the need for human intervention.
We propose a near "parameter-free" genetic programming approach, which adapts the hyperparameter values throughout evolution without ever needing to be specified manually.
We apply this to the area of automated machine learning (by extending TPOT) to produce pipelines which can effectively be claimed to be free from human input.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common claim of evolutionary computation methods is that they can achieve
good results without the need for human intervention. However, one criticism of
this is that there are still hyperparameters which must be tuned in order to
achieve good performance. In this work, we propose a near "parameter-free"
genetic programming approach, which adapts the hyperparameter values throughout
evolution without ever needing to be specified manually. We apply this to the
area of automated machine learning (by extending TPOT), to produce pipelines
which can effectively be claimed to be free from human input, and show that the
results are competitive with existing state-of-the-art methods that use hand-selected
hyperparameter values. Pipelines begin with a randomly chosen estimator and
evolve to competitive pipelines automatically. This work moves towards a truly
automatic approach to AutoML.
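As a rough illustration of the kind of self-adaptation the abstract describes, the sketch below evolves each individual's own mutation rate alongside its genome, using a standard log-normal self-adaptation scheme from evolution strategies, so no mutation rate is ever set by hand. This is a toy stand-in under stated assumptions, not the authors' TPOT extension; `fitness` and all constants are placeholders.

```python
import random
import math

# Toy self-adaptive evolution: each individual carries its own mutation
# rate, which is itself mutated, so no hyperparameter is fixed by hand.
# This is a generic log-normal self-adaptation scheme (standard in
# evolution strategies), NOT the paper's exact TPOT extension.

def fitness(genome):
    # Placeholder objective: maximize closeness to 0.5 in every gene.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(individual):
    genome, rate = individual
    # Self-adapt the mutation rate first (log-normal perturbation) ...
    new_rate = min(1.0, max(1e-3, rate * math.exp(random.gauss(0, 0.2))))
    # ... then mutate each gene with the individual's own, evolved rate.
    new_genome = [g + random.gauss(0, 0.1) if random.random() < new_rate else g
                  for g in genome]
    return (new_genome, new_rate)

def evolve(pop_size=20, genome_len=5, generations=50):
    # Start from random individuals with random initial mutation rates,
    # mirroring the paper's "begin with a randomly chosen estimator" idea.
    pop = [([random.random() for _ in range(genome_len)], random.random())
           for _ in range(pop_size)]
    for _ in range(generations):
        offspring = [mutate(random.choice(pop)) for _ in range(pop_size)]
        pop = sorted(pop + offspring, key=lambda ind: fitness(ind[0]),
                     reverse=True)[:pop_size]
    return pop[0]

best_genome, best_rate = evolve()
print("best fitness:", fitness(best_genome), "evolved mutation rate:", best_rate)
```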
Related papers
- NiaAutoARM: Automated generation and evaluation of Association Rule Mining pipelines
We propose a novel Automated Machine Learning method, NiaAutoARM, for constructing full association rule mining pipelines.
Along with the theoretical representation of the proposed method, we also present a comprehensive experimental evaluation.
arXiv Detail & Related papers (2024-12-30T20:48:51Z)
- Adaptive Preference Scaling for Reinforcement Learning with Human Feedback
Reinforcement learning from human feedback (RLHF) is a prevalent approach to align AI systems with human values.
We propose a novel adaptive preference loss, underpinned by distributionally robust optimization (DRO).
Our method is versatile and can be readily adapted to various preference optimization frameworks.
arXiv Detail & Related papers (2024-06-04T20:33:22Z)
- ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections
We propose the ETHER transformation family, which performs Efficient fineTuning via HypErplane Reflections.
In particular, we introduce ETHER and its relaxation ETHER+, which match or outperform existing PEFT methods with significantly fewer parameters (a minimal sketch of the reflection mechanism appears after this list).
arXiv Detail & Related papers (2024-05-30T17:26:02Z)
- AutoFT: Learning an Objective for Robust Fine-Tuning
Foundation models encode rich representations that can be adapted to downstream tasks by fine-tuning.
Current approaches to robust fine-tuning use hand-crafted regularization techniques.
We propose AutoFT, a data-driven approach for robust fine-tuning.
arXiv Detail & Related papers (2024-01-18T18:58:49Z)
- AutoRL Hyperparameter Landscapes
Reinforcement Learning (RL) has been shown to be capable of producing impressive results, but its use is limited by the impact of its hyperparameters on performance.
We propose an approach to build and analyze these hyperparameter landscapes not just for one point in time but at multiple points in time throughout training.
This supports the theory that hyperparameters should be adjusted dynamically during training, and shows the potential for further insights into AutoRL problems through landscape analyses.
arXiv Detail & Related papers (2023-04-05T12:14:41Z)
- Hyper-Parameter Auto-Tuning for Sparse Bayesian Learning
We design and learn a neural network (NN)-based auto-tuner for hyper-parameter tuning in sparse Bayesian learning.
We show that considerable improvement in convergence rate and recovery performance can be achieved.
arXiv Detail & Related papers (2022-11-09T12:34:59Z)
- Good Intentions: Adaptive Parameter Management via Intent Signaling
We propose a novel intent signaling mechanism that integrates naturally into existing machine learning stacks.
We then describe AdaPM, a fully adaptive, zero-tuning parameter manager based on this mechanism.
In our evaluation, AdaPM matched or outperformed state-of-the-art parameter managers out of the box.
arXiv Detail & Related papers (2022-06-01T13:02:19Z)
- Online AutoML: An adaptive AutoML framework for online learning
This study aims to automate pipeline design for online learning while continuously adapting to data drift.
This system combines the inherent adaptation capabilities of online learners with the fast automated pipeline (re)optimization capabilities of AutoML.
arXiv Detail & Related papers (2022-01-24T15:37:20Z)
- HyP-ABC: A Novel Automated Hyper-Parameter Tuning Algorithm Using Evolutionary Optimization
We propose HyP-ABC, an automatic hybrid hyper-parameter optimization algorithm using the modified artificial bee colony approach.
Compared to the state-of-the-art techniques, HyP-ABC is more efficient and has a limited number of parameters to be tuned.
arXiv Detail & Related papers (2021-09-11T16:45:39Z)
- Hyperboost: Hyperparameter Optimization by Gradient Boosting surrogate models
Current state-of-the-art methods leverage Random Forests or Gaussian processes to build a surrogate model.
We propose a new surrogate model based on gradient boosting.
We demonstrate empirically that the new method is able to outperform some state-of-the-art techniques across a reasonably sized set of classification problems (a minimal sketch of this surrogate-based loop appears after this list).
arXiv Detail & Related papers (2021-01-06T22:07:19Z)
- Automatic Hyper-Parameter Optimization Based on Mapping Discovery from Data to Hyper-Parameters
We propose an efficient automatic parameter optimization approach, based on the mapping from data to the corresponding hyper-parameters.
We show that the proposed approaches outperform the state-of-the-art approaches significantly.
arXiv Detail & Related papers (2020-03-03T19:26:23Z)
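For the ETHER entry above, a minimal sketch of finetuning by hyperplane (Householder) reflection is shown below: a single learnable unit vector u defines the reflection H = I - 2uu^T, which is applied to a frozen pretrained weight. This illustrates only the reflection mechanism under assumed toy shapes and names, not the paper's training procedure or its ETHER+ relaxation.

```python
import numpy as np

# Hyperplane-reflection finetuning, in miniature: a single unit vector u
# defines the Householder reflection H = I - 2 u u^T, and the frozen
# pretrained weight W is replaced by H @ W. An illustration of the
# mechanism only; shapes and names here are assumptions.

d = 4
rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))        # frozen pretrained weight (toy)

u = rng.standard_normal(d)
u /= np.linalg.norm(u)                 # the only "trainable" parameters: one unit vector

H = np.eye(d) - 2.0 * np.outer(u, u)   # Householder reflection matrix

W_finetuned = H @ W

# H is orthogonal, so the transformation preserves the Frobenius norm of W:
# finetuning cannot blow up the pretrained weights.
print(np.allclose(H @ H.T, np.eye(d)))                              # True
print(np.isclose(np.linalg.norm(W), np.linalg.norm(W_finetuned)))   # True
```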
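The Hyperboost entry above names the general surrogate-based recipe: fit a model to (configuration, score) pairs and use it to choose the next configuration to evaluate. Below is a minimal sketch of that loop with a gradient-boosting surrogate, assuming scikit-learn; `evaluate` is a hypothetical stand-in for an expensive train/validate run, and the greedy acquisition omits the exploration strategies a real optimizer would use. It illustrates the idea, not Hyperboost's exact algorithm.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def evaluate(x):
    # Toy objective over a 1-D "hyperparameter": best value near x = 0.3.
    # In practice this would train and validate a model.
    return -(x - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(5, 1))          # initial random configurations
y = np.array([evaluate(x[0]) for x in X])   # their observed scores

for _ in range(20):
    # Refit the gradient-boosting surrogate on all observations so far.
    surrogate = GradientBoostingRegressor().fit(X, y)
    candidates = rng.uniform(0, 1, size=(100, 1))
    # Greedy acquisition: evaluate the candidate the surrogate likes best.
    best = candidates[np.argmax(surrogate.predict(candidates))]
    X = np.vstack([X, best])
    y = np.append(y, evaluate(best[0]))

print("best hyperparameter found:", X[np.argmax(y)][0])
```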