Improvement of Computational Performance of Evolutionary AutoML in a
Heterogeneous Environment
- URL: http://arxiv.org/abs/2301.05102v1
- Date: Thu, 12 Jan 2023 15:59:04 GMT
- Title: Improvement of Computational Performance of Evolutionary AutoML in a
Heterogeneous Environment
- Authors: Nikolay O. Nikitin, Sergey Teryoshkin, Valerii Pokrovskii, Sergey
Pakulin, Denis Nasonov
- Abstract summary: We propose a modular approach to increase the quality of evolutionary optimization for modelling pipelines with a graph-based structure.
The implemented algorithms are available as a part of the open-source framework FEDOT.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Resource-intensive computations are a major factor that limits the
effectiveness of automated machine learning solutions. In the paper, we propose
a modular approach that can be used to increase the quality of evolutionary
optimization for modelling pipelines with a graph-based structure. It consists
of several stages - parallelization, caching and evaluation. Heterogeneous and
remote resources can be involved in the evaluation stage. The conducted
experiments confirm the correctness and effectiveness of the proposed approach.
The implemented algorithms are available as a part of the open-source framework
FEDOT.
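The staged design described in the abstract lends itself to a compact illustration. The following is a minimal sketch of how caching and parallelization can be combined in the fitness-evaluation step of an evolutionary loop over pipeline graphs; all names are hypothetical, and the sketch does not reproduce FEDOT's actual internals.

    # Illustrative sketch only (hypothetical names, not FEDOT's internals):
    # cached, parallel fitness evaluation for graph-based modelling pipelines.
    from concurrent.futures import ProcessPoolExecutor

    _fitness_cache = {}  # structural key -> previously computed fitness

    def pipeline_key(pipeline):
        # Identical pipeline structures map to the same key and reuse results.
        return tuple(pipeline)

    def evaluate(pipeline):
        # Placeholder for expensive model fitting, possibly on remote resources.
        return len(set(pipeline)) / (1.0 + len(pipeline))

    def evaluate_population(population, workers=4):
        # Only pipelines missing from the cache are evaluated, in parallel.
        unseen = [p for p in population if pipeline_key(p) not in _fitness_cache]
        with ProcessPoolExecutor(max_workers=workers) as pool:
            for p, fit in zip(unseen, pool.map(evaluate, unseen)):
                _fitness_cache[pipeline_key(p)] = fit
        return [_fitness_cache[pipeline_key(p)] for p in population]

    if __name__ == "__main__":
        pop = [("scaling", "rf"), ("scaling", "rf"), ("pca", "logit")]
        print(evaluate_population(pop))  # the duplicate pipeline hits the cache

In this shape, heterogeneous or remote workers can replace the local process pool without touching the evolutionary loop, which is the point of keeping the evaluation stage modular.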
Related papers
- Reinforcement learning for anisotropic p-adaptation and error estimation in high-order solvers [0.37109226820205005]
We present a novel approach to automate and optimize anisotropic p-adaptation in high-order h/p solvers using Reinforcement Learning (RL).
We develop an offline training approach, decoupled from the main solver, which incurs minimal overhead when performing simulations.
We derive an inexpensive RL-based error estimation approach that enables the quantification of local discretization errors.
arXiv Detail & Related papers (2024-07-26T17:55:23Z)
- Borrowing Strength in Distributionally Robust Optimization via Hierarchical Dirichlet Processes [35.53901341372684]
Our approach unifies regularized estimation, distributionally robust optimization, and hierarchical Bayesian modeling.
By employing a hierarchical Dirichlet process (HDP) prior, the method effectively handles multi-source data.
Numerical experiments validate the framework's efficacy in improving and stabilizing both prediction and parameter estimation accuracy.
arXiv Detail & Related papers (2024-05-21T19:03:09Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the federated learning setting.
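For background, conditional stochastic optimization refers to the minimization of a nested expectation (this is the standard formulation of the problem class, not a result of this paper):

    \min_{x} \; \mathbb{E}_{\xi}\Big[ f_{\xi}\big( \mathbb{E}_{\eta \mid \xi}\big[ g_{\xi}(x, \eta) \big] \big) \Big]

The inner expectation is conditioned on the outer sample \xi, which makes unbiased gradient estimation nontrivial, especially when the data is distributed across federated clients.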
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Enhancing Multi-Objective Optimization through Machine Learning-Supported Multiphysics Simulation [1.6685829157403116]
This paper presents a methodological framework for training, self-optimising, and self-organising surrogate models.
We show that surrogate models can be trained on relatively small amounts of data to approximate the underlying simulations accurately.
arXiv Detail & Related papers (2023-09-22T20:52:50Z)
- Efficient Model-Free Exploration in Low-Rank MDPs [76.87340323826945]
Low-Rank Markov Decision Processes offer a simple, yet expressive framework for RL with function approximation.
Existing algorithms are either (1) computationally intractable, or (2) reliant upon restrictive statistical assumptions.
We propose the first provably sample-efficient algorithm for exploration in Low-Rank MDPs.
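For background, a low-rank MDP assumes the transition kernel factorizes through unknown d-dimensional feature maps (a standard definition, not specific to this paper):

    P(s' \mid s, a) = \langle \phi(s, a), \mu(s') \rangle, \qquad \phi(s, a), \mu(s') \in \mathbb{R}^{d}

Unlike linear MDPs, where \phi is known, both \phi and \mu must be learned here, which is what makes sample-efficient exploration challenging.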
arXiv Detail & Related papers (2023-07-08T15:41:48Z)
- Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
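To make the unrolling strategy concrete, here is a minimal, self-contained sketch of standard algorithm unrolling (not the paper's folded-optimization system): automatic differentiation is run through the iterations of an inner gradient-descent solver.

    # Unrolled inner solver: every iteration stays on the autodiff tape,
    # so gradients flow from the outer loss back to the solver's inputs.
    import torch

    def inner_solver(c, steps=50, lr=0.1):
        # Solves min_x 0.5*||x||^2 - c.x; the minimizer is x* = c.
        x = torch.zeros_like(c)
        for _ in range(steps):
            x = x - lr * (x - c)  # gradient step on the inner objective
        return x

    c = torch.tensor([1.0, -2.0], requires_grad=True)
    x_star = inner_solver(c)
    loss = x_star.sum()           # any downstream (outer) loss
    loss.backward()
    print(c.grad)                 # ~[1, 1]: d(x*)/dc flowed through the solver

The folded-optimization idea above replaces this step-by-step tape with an analytical model of the backward pass, avoiding the memory and time cost of storing every iteration.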
arXiv Detail & Related papers (2023-01-28T01:50:42Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both a tractable variational learning algorithm and an effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- Stabilizing Q-learning with Linear Architectures for Provably Efficient Learning [53.17258888552998]
This work proposes an exploration variant of the basic $Q$-learning protocol with linear function approximation.
We show that the performance of the algorithm degrades very gracefully under a novel and more permissive notion of approximation error.
arXiv Detail & Related papers (2022-06-01T23:26:51Z)
- Automated Evolutionary Approach for the Design of Composite Machine Learning Pipelines [48.7576911714538]
The proposed approach aims to automate the design of composite machine learning pipelines.
It designs the pipelines with a customizable graph-based structure, analyzes the obtained results, and reproduces them.
The software implementation of this approach is presented as an open-source framework.
arXiv Detail & Related papers (2021-06-26T23:19:06Z)
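Since this entry and the main paper both point to FEDOT, below is a minimal usage sketch based on FEDOT's documented high-level API (https://github.com/aimclub/FEDOT); exact argument names may differ between releases, so treat it as illustrative.

    # Minimal FEDOT usage sketch: evolutionary search over composite,
    # graph-structured pipelines for a classification task.
    from fedot.api.main import Fedot
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    auto_model = Fedot(problem="classification", timeout=5)  # budget in minutes
    auto_model.fit(features=X_train, target=y_train)
    predictions = auto_model.predict(features=X_test)
    print(auto_model.get_metrics())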
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.