Integration Of Evolutionary Automated Machine Learning With Structural
Sensitivity Analysis For Composite Pipelines
- URL: http://arxiv.org/abs/2312.14770v1
- Date: Fri, 22 Dec 2023 15:39:03 GMT
- Title: Integration Of Evolutionary Automated Machine Learning With Structural
Sensitivity Analysis For Composite Pipelines
- Authors: Nikolay O. Nikitin, Maiia Pinchuk, Valerii Pokrovskii, Peter
Shevchenko, Andrey Getmanov, Yaroslav Aksenkin, Ilia Revin, Andrey Stebenkov,
Ekaterina Poslavskaya, Anna V. Kalyuzhnaya
- Abstract summary: AutoML creates either fixed or flexible pipelines for a given machine learning problem.
Flexible pipelines can be structurally overcomplicated and have poor explainability.
We propose the EVOSA approach, which compensates for the drawbacks of flexible pipelines by incorporating a sensitivity analysis.
- Score: 0.38696580294804606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated machine learning (AutoML) systems propose an end-to-end solution to
a given machine learning problem, creating either fixed or flexible pipelines.
Fixed pipelines are task-independent constructs: their general composition
remains the same, regardless of the data. In contrast, the structure of
flexible pipelines varies depending on the input, making them finely tailored
to individual tasks. However, flexible pipelines can be structurally
overcomplicated and have poor explainability. We propose the EVOSA approach
that compensates for the negative points of flexible pipelines by incorporating
a sensitivity analysis which increases the robustness and interpretability of
the flexible solutions. EVOSA quantitatively estimates the positive and negative
impact of an edge or a node on a pipeline graph, and feeds this information to
the evolutionary AutoML optimizer. The correctness and efficiency of EVOSA were
validated in tabular, multimodal and computer vision tasks, suggesting
generalizability of the proposed approach across domains.
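The sensitivity analysis described in the abstract can be illustrated with a toy sketch: remove each node from a pipeline, re-evaluate fitness, and treat the score change as that node's impact. The operation names and gain values below are invented for illustration and are not the EVOSA/FEDOT API.

```python
# Minimal sketch of structural sensitivity analysis on a pipeline.
# All names and numbers are illustrative stand-ins.

def evaluate(nodes):
    """Toy fitness: each operation contributes a fixed gain (a stand-in
    for cross-validated pipeline quality)."""
    gains = {"scaling": 0.02, "pca": -0.01, "rf": 0.70}
    return sum(gains[n] for n in nodes)

def node_sensitivity(nodes):
    """Impact of each node = score drop when that node is removed.
    A negative impact (score improves on removal) flags a redundant node."""
    base = evaluate(nodes)
    return {n: base - evaluate([m for m in nodes if m != n]) for n in nodes}

pipeline = ["scaling", "pca", "rf"]
impacts = node_sensitivity(pipeline)
# 'pca' has negative impact here, so an evolutionary optimizer could be
# biased toward pruning it in the next generation.
pruned = [n for n in pipeline if impacts[n] > 0]
```

In EVOSA these per-node and per-edge estimates steer mutation toward removing harmful structure, rather than simply pruning after the fact as in this sketch.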
Related papers
- Inducing Point Operator Transformer: A Flexible and Scalable
Architecture for Solving PDEs [7.152311859951986]
We introduce an attention-based model called an inducing-point operator transformer (IPOT).
IPOT is designed to handle any input function and output query while capturing global interactions in a computationally efficient way.
By detaching the inputs/outputs discretizations from the processor with a smaller latent bottleneck, IPOT offers flexibility in processing arbitrary discretizations.
arXiv Detail & Related papers (2023-12-18T06:57:31Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Backpropagation of Unrolled Solvers with Folded Optimization [55.04219793298687]
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - SOLIS -- The MLOps journey from data acquisition to actionable insights [62.997667081978825]
Existing approaches, however, do not supply the needed procedures and pipelines for the actual deployment of machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor framework and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z) - Automated Evolutionary Approach for the Design of Composite Machine
Learning Pipelines [48.7576911714538]
The proposed approach aims to automate the design of composite machine learning pipelines.
It designs pipelines with a customizable graph-based structure, analyzes the obtained results, and reproduces them.
The software implementation of this approach is presented as an open-source framework.
arXiv Detail & Related papers (2021-06-26T23:19:06Z) - FENXI: Deep-learning Traffic Analytics at the Edge [69.34903175081284]
We present FENXI, a system to run complex analytics by leveraging TPUs.
FENXI decouples operations and traffic analytics, which operate at different granularities.
Our analysis shows that FENXI can sustain forwarding line rate traffic processing requiring only limited resources.
arXiv Detail & Related papers (2021-05-25T08:02:44Z) - Incremental Search Space Construction for Machine Learning Pipeline
Synthesis [4.060731229044571]
Automated machine learning (AutoML) aims to construct machine learning (ML) pipelines automatically.
We propose a data-centric approach based on meta-features for pipeline construction.
We prove the effectiveness and competitiveness of our approach on 28 data sets used in well-established AutoML benchmarks.
arXiv Detail & Related papers (2021-01-26T17:17:49Z) - AutoWeka4MCPS-AVATAR: Accelerating Automated Machine Learning Pipeline
Composition and Optimisation [13.116806430326513]
We propose a novel method to evaluate the validity of ML pipelines, without their execution, using a surrogate model (AVATAR).
The AVATAR generates a knowledge base by automatically learning the capabilities and effects of ML algorithms on datasets' characteristics.
Instead of executing the original ML pipeline to evaluate its validity, the AVATAR evaluates its surrogate model constructed by capabilities and effects of the ML pipeline components.
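A toy version of this execution-free validity check can be sketched as follows: each preprocessing step declares which data traits it resolves, and each final model declares traits it cannot handle; a pipeline is valid if no forbidden trait survives the chain. The component names and traits below are illustrative, not AVATAR's actual knowledge base.

```python
# Illustrative AVATAR-style surrogate: judge pipeline validity from declared
# component capabilities/effects instead of running the pipeline.

RESOLVES = {                       # effect of each preprocessing step
    "imputer": {"missing_values"},
    "one_hot": {"categorical"},
}
FORBIDS = {                        # traits each final model cannot handle
    "svm": {"missing_values", "categorical"},
    "random_forest": {"missing_values"},
}

def is_valid(pipeline, data_traits):
    """Chain the declared effects over the data traits; the pipeline is
    valid iff no trait forbidden by the final model survives."""
    traits = set(data_traits)
    for step in pipeline[:-1]:
        traits -= RESOLVES.get(step, set())
    return not (traits & FORBIDS[pipeline[-1]])
```

Because invalid candidates are rejected by a table lookup rather than a training run, an AutoML optimizer can discard them at negligible cost.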
arXiv Detail & Related papers (2020-11-21T14:05:49Z) - Controlling for sparsity in sparse factor analysis models: adaptive
latent feature sharing for piecewise linear dimensionality reduction [2.896192909215469]
We propose a simple and tractable parametric feature allocation model which can address key limitations of current latent feature decomposition techniques.
We derive a novel adaptive factor analysis (aFA), as well as an adaptive probabilistic principal component analysis (aPPCA), capable of flexible structure discovery and dimensionality reduction.
We show that aPPCA and aFA can infer interpretable high level features both when applied on raw MNIST and when applied for interpreting autoencoder features.
arXiv Detail & Related papers (2020-06-22T16:09:11Z) - Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastically perturbed optimizers, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
arXiv Detail & Related papers (2020-02-20T11:11:32Z)
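The perturbation idea admits a compact sketch: replace a hard argmax with the expectation of the argmax under Gaussian noise, estimated by Monte Carlo, which yields a map that varies smoothly with its input. The function below is a minimal stdlib-only illustration under that reading, not the paper's implementation.

```python
import random

def perturbed_argmax(theta, sigma=1.0, n_samples=2000, seed=0):
    """Monte Carlo estimate of E[one_hot(argmax(theta + sigma * Z))],
    with Z ~ N(0, I). Unlike the hard argmax, this map is never locally
    constant in theta, so it admits gradient estimates for end-to-end
    learning."""
    rng = random.Random(seed)
    probs = [0.0] * len(theta)
    for _ in range(n_samples):
        perturbed = [t + sigma * rng.gauss(0.0, 1.0) for t in theta]
        winner = max(range(len(perturbed)), key=perturbed.__getitem__)
        probs[winner] += 1.0 / n_samples
    return probs
```

With a clear leading score, most noisy draws still pick the same index, so the estimate concentrates near the hard argmax while remaining smooth as the scores change.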
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.