Cyclic Boosting -- an explainable supervised machine learning algorithm
- URL: http://arxiv.org/abs/2002.03425v3
- Date: Tue, 5 Jan 2021 16:17:14 GMT
- Title: Cyclic Boosting -- an explainable supervised machine learning algorithm
- Authors: Felix Wick and Ulrich Kerzel and Michael Feindt
- Abstract summary: We propose the novel "Cyclic Boosting" machine learning algorithm.
It efficiently performs accurate regression and classification tasks while at the same time allowing a detailed understanding of how each individual prediction was made.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised machine learning algorithms have seen spectacular advances and
surpassed human level performance in a wide range of specific applications.
However, using complex ensemble or deep learning algorithms typically results
in black box models, where the path leading to individual predictions cannot be
followed in detail. In order to address this issue, we propose the novel
"Cyclic Boosting" machine learning algorithm, which efficiently performs
accurate regression and classification tasks while at the same time allowing
a detailed understanding of how each individual prediction was made.
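The explainability described in the abstract comes from Cyclic Boosting's multiplicative structure: each prediction is a global mean times one learned factor per feature bin, so every feature's contribution can be read off directly. The following is a minimal sketch of that core idea only, not the paper's full algorithm (which also covers binning strategies, factor smoothing, link functions, and classification); all function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def cyclic_boosting_fit(X_binned, y, n_iterations=10):
    """Fit one multiplicative factor per feature bin, cycling over features.

    X_binned: integer array (n_samples, n_features), columns pre-binned.
    y: non-negative regression targets.
    Returns the global mean and a list of per-feature factor arrays.
    """
    n_samples, n_features = X_binned.shape
    global_mean = y.mean()
    factors = [np.ones(X_binned[:, j].max() + 1) for j in range(n_features)]

    for _ in range(n_iterations):
        for j in range(n_features):  # one "cycle" visits every feature
            # prediction with all current factors applied
            pred = global_mean * np.ones(n_samples)
            for k in range(n_features):
                pred *= factors[k][X_binned[:, k]]
            # rescale each bin's factor by the ratio of observed to
            # predicted totals within that bin
            for b in range(len(factors[j])):
                mask = X_binned[:, j] == b
                if mask.any() and pred[mask].sum() > 0:
                    factors[j][b] *= y[mask].sum() / pred[mask].sum()
    return global_mean, factors

def cyclic_boosting_predict(X_binned, global_mean, factors):
    """Prediction = global mean x product of the matching bin factors."""
    pred = global_mean * np.ones(len(X_binned))
    for j, f in enumerate(factors):
        pred *= f[X_binned[:, j]]
    return pred
```

Because the model is a plain product of bin factors, an individual prediction can be explained by listing each factor that was multiplied in, which is the transparency property the abstract claims.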
Related papers
- Learning-Augmented Algorithms with Explicit Predictors [67.02156211760415]
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
Prior research in this context was focused on a paradigm where the predictor is pre-trained on past data and then used as a black box.
In this work, we unpack the predictor and integrate the learning problem it gives rise to within the algorithmic challenge.
arXiv Detail & Related papers (2024-03-12T08:40:21Z)
- A Survey From Distributed Machine Learning to Distributed Deep Learning [0.356008609689971]
Distributed machine learning has been proposed, which involves distributing the data and algorithm across several machines.
We divide these algorithms into classification and clustering (traditional machine learning), deep learning, and deep reinforcement learning groups.
Based on the investigation of the mentioned algorithms, we highlight the limitations that should be addressed in future research.
arXiv Detail & Related papers (2023-07-11T13:06:42Z)
- A Generalist Neural Algorithmic Learner [18.425083543441776]
We build a single graph neural network processor capable of learning to execute a wide range of algorithms.
We show that it is possible to effectively learn algorithms in a multi-task manner, so long as we can learn to execute them well in a single-task regime.
arXiv Detail & Related papers (2022-09-22T16:41:33Z)
- Learning with Differentiable Algorithms [6.47243430672461]
This thesis explores combining classic algorithms and machine learning systems like neural networks.
The thesis formalizes the idea of algorithmic supervision, which allows a neural network to learn from or in conjunction with an algorithm.
In addition, this thesis proposes differentiable algorithms, such as differentiable sorting networks, differentiable sorting gates, and differentiable logic gate networks.
arXiv Detail & Related papers (2022-09-01T17:30:00Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms which obtain good generalization performance on classical control tasks, gridworld-type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
- First-order Optimization for Superquantile-based Supervised Learning [0.0]
We propose a first-order optimization algorithm to minimize a superquantile-based learning objective.
The proposed algorithm is based on smoothing the superquantile function by infimal convolution.
arXiv Detail & Related papers (2020-09-30T11:43:45Z)
- Strong Generalization and Efficiency in Neural Programs [69.18742158883869]
We study the problem of learning efficient algorithms that strongly generalize in the framework of neural program induction.
By carefully designing the input / output interfaces of the neural model and through imitation, we are able to learn models that produce correct results for arbitrary input sizes.
arXiv Detail & Related papers (2020-07-07T17:03:02Z)
- AutoML-Zero: Evolving Machine Learning Algorithms From Scratch [76.83052807776276]
We show that it is possible to automatically discover complete machine learning algorithms just using basic mathematical operations as building blocks.
We demonstrate this by introducing a novel framework that significantly reduces human bias through a generic search space.
We believe these preliminary successes in discovering machine learning algorithms from scratch indicate a promising new direction in the field.
arXiv Detail & Related papers (2020-03-06T19:00:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.