Towards a Systematic Approach to Design New Ensemble Learning Algorithms
- URL: http://arxiv.org/abs/2402.06818v1
- Date: Fri, 9 Feb 2024 22:59:20 GMT
- Title: Towards a Systematic Approach to Design New Ensemble Learning Algorithms
- Authors: João Mendes-Moreira, Tiago Mendes-Neves
- Abstract summary: This study revisits the foundational work on ensemble error decomposition.
Recent advancements introduced a "unified theory of diversity," an innovative bias-variance-diversity decomposition framework.
Our research systematically explores the application of this decomposition to guide the creation of new ensemble learning algorithms.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ensemble learning has been a focal point of machine learning research due to
its potential to improve predictive performance. This study revisits the
foundational work on ensemble error decomposition, historically confined to
bias-variance-covariance analysis for regression problems since the 1990s.
Recent advancements introduced a "unified theory of diversity," which proposes
an innovative bias-variance-diversity decomposition framework. Leveraging this
contemporary understanding, our research systematically explores the
application of this decomposition to guide the creation of new ensemble
learning algorithms. Focusing on regression tasks, we employ neural networks as
base learners to investigate the practical implications of this theoretical
framework. We define seven simple ensemble methods for neural networks, which we
name strategies, and use them to generate 21 new ensemble algorithms. Among
these, most of the algorithms built with the snapshot strategy, one of the
seven strategies, show superior predictive performance across diverse datasets
according to the Friedman rank test with the Conover post-hoc test. Our
systematic design approach contributes a suite of
effective new algorithms and establishes a structured pathway for future
ensemble learning algorithm development.
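
Concretely, the bias-variance-diversity decomposition the paper builds on reduces, for squared loss and an M-member averaging ensemble f̄ = (1/M)Σᵢ fᵢ, to the form sketched below (the notation is ours, not necessarily the paper's; expectations are over the randomness of training):

```latex
% Squared-loss bias-variance-diversity decomposition for an averaging ensemble.
\mathbb{E}\!\left[(\bar{f}(x) - y)^2\right]
  = \underbrace{\frac{1}{M}\sum_{i=1}^{M}\bigl(\mathbb{E}[f_i(x)] - y\bigr)^2}_{\text{average bias}^2}
  + \underbrace{\frac{1}{M}\sum_{i=1}^{M}\operatorname{Var}\bigl(f_i(x)\bigr)}_{\text{average variance}}
  - \underbrace{\frac{1}{M}\sum_{i=1}^{M}\mathbb{E}\!\left[\bigl(f_i(x) - \bar{f}(x)\bigr)^2\right]}_{\text{diversity}}
```

The last term is the design lever: a strategy that spreads the members apart without inflating their individual bias or variance lowers the ensemble risk.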
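The paper itself ships no code, but the snapshot idea (train a single network with a cyclical learning rate and keep a weight snapshot at the end of each cycle, then average the snapshots' predictions) can be sketched roughly as below. The architecture, data, and hyperparameters are illustrative assumptions, not the authors' setup:

```python
import copy

import torch
import torch.nn as nn

torch.manual_seed(0)

# Illustrative synthetic regression data (not from the paper).
X = torch.randn(512, 8)
y = 2.0 * X[:, :1] + X[:, 1:2].sin() + 0.1 * torch.randn(512, 1)

model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

n_cycles, epochs_per_cycle = 5, 40
# Cosine annealing that restarts every `epochs_per_cycle` epochs: the
# learning rate decays towards zero within a cycle, then jumps back up.
sched = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(opt, T_0=epochs_per_cycle)

snapshots = []
for epoch in range(n_cycles * epochs_per_cycle):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    sched.step()
    # End of a cycle: the network sits near a local minimum, so keep
    # a copy of the weights as one ensemble member.
    if (epoch + 1) % epochs_per_cycle == 0:
        snapshots.append(copy.deepcopy(model.state_dict()))

def ensemble_predict(x):
    """Average the predictions of all saved snapshots."""
    with torch.no_grad():
        preds = []
        for state in snapshots:
            model.load_state_dict(state)
            preds.append(model(x))
    return torch.stack(preds).mean(dim=0)

print("ensemble train MSE:", loss_fn(ensemble_predict(X), y).item())
```

The appeal of the strategy is cost: one training run yields all ensemble members, since the cyclical schedule repeatedly converges to and escapes different minima.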
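Likewise, the kind of comparison reported (a Friedman rank test followed by a Conover post-hoc) can be run with scipy and scikit-posthocs; the error matrix below is fabricated purely to show the mechanics:

```python
import numpy as np
import pandas as pd
import scikit_posthocs as sp
from scipy import stats

rng = np.random.default_rng(0)

# Rows = datasets (blocks), columns = algorithms (groups).
# The error values are made up for illustration only.
errors = pd.DataFrame(
    rng.random((10, 3)) + np.array([0.0, 0.1, 0.3]),
    columns=["snapshot", "bagging", "single_net"],
)

# Friedman rank test: do the algorithms' errors differ across datasets?
stat, p = stats.friedmanchisquare(*(errors[c] for c in errors.columns))
print(f"Friedman chi2 = {stat:.2f}, p = {p:.4f}")

# Conover post-hoc on the Friedman ranks: pairwise p-values showing
# which algorithm pairs actually differ.
print(sp.posthoc_conover_friedman(errors))
```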
Related papers
- A Unified and General Framework for Continual Learning [58.72671755989431]
Continual Learning (CL) focuses on learning from dynamic and changing data distributions while retaining previously acquired knowledge.
Various methods have been developed to address the challenge of catastrophic forgetting, including regularization-based, Bayesian-based, and memory-replay-based techniques.
This research introduces a comprehensive and overarching framework that encompasses and reconciles these existing methodologies.
arXiv Detail & Related papers (2024-03-20T02:21:44Z)
- Quantized Hierarchical Federated Learning: A Robust Approach to Statistical Heterogeneity [3.8798345704175534]
We present a novel hierarchical federated learning algorithm that incorporates quantization for communication-efficiency.
We offer a comprehensive analytical framework to evaluate its optimality gap and convergence rate.
Our findings reveal that our algorithm consistently achieves high learning accuracy over a range of parameters.
arXiv Detail & Related papers (2024-03-03T15:40:24Z)
- Distributional Bellman Operators over Mean Embeddings [37.5480897544168]
We propose a novel framework for distributional reinforcement learning, based on learning finite-dimensional mean embeddings of return distributions.
We derive several new algorithms for dynamic programming and temporal-difference learning based on this framework.
arXiv Detail & Related papers (2023-12-09T11:36:14Z)
- On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting, making them incapable of handling distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradient estimators.
arXiv Detail & Related papers (2022-06-30T05:29:52Z)
- Federated Learning Aggregation: New Robust Algorithms with Guarantees [63.96013144017572]
Federated learning has been recently proposed for distributed model training at the edge.
This paper presents a complete general mathematical convergence analysis to evaluate aggregation strategies in a federated learning framework.
We derive novel aggregation algorithms which are able to modify their model architecture by differentiating client contributions according to the value of their losses.
arXiv Detail & Related papers (2022-05-22T16:37:53Z)
- Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis on the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z)
- Towards Model Agnostic Federated Learning Using Knowledge Distillation [9.947968358822951]
In this work, we initiate a theoretical study of model agnostic communication protocols.
We focus on the setting where the two agents are attempting to perform kernel regression using different kernels.
Our study yields a surprising result -- the most natural algorithm of using alternating knowledge distillation (AKD) imposes overly strong regularization.
arXiv Detail & Related papers (2021-10-28T15:27:51Z)
- Transfer Learning Based Multi-Objective Evolutionary Algorithm for Community Detection of Dynamic Complex Networks [1.693830041971135]
We propose a Feature Transfer Based Multi-Objective Optimization Algorithm (TMOGA) that combines transfer learning with a traditional multi-objective evolutionary algorithm framework.
We show that our algorithm can achieve better clustering effects compared with the state-of-the-art dynamic network community detection algorithms in diverse test problems.
arXiv Detail & Related papers (2021-09-30T17:16:51Z)
- Nonparametric Estimation of Heterogeneous Treatment Effects: From Theory to Learning Algorithms [91.3755431537592]
We analyze four broad meta-learning strategies which rely on plug-in estimation and pseudo-outcome regression.
We highlight how this theoretical reasoning can be used to guide principled algorithm design and translate our analyses into practice.
arXiv Detail & Related papers (2021-01-26T17:11:40Z)
- Reinforcement Learning as Iterative and Amortised Inference [62.997667081978825]
We use the control as inference framework to outline a novel classification scheme based on amortised and iterative inference.
We show that taking this perspective allows us to identify parts of the algorithmic design space which have been relatively unexplored.
arXiv Detail & Related papers (2020-06-13T16:10:03Z)