Dynamic Meta-Learning for Adaptive XGBoost-Neural Ensembles
- URL: http://arxiv.org/abs/2510.03301v1
- Date: Tue, 30 Sep 2025 07:45:49 GMT
- Title: Dynamic Meta-Learning for Adaptive XGBoost-Neural Ensembles
- Authors: Arthur Sedek
- Abstract summary: This paper introduces a novel adaptive ensemble framework that synergistically combines XGBoost and neural networks through sophisticated meta-learning. Experimental results demonstrate superior predictive performance and enhanced interpretability across diverse datasets.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel adaptive ensemble framework that synergistically combines XGBoost and neural networks through sophisticated meta-learning. The proposed method leverages advanced uncertainty quantification techniques and feature importance integration to dynamically orchestrate model selection and combination. Experimental results demonstrate superior predictive performance and enhanced interpretability across diverse datasets, contributing to the development of more intelligent and flexible machine learning systems.
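The abstract describes dynamically combining XGBoost and neural-network predictions using uncertainty quantification, but gives no implementation details. A minimal numpy sketch of one plausible combination rule (inverse-variance weighting of the two base models, an assumption not stated in the paper) might look like:

```python
import numpy as np

def meta_combine(pred_xgb, var_xgb, pred_nn, var_nn, eps=1e-8):
    """Combine two base-model predictions per instance, weighting each model
    by the inverse of its estimated predictive variance, so the model that is
    more certain on a given input dominates the ensemble output."""
    w_xgb = 1.0 / (var_xgb + eps)
    w_nn = 1.0 / (var_nn + eps)
    return (w_xgb * pred_xgb + w_nn * pred_nn) / (w_xgb + w_nn)
```

Here the variance estimates would come from whatever uncertainty-quantification technique the framework uses (e.g. quantile outputs for XGBoost, MC dropout for the network); the names and the weighting rule above are illustrative, not the authors' method.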
Related papers
- An Integrated Fusion Framework for Ensemble Learning Leveraging Gradient Boosting and Fuzzy Rule-Based Models [59.13182819190547]
Fuzzy rule-based models excel in interpretability and have seen widespread application across diverse fields. They face challenges such as complex design specifications and scalability issues with large datasets. This paper proposes an Integrated Fusion Framework that merges the strengths of both paradigms to enhance model performance and interpretability.
arXiv Detail & Related papers (2025-11-11T10:28:23Z) - High-Fidelity Scientific Simulation Surrogates via Adaptive Implicit Neural Representations [51.90920900332569]
Implicit neural representations (INRs) offer a compact and continuous framework for modeling spatially structured data. Recent approaches address this by introducing additional features along rigid geometric structures. We propose a simple yet effective alternative: Feature-Adaptive INR (FA-INR).
arXiv Detail & Related papers (2025-06-07T16:45:17Z) - Deep Unrolled Meta-Learning for Multi-Coil and Multi-Modality MRI with Adaptive Optimization [0.0]
We propose a unified deep meta-learning framework for accelerated magnetic resonance imaging (MRI) that jointly addresses multi-coil reconstruction and cross-modality synthesis. Our results show significant improvements in PSNR over conventional supervised learning.
arXiv Detail & Related papers (2025-05-08T04:47:12Z) - Multimodal Magic Elevating Depression Detection with a Fusion of Text and Audio Intelligence [4.92323103166693]
This study proposes an innovative multimodal fusion model based on a teacher-student architecture to enhance the accuracy of depression classification. Our designed model addresses the limitations of traditional methods in feature fusion and modality weight allocation by introducing multi-head attention mechanisms and weighted multimodal transfer learning. Ablation experiments demonstrate that the proposed model attains an F1 score of 99.1% on the test set, significantly outperforming unimodal and conventional approaches.
arXiv Detail & Related papers (2025-01-28T09:30:29Z) - Regularized Neural Ensemblers [55.15643209328513]
In this study, we explore employing regularized neural networks as ensemble methods. Motivated by the risk of learning low-diversity ensembles, we propose regularizing the ensembling model by randomly dropping base model predictions. We demonstrate this approach provides lower bounds for the diversity within the ensemble, reducing overfitting and improving generalization capabilities.
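The idea of regularizing an ensembler by randomly dropping base model predictions can be sketched in a few lines of numpy. This is an illustrative dropout-style mask over base models, not the paper's actual architecture; the function names and renormalization choice are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def dropped_ensemble(base_preds, weights, drop_prob=0.5, training=True):
    """base_preds: (n_samples, n_models) predictions from the base models.
    weights: (n_models,) non-negative ensembling weights.
    During training, each base model is dropped with probability drop_prob,
    forcing the ensembler not to rely on any single model."""
    if training:
        mask = rng.random(base_preds.shape[1]) >= drop_prob
        if not mask.any():
            mask[rng.integers(base_preds.shape[1])] = True  # keep at least one model
        w = weights * mask
    else:
        w = weights.copy()
    w = w / w.sum()  # renormalize over surviving models
    return base_preds @ w
```

At inference time (`training=False`) all models contribute, analogous to how dropout is disabled at test time.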
arXiv Detail & Related papers (2024-10-06T15:25:39Z) - Task adaption by biologically inspired stochastic comodulation [8.59194778459436]
We show that fine-tuning convolutional networks by stochastic gain modulation improves on deterministic gain modulation.
Our results suggest that comodulation representations can enhance learning efficiency and performance in multi-task learning.
arXiv Detail & Related papers (2023-11-25T15:21:03Z) - Multi-Objective Optimization of Performance and Interpretability of Tabular Supervised Machine Learning Models [0.9023847175654603]
Interpretability is quantified via three measures: feature sparsity, interaction sparsity of features, and sparsity of non-monotone feature effects.
We show that our framework is capable of finding diverse models that are highly competitive or outperform state-of-the-art XGBoost or Explainable Boosting Machine models.
arXiv Detail & Related papers (2023-07-17T00:07:52Z) - Adaptive Ensemble Learning: Boosting Model Performance through Intelligent Feature Fusion in Deep Neural Networks [0.0]
We present an Adaptive Ensemble Learning framework that aims to boost the performance of deep neural networks.
The framework integrates ensemble learning strategies with deep learning architectures to create a more robust and adaptable model.
By leveraging intelligent feature fusion methods, the framework generates more discriminative and effective feature representations.
arXiv Detail & Related papers (2023-04-04T21:49:49Z) - Meta-learning using privileged information for dynamics [66.32254395574994]
We extend the Neural ODE Process model to use additional information within the Learning Using Privileged Information setting.
We validate our extension with experiments showing improved accuracy and calibration on simulated dynamics tasks.
arXiv Detail & Related papers (2021-04-29T12:18:02Z) - Trajectory-wise Multiple Choice Learning for Dynamics Generalization in Reinforcement Learning [137.39196753245105]
We present a new model-based reinforcement learning algorithm that learns a multi-headed dynamics model for dynamics generalization.
We incorporate context learning, which encodes dynamics-specific information from past experiences into the context latent vector.
Our method exhibits superior zero-shot generalization performance across a variety of control tasks, compared to state-of-the-art RL methods.
arXiv Detail & Related papers (2020-10-26T03:20:42Z) - Learning to Learn Kernels with Variational Random Features [118.09565227041844]
We introduce kernels with random Fourier features in the meta-learning framework to leverage their strong few-shot learning ability.
We formulate the optimization of MetaVRF as a variational inference problem.
We show that MetaVRF delivers much better, or at least competitive, performance compared to existing meta-learning alternatives.
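The random Fourier features underlying MetaVRF approximate a shift-invariant kernel with an explicit finite-dimensional map (Rahimi and Recht). A standard numpy sketch of that map for the RBF kernel, independent of the variational machinery this paper adds, is:

```python
import numpy as np

def random_fourier_features(X, n_features=500, gamma=1.0, seed=0):
    """Map X (n_samples, d) to a feature space where the dot product of two
    mapped points approximates the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2).
    Frequencies W are drawn from the kernel's spectral density N(0, 2*gamma)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)
```

The approximation error shrinks as `n_features` grows; MetaVRF's contribution is to infer the frequencies `W` per task via variational inference rather than sampling them from a fixed prior, which this sketch does not implement.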
arXiv Detail & Related papers (2020-06-11T18:05:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.