Pulling the Carpet Below the Learner's Feet: Genetic Algorithm To Learn Ensemble Machine Learning Model During Concept Drift
- URL: http://arxiv.org/abs/2412.09035v1
- Date: Thu, 12 Dec 2024 07:51:44 GMT
- Title: Pulling the Carpet Below the Learner's Feet: Genetic Algorithm To Learn Ensemble Machine Learning Model During Concept Drift
- Authors: Teddy Lazebnik
- Abstract summary: When using machine learning (ML) models in realistic and dynamic environments, users often need to handle the challenge of concept drift (CD).
We propose a novel two-level ensemble ML model that combines a global ML model with a CD detector, operating as an aggregator for a population of ML pipeline models.
We show that the proposed model outperforms a single ML pipeline with a CD algorithm, particularly in scenarios with unknown CD characteristics.
- Score: 0.8158530638728501
- Abstract: Data-driven models in general, and machine learning (ML) models in particular, have gained popularity in recent years, with increased usage across scientific and engineering domains. When using ML models in realistic and dynamic environments, users often need to handle the challenge of concept drift (CD). In this study, we explore the application of genetic algorithms (GAs) to address the challenges posed by CD in such settings. We propose a novel two-level ensemble ML model that combines a global ML model with a CD detector, operating as an aggregator over a population of ML pipeline models, each with its own adjusted CD detector responsible for re-training its ML model. In addition, we show that the proposed model can be further improved by utilizing off-the-shelf automated ML methods. Through extensive analysis of synthetic datasets, we show that the proposed model outperforms a single ML pipeline with a CD algorithm, particularly in scenarios with unknown CD characteristics. Overall, this study highlights the potential of ensemble ML and CD models, obtained through a heuristic and adaptive optimization process such as a GA, to handle complex CD events.
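For intuition, below is a minimal, hedged sketch of the two-level design the abstract describes: a population of pipeline members, each re-trained by its own adjusted drift detector, aggregated at the upper level, with a toy mutation step standing in for the GA. Everything here is an illustrative assumption, not the paper's implementation: the class names (DriftDetector, PipelineMember, TwoLevelEnsemble, ga_step), the windowed error-rate detector, the majority-vote aggregation, and all thresholds are invented for this sketch.

```python
# Illustrative sketch of the two-level ensemble idea; NOT the authors' code.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class DriftDetector:
    """Toy drift detector: flags drift when the mean 0/1 error over a
    sliding window exceeds a per-member threshold (a crude stand-in
    for DDM/ADWIN-style detectors)."""
    def __init__(self, threshold=0.4, window=50):
        self.threshold, self.window, self.errors = threshold, window, []

    def add(self, error):
        self.errors.append(error)
        recent = self.errors[-self.window:]
        return len(recent) == self.window and float(np.mean(recent)) > self.threshold

class PipelineMember:
    """Lower level: one ML pipeline with its own adjusted CD detector,
    responsible for re-training its model when drift is flagged."""
    def __init__(self, threshold, window=50):
        self.model = DecisionTreeClassifier(max_depth=5)
        self.detector = DriftDetector(threshold, window)
        self.X, self.y, self.fitted = [], [], False

    def predict(self, x):
        if not self.fitted:                 # fallback before the first fit
            return self.y[-1] if self.y else 0
        return self.model.predict([x])[0]

    def observe(self, x, y):
        err = float(self.predict(x) != y)
        self.X.append(x); self.y.append(y)
        drift = self.detector.add(err)
        if drift:
            # Drift flagged: drop stale history, keep the recent window only.
            keep = self.detector.window
            self.X, self.y = self.X[-keep:], self.y[-keep:]
            self.detector.errors.clear()
        if (drift or not self.fitted) and len(set(self.y)) > 1:
            self.model.fit(self.X, self.y)  # member re-trains itself
            self.fitted = True

class TwoLevelEnsemble:
    """Upper level: aggregates the member population, here by majority
    vote as a simple stand-in for the paper's global ML model."""
    def __init__(self, thresholds):
        self.members = [PipelineMember(t) for t in thresholds]

    def predict(self, x):
        votes = [m.predict(x) for m in self.members]
        return max(set(votes), key=votes.count)

    def observe(self, x, y):
        for m in self.members:
            m.observe(x, y)

def ga_step(ensemble, rng, scale=0.05):
    """Mutation-only stand-in for the GA that adapts the population's
    per-member detector configurations (no selection or crossover)."""
    for m in ensemble.members:
        m.detector.threshold = float(np.clip(
            m.detector.threshold + rng.normal(0.0, scale), 0.05, 0.95))

# Usage on a toy stream with an abrupt concept drift at t = 250.
rng = np.random.default_rng(0)
ens = TwoLevelEnsemble(thresholds=[0.3, 0.4, 0.5])
for t in range(500):
    x = rng.normal(size=3).tolist()
    y = int(x[0] > 0) if t < 250 else int(x[0] < 0)
    ens.observe(x, y)
    if t % 100 == 99:
        ga_step(ens, rng)
```

In the paper itself, the upper level is a full ML model paired with its own CD detector, and the member population is evolved by a complete GA; the majority vote and the mutation-only ga_step above are deliberate simplifications of that design.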
Related papers
- Steering Masked Discrete Diffusion Models via Discrete Denoising Posterior Prediction [88.65168366064061]
We introduce Discrete Denoising Posterior Prediction (DDPP), a novel framework that casts the task of steering pre-trained MDMs as a problem of probabilistic inference.
Our framework leads to a family of three novel objectives that are all simulation-free, and thus scalable.
We substantiate our designs via wet-lab validation, where we observe transient expression of reward-optimized protein sequences.
arXiv Detail & Related papers (2024-10-10T17:18:30Z)
- Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild [84.57103623507082]
This paper introduces Model-GLUE, a holistic Large Language Models scaling guideline.
We benchmark existing scaling techniques, especially selective merging, and variants of mixture.
We then formulate an optimal strategy for the selection and aggregation of a heterogeneous model zoo.
Our methodology involves the clustering of mergeable models and optimal merging strategy selection, and the integration of clusters.
arXiv Detail & Related papers (2024-10-07T15:55:55Z)
- Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines [74.42485647685272]
We focus on Generative Masked Language Models (GMLMs).
We train a model to fit conditional probabilities of the data distribution via masking, which are subsequently used as inputs to a Markov Chain to draw samples from the model.
We adapt the T5 model for iteratively-refined parallel decoding, achieving 2-3x speedup in machine translation with minimal sacrifice in quality.
arXiv Detail & Related papers (2024-07-22T18:00:00Z)
- EMR-Merging: Tuning-Free High-Performance Model Merging [55.03509900949149]
We show that Elect, Mask & Rescale-Merging (EMR-Merging) achieves outstanding performance compared to existing merging methods.
EMR-Merging is tuning-free, thus requiring no data availability or any additional training while showing impressive performance.
arXiv Detail & Related papers (2024-05-23T05:25:45Z)
- Bayesian Active Learning for Discrete Latent Variable Models [19.852463786440122]
Active learning seeks to reduce the amount of data required to fit the parameters of a model.
Latent variable models play a vital role in neuroscience, psychology, and a variety of other engineering and scientific disciplines.
arXiv Detail & Related papers (2022-02-27T19:07:12Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- A multi-stage machine learning model on diagnosis of esophageal manometry [50.591267188664666]
The framework includes deep-learning models at the swallow-level stage and feature-based machine learning models at the study-level stage.
This is the first artificial-intelligence-style model to automatically predict the CC diagnosis of an HRM study from raw multi-swallow data.
arXiv Detail & Related papers (2021-06-25T20:09:23Z)
- Using machine learning to correct model error in data assimilation and forecast applications [0.0]
We propose to use this method to correct the error of an existing, knowledge-based model.
The resulting surrogate model is a hybrid of the original (knowledge-based) model and the ML model.
Using the hybrid surrogate models for data assimilation (DA) yields a significantly better analysis than using the original model; a minimal sketch of this hybrid-correction idea appears after this list.
arXiv Detail & Related papers (2020-10-23T18:30:45Z)
- Combining data assimilation and machine learning to infer unresolved scale parametrisation [0.0]
In recent years, machine learning has been proposed to devise data-driven parametrisations of unresolved processes in dynamical numerical models.
Our goal is to go beyond the use of high-resolution simulations and train ML-based parametrisation using direct data.
We show that in both cases the hybrid model yields forecasts with better skill than the truncated model.
arXiv Detail & Related papers (2020-09-09T14:12:11Z)
- Macroscopic Traffic Flow Modeling with Physics Regularized Gaussian Process: Generalized Formulations [5.827236278192557]
This study presents a new modeling framework, named physics regularized Gaussian process (PRGP).
This approach encodes physics models, i.e., classical traffic flow models, into the Gaussian process architecture so as to regularize the machine learning training process.
To prove the effectiveness of the proposed model, this paper conducts empirical studies on a real-world dataset collected from a stretch of the I-15 freeway in Utah.
arXiv Detail & Related papers (2020-07-14T17:27:23Z)
- Macroscopic Traffic Flow Modeling with Physics Regularized Gaussian Process: A New Insight into Machine Learning Applications [14.164058812512371]
This study presents a new modeling framework, named physics regularized machine learning (PRML), to encode classical traffic flow models into the machine learning architecture.
To prove the effectiveness of the proposed model, this paper conducts empirical studies on a real-world dataset collected from a stretch of the I-15 freeway in Utah.
arXiv Detail & Related papers (2020-02-06T17:22:20Z)
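As referenced in the data-assimilation entry above, the following is a minimal sketch of the hybrid-correction idea: a knowledge-based forecast plus an ML-learned correction of its residual error. The dynamics, the dataset, and all names (physical_model, corrector, hybrid_model) are invented for illustration and are not taken from any of the listed papers.

```python
# Minimal sketch: hybrid surrogate = knowledge-based model + ML correction.
# The "true" dynamics, the imperfect physical model, and all names are
# hypothetical illustrations, not taken from any of the listed papers.
import numpy as np
from sklearn.linear_model import Ridge

def physical_model(state):
    """Stand-in knowledge-based model: slightly wrong linear dynamics."""
    return 0.9 * state

rng = np.random.default_rng(1)
states = rng.normal(size=(500, 4))
truth_next = 0.95 * states + 0.1 * np.tanh(states)  # unknown true dynamics

# Train the ML component on the physical model's *residual* error.
corrector = Ridge(alpha=1.0).fit(states, truth_next - physical_model(states))

def hybrid_model(state):
    """Hybrid surrogate: physical prediction plus learned correction."""
    return physical_model(state) + corrector.predict(state)

phys_err = np.abs(physical_model(states) - truth_next).mean()
hyb_err = np.abs(hybrid_model(states) - truth_next).mean()
print(f"physical: {phys_err:.4f}  hybrid: {hyb_err:.4f}")  # hybrid is smaller
```

Learning only the residual, rather than the full dynamics from scratch, is the key design choice: the ML component has a much easier target, which is a common motivation for such hybrids.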