When to retrain a machine learning model
- URL: http://arxiv.org/abs/2505.14903v1
- Date: Tue, 20 May 2025 20:55:56 GMT
- Title: When to retrain a machine learning model
- Authors: Florence Regol, Leo Schwinn, Kyle Sprague, Mark Coates, Thomas Markovich
- Abstract summary: A significant challenge in maintaining real-world machine learning models is responding to the continuous and unpredictable evolution of data. We propose an uncertainty-based method that makes decisions by continually forecasting the evolution of model performance evaluated with a bounded metric.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A significant challenge in maintaining real-world machine learning models is responding to the continuous and unpredictable evolution of data. Most practitioners are faced with the difficult question: when should I retrain or update my machine learning model? This seemingly straightforward problem is particularly challenging for three reasons: 1) decisions must be made based on very limited information - we usually have access to only a few examples; 2) the nature, extent, and impact of the distribution shift are unknown; and 3) it involves specifying a cost ratio between retraining and poor performance, which can be hard to characterize. Existing works address certain aspects of this problem, but none offer a comprehensive solution. Distribution shift detection falls short as it cannot account for the cost trade-off; the scarcity of the data, paired with its unusual structure, makes it a poor fit for existing offline reinforcement learning methods; and the online learning formulation overlooks key practical considerations. To address this, we present a principled formulation of the retraining problem and propose an uncertainty-based method that makes decisions by continually forecasting the evolution of model performance evaluated with a bounded metric. Our experiments addressing classification tasks show that the method consistently outperforms existing baselines on 7 datasets.
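The abstract names the key ingredients (a forecast of a bounded performance metric, plus a retraining-versus-degradation cost trade-off) but not the algorithm itself, so the following is a minimal sketch under stated assumptions: a toy linear-trend forecaster with Gaussian uncertainty stands in for the paper's model, and `retrain_cost` / `perf_cost_per_step` are hypothetical names for the cost ratio. It illustrates the decision logic only, not the authors' method.

```python
import numpy as np

def forecast_performance(history, horizon=5, n_samples=1000, seed=0):
    """Toy probabilistic forecast of a bounded metric in [0, 1]:
    fit a linear trend to the observed history, then sample future
    values with Gaussian noise and clip to keep them bounded."""
    history = np.asarray(history, dtype=float)
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, deg=1)
    resid_std = (history - (slope * t + intercept)).std() + 1e-6
    future_t = np.arange(len(history), len(history) + horizon)
    mean = slope * future_t + intercept
    rng = np.random.default_rng(seed)
    samples = rng.normal(mean, resid_std, size=(n_samples, horizon))
    return np.clip(samples, 0.0, 1.0)  # shape (n_samples, horizon)

def should_retrain(history, retrain_cost, perf_cost_per_step, horizon=5):
    """Retrain when the expected cost of the forecast performance
    shortfall (relative to the best level seen so far) exceeds the
    cost of retraining. `perf_cost_per_step` is the hard-to-specify
    cost ratio the abstract mentions, expressed per unit of metric
    loss per time step, in the same currency as `retrain_cost`."""
    samples = forecast_performance(history, horizon=horizon)
    shortfall = np.maximum(np.max(history) - samples, 0.0).sum(axis=1)
    expected_cost = perf_cost_per_step * shortfall.mean()
    return expected_cost > retrain_cost

# Example: held-out accuracy drifting downward after deployment.
accuracy = [0.92, 0.91, 0.90, 0.87, 0.85, 0.82]
print(should_retrain(accuracy, retrain_cost=1.0, perf_cost_per_step=10.0))  # True
```

The bounded metric matters here: clipping the forecast samples to [0, 1] keeps the expected-shortfall computation well behaved even when the fitted trend extrapolates outside the valid range.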
Related papers
- These Are Not All the Features You Are Looking For: A Fundamental Bottleneck in Supervised Pretraining [10.749875317643031]
Transfer learning is a cornerstone of modern machine learning, promising a way to adapt models pretrained on a broad mix of data to new tasks with minimal new data. We evaluate model transfer from a pretraining mixture to each of its component tasks, assessing whether pretrained features can match the performance of task-specific direct training. We identify a fundamental limitation in deep learning models, where networks fail to learn new features once they encode similar competing features during training.
arXiv Detail & Related papers (2025-06-23T01:04:29Z)
- When to Forget? Complexity Trade-offs in Machine Unlearning [23.507879460531264]
Machine Unlearning (MU) aims at removing the influence of specific data points from a trained model. We analyze the efficiency of unlearning methods and establish the first upper and lower bounds on minimax times for this problem. We provide a phase diagram for the unlearning complexity ratio -- a novel metric that compares the computational cost of the best unlearning method to full model retraining.
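Read literally from this summary (the paper's formal definition may differ, so treat this as an assumption), the complexity ratio compares the compute used by the best unlearning method against retraining from scratch:

```latex
% Assumed rendering of the "unlearning complexity ratio" described above.
\[
  \rho \;=\; \frac{\min_{\mathcal{A} \,\in\, \text{unlearning methods}} \operatorname{Cost}(\mathcal{A})}
                  {\operatorname{Cost}(\text{full retraining})}
\]
```

A phase diagram over problem regimes then distinguishes where $\rho \ll 1$ (unlearning is much cheaper) from where $\rho \approx 1$ (retraining is essentially unavoidable).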
arXiv Detail & Related papers (2025-02-24T16:56:27Z)
- Forgetting, Ignorance or Myopia: Revisiting Key Challenges in Online Continual Learning [29.65600202138321]
In high-speed data stream environments, data do not pause to accommodate slow models.
Model's ignorance: the single-pass nature of OCL challenges models to learn effective features within constrained training time.
Model's myopia: the local learning nature of OCL leads the model to adopt overly simplified, task-specific features.
arXiv Detail & Related papers (2024-09-28T05:24:56Z)
- Test-Time Adaptation for Combating Missing Modalities in Egocentric Videos [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues. We propose a novel approach to address this issue at test time without requiring retraining. MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
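BAdam's actual update rule is not given in this summary, so below is a generic sketch of the "prior-based" idea it builds on (an assumed EWC-style quadratic penalty; `anchor`, `importance`, and `prior_strength` are illustrative names, not the paper's API):

```python
import torch

def prior_regularized_loss(task_loss, model, anchor, importance,
                           prior_strength=1.0):
    """Generic prior-based continual-learning objective (illustrative
    only, not BAdam itself): penalize parameters for drifting from the
    values they held after earlier tasks (`anchor`), weighted by how
    important each parameter was for those tasks (`importance`)."""
    penalty = 0.0
    for name, param in model.named_parameters():
        penalty = penalty + (importance[name] * (param - anchor[name]) ** 2).sum()
    return task_loss + 0.5 * prior_strength * penalty

# After finishing a task, snapshot the anchor and (crudely) estimate
# importances, e.g. from squared gradients of the task loss:
#   anchor = {n: p.detach().clone() for n, p in model.named_parameters()}
#   importance = {n: p.grad.detach() ** 2 for n, p in model.named_parameters()}
```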
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Class-wise Federated Unlearning: Harnessing Active Forgetting with Teacher-Student Memory Generation [11.638683787598817]
We propose a neuro-inspired federated unlearning framework based on active forgetting. Our framework distinguishes itself from existing methods by utilizing new memories to overwrite old ones. Our method achieves satisfactory unlearning completeness against backdoor attacks.
arXiv Detail & Related papers (2023-07-07T03:07:26Z)
- Resilient Constrained Learning [94.27081585149836]
This paper presents a constrained learning approach that adapts the requirements while simultaneously solving the learning task.
We call this approach resilient constrained learning after the term used to describe ecological systems that adapt to disruptions by modifying their operation.
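One standard way to write "adapting the requirements while learning" (an assumption based on this summary, not necessarily the paper's exact program) is to attach a relaxation variable to each constraint and charge for using it:

```latex
% Assumed resilient/relaxed constrained learning program: u_i relaxes
% constraint i, and h penalizes the total relaxation.
\[
  \min_{\theta,\; u \ge 0} \;\; \ell(\theta) + h(u)
  \quad \text{s.t.} \quad g_i(\theta) \;\le\; \epsilon_i + u_i,
  \qquad i = 1, \dots, m
\]
```

A steep $h$ recovers the rigid constrained problem; a gentler $h$ lets the learner trade constraint satisfaction for task loss when the original requirements prove too demanding.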
arXiv Detail & Related papers (2023-06-04T18:14:18Z)
- Repairing Neural Networks by Leaving the Right Past Behind [23.78437548836594]
Prediction failures of machine learning models often arise from deficiencies in training data.
This work develops a generic framework for both identifying training examples that have given rise to the target failure, and fixing the model through erasing information about them.
arXiv Detail & Related papers (2022-07-11T12:07:39Z)
- Stateful Offline Contextual Policy Evaluation and Learning [88.9134799076718]
We study off-policy evaluation and learning from sequential data.
We formalize the relevant causal structure of problems such as dynamic personalized pricing.
We show improved out-of-sample policy performance in this class of relevant problems.
arXiv Detail & Related papers (2021-10-19T16:15:56Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
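The closed-form update is not reproduced in the summary; the classical influence-function approximation it presumably builds on (stated here as an assumption) estimates how the optimum shifts when a training point $z$ is removed:

```latex
% Classical influence-function estimate (Koh & Liang style) of the
% parameter shift from removing point z out of n training points.
\[
  \theta_{-z} \;\approx\; \theta^{*} \;+\; \frac{1}{n}\, H_{\theta^{*}}^{-1}\, \nabla_{\theta}\, \ell(z, \theta^{*}),
  \qquad
  H_{\theta^{*}} = \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2}\, \ell(z_i, \theta^{*})
\]
```

For unlearning features or labels rather than whole points, the gradient term would be replaced by the difference between the gradients of the original and the corrected examples.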
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.