Beyond Fixed Rounds: Data-Free Early Stopping for Practical Federated Learning
- URL: http://arxiv.org/abs/2601.22669v1
- Date: Fri, 30 Jan 2026 07:42:13 GMT
- Title: Beyond Fixed Rounds: Data-Free Early Stopping for Practical Federated Learning
- Authors: Youngjoon Lee, Hyukjoon Lee, Seungrok Jung, Andy Luo, Jinu Gong, Yang Cao, Joonhyuk Kang
- Abstract summary: Federated Learning (FL) facilitates decentralized collaborative learning without transmitting raw data. We propose a data-free early stopping framework that determines the optimal stopping point by monitoring the task vector's growth rate using only server-side parameters.
- Score: 4.643684319119214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) facilitates decentralized collaborative learning without transmitting raw data. However, reliance on fixed global rounds or on validation data for hyperparameter tuning hinders practical deployment by incurring high computational costs and privacy risks. To address this, we propose a data-free early stopping framework that determines the optimal stopping point by monitoring the task vector's growth rate using only server-side parameters. Numerical results on skin lesion and blood cell classification demonstrate that our approach is comparable to validation-based early stopping across various state-of-the-art FL methods. In particular, the proposed framework requires an average of 47/20 rounds (skin lesion/blood cell) to achieve over 12.5%/10.3% higher performance than early stopping based on validation data. To the best of our knowledge, this is the first work to propose an early stopping framework for FL that does not use any validation data.
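The abstract describes the stopping rule only at a high level. The Python sketch below is one plausible instantiation: the task vector is taken as the difference between the current and initial global weights, and training halts once the relative growth of its norm stays below a threshold for several consecutive rounds. The `patience` and `tol` values, and the relative-growth criterion itself, are assumptions, not the paper's exact rule.

```python
import numpy as np

def task_vector_norm(theta_t: np.ndarray, theta_0: np.ndarray) -> float:
    """L2 norm of the task vector: current global weights minus the
    initial weights, both flattened into 1-D arrays."""
    return float(np.linalg.norm(theta_t - theta_0))

def should_stop(norm_history, patience: int = 5, tol: float = 1e-3) -> bool:
    """Hypothetical rule: stop once the relative per-round growth of the
    task-vector norm stays below `tol` for `patience` rounds in a row."""
    if len(norm_history) <= patience:
        return False
    recent = norm_history[-(patience + 1):]
    growth = [(b - a) / max(a, 1e-12) for a, b in zip(recent, recent[1:])]
    return all(g < tol for g in growth)

# Server-side monitoring loop (no client or validation data involved):
# norms = []
# for t in range(max_rounds):
#     theta_t = federated_round(theta_t)   # FedAvg or any other aggregator
#     norms.append(task_vector_norm(theta_t, theta_0))
#     if should_stop(norms):
#         break
```

Note that everything the rule consumes already lives on the server, which is what makes the criterion data-free.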
Related papers
- Towards Anytime-Valid Statistical Watermarking [63.02116925616554]
We develop the first e-value-based watermarking framework, Anchored E-Watermarking, that unifies optimal sampling with anytime-valid inference. Our framework can significantly enhance sample efficiency, reducing the average token budget required for detection by 13-15% relative to state-of-the-art baselines.
arXiv Detail & Related papers (2026-02-19T18:32:26Z)
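As background for the anytime-valid claim: e-values compose multiplicatively into an e-process, and by Ville's inequality, stopping the first time the running product exceeds 1/α bounds the type-I error at α no matter when you stop. A minimal sketch of that generic detection loop follows; the paper's specific e-value construction and anchored sampling are not reproduced here.

```python
def anytime_valid_detect(e_values, alpha: float = 0.01):
    """Generic e-process test: accumulate the product of per-token
    e-values and flag detection once it reaches 1/alpha. Valid at any
    stopping time by Ville's inequality; the per-token e-values must
    come from a watermark-specific construction (not shown)."""
    wealth = 1.0
    for t, e in enumerate(e_values, start=1):
        wealth *= e                  # e-values compose multiplicatively
        if wealth >= 1.0 / alpha:
            return True, t           # watermark detected after t tokens
    return False, len(e_values)

# Example: a stream of mildly informative e-values triggers detection early.
# detected, tokens_used = anytime_valid_detect([1.2] * 200, alpha=0.01)
```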
- Fractional-Order Federated Learning [4.1751058176413105]
Federated learning (FL) allows remote clients to train a global model collaboratively while protecting client privacy. Despite its privacy-preserving benefits, FL has significant drawbacks, including slow convergence, high communication cost, and non-independent-and-identically-distributed (non-IID) data.
arXiv Detail & Related papers (2026-02-17T06:25:23Z)
- When to Stop Federated Learning: Zero-Shot Generation of Synthetic Validation Data with Generative AI for Early Stopping [5.0740578889286105]
Federated Learning (FL) enables collaborative model training across decentralized devices. We introduce a zero-shot synthetic validation framework that leverages generative AI to monitor model performance. Our approach adaptively stops training near the optimal round, thereby conserving computational resources.
arXiv Detail & Related papers (2025-11-14T12:07:32Z)
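A rough PyTorch sketch of the server loop this summary implies: a synthetic validation set (here a placeholder `synth_loader`, which the paper would fill with zero-shot generations, e.g., from class-name prompts) drives ordinary patience-based early stopping. `federated_round` and the `patience` value are placeholders and assumptions, not the paper's exact setup.

```python
import torch

def evaluate(model, loader, device="cpu"):
    """Accuracy of the aggregated global model on a validation loader."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x.to(device)).argmax(dim=1)
            correct += (pred == y.to(device)).sum().item()
            total += y.numel()
    return correct / total

def federated_round(model):
    """Placeholder for one FL round (client training + FedAvg); the real
    implementation comes from whichever FL method is being run."""
    return model

def run_with_synthetic_early_stopping(model, synth_loader,
                                      max_rounds=200, patience=10):
    """Patience-based early stopping, scored on synthetic data instead
    of a held-out real validation set."""
    best_acc, since_best = 0.0, 0
    for _ in range(max_rounds):
        model = federated_round(model)
        acc = evaluate(model, synth_loader)
        if acc > best_acc:
            best_acc, since_best = acc, 0
        else:
            since_best += 1
        if since_best >= patience:   # stop near the optimal round
            break
    return model
```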
- Tackling Federated Unlearning as a Parameter Estimation Problem [2.9085589574462816]
This work introduces an efficient Federated Unlearning framework based on information theory. Our method uses second-order Hessian information to identify and selectively reset only the parameters most sensitive to the data being forgotten.
arXiv Detail & Related papers (2025-08-26T14:24:45Z)
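The summary names the mechanism (second-order sensitivity, then selective reset) but not the details. Below is a hedged PyTorch sketch using the diagonal Fisher as the usual Hessian proxy; the `fraction` threshold and the reset-to-initial-weights choice are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def fisher_diagonal(model, forget_loader, loss_fn):
    """Diagonal Fisher approximation of the Hessian: accumulate squared
    gradients of the loss over the data to be forgotten."""
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in forget_loader:
        model.zero_grad()
        loss_fn(model(x), y).backward()
        for f, p in zip(fisher, model.parameters()):
            f += p.grad.detach() ** 2
    return fisher

def selective_reset(model, fisher, init_params, fraction=0.05):
    """Hypothetical unlearning step: re-initialize only the `fraction`
    of weights with the largest Fisher score (i.e., most sensitive to
    the forgotten data), leaving the rest of the model intact."""
    scores = torch.cat([f.flatten() for f in fisher])
    thresh = torch.quantile(scores, 1.0 - fraction)
    with torch.no_grad():
        for p, f, p0 in zip(model.parameters(), fisher, init_params):
            mask = f >= thresh
            p[mask] = p0[mask]
```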
- The First Few Tokens Are All You Need: An Efficient and Effective Unsupervised Prefix Fine-Tuning Method for Reasoning Models [69.798277882245]
We introduce Unsupervised Prefix Fine-Tuning (UPFT) to enhance large language models' reasoning efficiency. UPFT removes the need for labeled data or exhaustive sampling. Experiments show that UPFT matches the performance of supervised methods.
arXiv Detail & Related papers (2025-03-04T18:56:03Z)
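A sketch of the core trick as the title suggests it: run a causal LM over a prompt plus a self-sampled response, but restrict the next-token loss to the first k response tokens, so neither labels nor full-trace supervision are needed. The function below and its `k` default are assumptions about the mechanics, not UPFT's exact recipe.

```python
import torch
import torch.nn.functional as F

def prefix_loss(logits, input_ids, prompt_len, k=8):
    """Next-token loss restricted to the first k generated tokens.
    `logits` come from a causal LM run over prompt + self-sampled
    response; only the response prefix contributes to the loss."""
    # Shift for next-token prediction: position j predicts token j + 1.
    shift_logits = logits[:, :-1, :]
    targets = input_ids[:, 1:]
    loss = F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        targets.reshape(-1),
        reduction="none",
    ).view(targets.shape)
    # Keep only positions that predict response tokens 1..k.
    pos = torch.arange(targets.size(1), device=targets.device)
    mask = (pos >= prompt_len - 1) & (pos < prompt_len - 1 + k)
    return (loss * mask).sum() / mask.sum().clamp(min=1)
```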
- FedCert: Federated Accuracy Certification [8.34167718121698]
Federated Learning (FL) has emerged as a powerful paradigm for training machine learning models in a decentralized manner.
Previous studies have assessed the effectiveness of models in centralized training based on certified accuracy.
This study proposes a method named FedCert as a first step toward evaluating the robustness of FL systems.
arXiv Detail & Related papers (2024-10-04T01:19:09Z)
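The summary leans on "certified accuracy" without defining it; in the centralized literature it is usually measured via randomized smoothing (Cohen et al., 2019). The sketch below shows that standard evaluation loop as context only; the constants and the simplified (uncorrected) certification check are assumptions, and FedCert's actual procedure may differ.

```python
import torch
from torch.distributions import Normal

def certified_accuracy(model, loader, sigma=0.25, n=100, target_radius=0.5):
    """Randomized-smoothing-style certified accuracy: a sample counts as
    certified-correct when the majority vote over Gaussian-noised copies
    is right and the implied certified radius covers `target_radius`.
    Simplified sketch: no confidence-interval correction on p_hat."""
    std_normal = Normal(0.0, 1.0)
    certified, total = 0, 0
    model.eval()
    with torch.no_grad():
        for x, y in loader:
            for xi, yi in zip(x, y):
                noisy = xi.unsqueeze(0) + sigma * torch.randn(n, *xi.shape)
                votes = model(noisy).argmax(dim=1)
                p_hat = min((votes == yi).float().mean().item(), 0.999)
                radius = sigma * std_normal.icdf(torch.tensor(p_hat)).item()
                if p_hat > 0.5 and radius >= target_radius:
                    certified += 1
                total += 1
    return certified / total
```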
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
Motivated by increasing privacy concerns, we propose a Parameter-efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
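The "Pe" in PeFAD points at parameter-efficient local training; the generic pattern is to freeze the backbone, train only a small named subset of weights (e.g., adapters), and exchange just those with the server. A hedged sketch of that pattern follows; the reconstruction loss, key-matching scheme, and all names are assumptions, not PeFAD's actual design.

```python
import torch

def client_update(model, loader, trainable_keys, lr=1e-3, epochs=1):
    """Parameter-efficient local training: freeze the backbone and
    optimize only parameters whose names match `trainable_keys`; only
    those weights are later uploaded to the server."""
    for name, p in model.named_parameters():
        p.requires_grad = any(k in name for k in trainable_keys)
    opt = torch.optim.Adam(
        [p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = torch.nn.MSELoss()  # e.g., reconstruction loss for anomaly detection
    for _ in range(epochs):
        for x in loader:
            opt.zero_grad()
            loss_fn(model(x), x).backward()  # reconstruct the time-series window
            opt.step()
    return {n: p.detach().clone()
            for n, p in model.named_parameters() if p.requires_grad}

def aggregate(updates):
    """Server averages only the small shared subset (FedAvg on adapters)."""
    return {k: torch.stack([u[k] for u in updates]).mean(dim=0)
            for k in updates[0]}
```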
- On the Importance and Applicability of Pre-Training for Federated Learning [28.238484580662785]
We conduct a systematic study to explore pre-training for federated learning.
We find that pre-training can not only improve FL but also close its accuracy gap to the centralized learning counterpart.
We conclude our paper with an attempt to understand the effect of pre-training on FL.
arXiv Detail & Related papers (2022-06-23T06:02:33Z)
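The recipe being studied reduces to a one-line change at the server: initialize the global model from pre-trained weights before the first round. A minimal illustration using standard torchvision; the FL loop is a placeholder.

```python
import torchvision

# Start federated training from pre-trained weights instead of a random
# initialization (the intervention the paper studies).
global_model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
# global_model = torchvision.models.resnet18(weights=None)  # random-init baseline

# for t in range(num_rounds):
#     global_model = federated_round(global_model)  # FedAvg etc.
```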
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
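FedReg's exact regularizer isn't spelled out in this summary; the sketch below shows the generic anti-forgetting pattern it belongs to, where a distillation term pulls local predictions toward the current global model's. The `lam` weight and the KL penalty are illustrative assumptions, not FedReg's actual formulation.

```python
import torch
import torch.nn.functional as F

def local_step(model, global_model, x, y, opt, lam=0.1):
    """Generic anti-forgetting local update: the task loss is augmented
    with a distillation term that keeps the local model's predictions
    close to the current global model's, discouraging the client from
    overwriting global knowledge during local training."""
    opt.zero_grad()
    logits = model(x)
    with torch.no_grad():
        global_logits = global_model(x)
    task = F.cross_entropy(logits, y)
    keep = F.kl_div(F.log_softmax(logits, dim=1),
                    F.softmax(global_logits, dim=1),
                    reduction="batchmean")
    (task + lam * keep).backward()
    opt.step()
```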
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in realistic FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
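For context on what a gradient inversion attack does mechanically, here is a sketch in the style of Deep Leakage from Gradients (Zhu et al., 2019): optimize dummy inputs and soft labels so their gradients match the gradients a client reported. This is the generic baseline idea, not the specific attack or evaluation of this paper.

```python
import torch

def invert_gradients(model, target_grads, x_shape, n_classes,
                     steps=500, lr=0.1):
    """Gradient-matching inversion: adjust dummy data (x, soft labels)
    until the gradient it induces matches the observed client gradient,
    thereby reconstructing an approximation of the training sample."""
    x = torch.randn(1, *x_shape, requires_grad=True)
    y_logits = torch.randn(1, n_classes, requires_grad=True)
    opt = torch.optim.Adam([x, y_logits], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.nn.functional.cross_entropy(
            model(x), torch.softmax(y_logits, dim=1))
        grads = torch.autograd.grad(loss, model.parameters(),
                                    create_graph=True)
        match = sum(((g - t) ** 2).sum()
                    for g, t in zip(grads, target_grads))
        match.backward()
        opt.step()
    return x.detach(), y_logits.detach()
```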
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)