Private Federated Multiclass Post-hoc Calibration
- URL: http://arxiv.org/abs/2510.01987v1
- Date: Thu, 02 Oct 2025 13:05:31 GMT
- Title: Private Federated Multiclass Post-hoc Calibration
- Authors: Samuel Maddock, Graham Cormode, Carsten Maple
- Abstract summary: In Federated Learning (FL), the goal is to train a global model on data which is distributed across multiple clients and cannot be centralized due to privacy concerns. This work introduces the integration of post-hoc model calibration techniques within FL. We study (1) a federated setting and (2) a user-level Differential Privacy (DP) setting and demonstrate how both federation and DP impact calibration accuracy.
- Score: 22.40729940679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Calibrating machine learning models so that predicted probabilities better reflect the true outcome frequencies is crucial for reliable decision-making across many applications. In Federated Learning (FL), the goal is to train a global model on data which is distributed across multiple clients and cannot be centralized due to privacy concerns. FL is applied in key areas such as healthcare and finance where calibration is strongly required, yet federated private calibration has been largely overlooked. This work introduces the integration of post-hoc model calibration techniques within FL. Specifically, we transfer traditional centralized calibration methods such as histogram binning and temperature scaling into federated environments and define new methods to operate them under strong client heterogeneity. We study (1) a federated setting and (2) a user-level Differential Privacy (DP) setting and demonstrate how both federation and DP impact calibration accuracy. We propose strategies to mitigate the degradation commonly observed under heterogeneity, and our findings highlight that our federated temperature scaling works best for DP-FL, whereas our weighted binning approach is best when DP is not required.
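The abstract names temperature scaling as one of the centralized methods the paper federates. As a rough illustration of that centralized building block only (not the authors' federated or DP variant; the grid-search fit is a simplifying assumption), a minimal sketch might look like:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    # Negative log-likelihood of the true labels at temperature T.
    p = softmax(logits / T)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # Pick the single scalar T minimizing held-out NLL;
    # T > 1 softens overconfident predictions without changing accuracy.
    return min(grid, key=lambda T: nll(T, logits, labels))
```

In the centralized setting, the logits and labels come from a held-out validation set; the paper's contribution is operating this kind of fit when that validation data is partitioned across heterogeneous clients.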
Related papers
- CO-PFL: Contribution-Oriented Personalized Federated Learning for Heterogeneous Networks [51.43780477302533]
Contribution-Oriented PFL (CO-PFL) is a novel algorithm that dynamically estimates each client's contribution for global aggregation. CO-PFL consistently surpasses state-of-the-art methods in personalization accuracy, robustness, scalability, and convergence stability.
arXiv Detail & Related papers (2025-10-23T05:10:06Z) - An Interactive Framework for Implementing Privacy-Preserving Federated Learning: Experiments on Large Language Models [7.539653242367701]
Federated learning (FL) enhances privacy by keeping user data on local devices. Recent attacks have demonstrated that updates shared by users during training can reveal significant information about their data. We propose a framework that integrates a human entity as a privacy practitioner to determine an optimal trade-off between the model's privacy and utility.
arXiv Detail & Related papers (2025-02-11T23:07:14Z) - Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private. We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z) - CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
arXiv Detail & Related papers (2024-09-20T00:23:44Z) - Unlocking the Potential of Model Calibration in Federated Learning [15.93119575317457]
We propose Non-Uniform Calibration for Federated Learning (NUCFL), a generic framework that integrates FL with the concept of model calibration. NUCFL addresses this challenge by dynamically adjusting the model calibration based on relationships between each client's local model and the global model in FL. By doing so, NUCFL effectively aligns calibration needs for the global model while not sacrificing accuracy.
arXiv Detail & Related papers (2024-09-07T20:11:11Z) - Enabling Differentially Private Federated Learning for Speech Recognition: Benchmarks, Adaptive Optimizers and Gradient Clipping [42.2819343711085]
We show that FL with DP is viable under strong privacy guarantees, provided a population of at least several million users. We achieve user-level $(7.2, 10^{-9})$-DP (resp. $(4.5, 10^{-9})$-DP) with only a 1.3% absolute drop in word error rate when extrapolating to high (resp. low) population scales for FL with DP in ASR.
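The user-level DP and gradient clipping that this line of work relies on can be sketched generically as clip-and-noise aggregation of per-user updates (an illustrative sketch; `clip_norm` and `noise_multiplier` are assumed parameter names, not values from the paper):

```python
import numpy as np

def dp_aggregate(user_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    # Clip each user's update to an L2 norm bound so any single user's
    # influence is bounded, then add Gaussian noise scaled to that bound.
    rng = rng or np.random.default_rng()
    clipped = [
        u * min(1.0, clip_norm / max(np.linalg.norm(u), 1e-12))
        for u in user_updates
    ]
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(user_updates)
```

The $(\epsilon, \delta)$ guarantee then follows from the noise multiplier and the number of rounds via standard DP accounting, which is outside this sketch.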
arXiv Detail & Related papers (2023-09-29T19:11:49Z) - Federated Conformal Predictors for Distributed Uncertainty Quantification [83.50609351513886]
Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning.
In this paper, we extend conformal prediction to the federated learning setting.
We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction framework.
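The centralized split-conformal construction that this paper extends to FL can be sketched as follows (standard split conformal prediction for classification, not the paper's partial-exchangeability variant; function names are illustrative):

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha=0.1):
    # Nonconformity score: 1 - predicted probability of the true class.
    n = len(cal_labels)
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level, capped at 1.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q_level, method="higher")

def prediction_set(probs, qhat):
    # All classes whose score falls within the calibrated threshold.
    return np.where(1.0 - probs <= qhat)[0]
```

Under exchangeability of calibration and test points, the resulting sets cover the true label with probability at least 1 - alpha; the paper's weaker partial-exchangeability notion relaxes that assumption for federated data.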
arXiv Detail & Related papers (2023-05-27T19:57:27Z) - FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z) - Cali3F: Calibrated Fast Fair Federated Recommendation System [25.388324221293203]
We propose a personalized federated recommendation system training algorithm to improve recommendation performance fairness.
We then adopt a clustering-based aggregation method to accelerate the training process.
Cali3F is a calibrated, fast, and fair federated recommendation framework.
arXiv Detail & Related papers (2022-05-26T03:05:26Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - On the Practicality of Differential Privacy in Federated Learning by Tuning Iteration Times [51.61278695776151]
Federated Learning (FL) is well known for its privacy protection when training machine learning models among distributed clients collaboratively.
Recent studies have pointed out that the naive FL is susceptible to gradient leakage attacks.
Differential Privacy (DP) emerges as a promising countermeasure to defend against gradient leakage attacks.
arXiv Detail & Related papers (2021-01-11T19:43:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.