Differentially private federated learning for localized control of infectious disease dynamics
- URL: http://arxiv.org/abs/2509.14024v1
- Date: Wed, 17 Sep 2025 14:28:04 GMT
- Title: Differentially private federated learning for localized control of infectious disease dynamics
- Authors: Raouf Kerkouche, Henrik Zunker, Mario Fritz, Martin J. Kühn,
- Abstract summary: In times of epidemics, swift reaction is necessary to mitigate epidemic spreading. In this study, we consider a localized strategy based on the German counties and communities managed by the related local health authorities (LHA). So that preserving privacy does not preclude the availability of detailed situational data, we propose a privacy-preserving forecasting method.
- Score: 35.86592939525651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In times of epidemics, swift reaction is necessary to mitigate epidemic spreading. For this reaction, localized approaches have several advantages, limiting necessary resources and reducing the impact of interventions on a larger scale. However, training a separate machine learning (ML) model on a local scale is often not feasible due to limited available data. Centralizing the data is also challenging because of its high sensitivity and privacy constraints. In this study, we consider a localized strategy based on the German counties and communities managed by the related local health authorities (LHA). So that preserving privacy does not preclude the availability of detailed situational data, we propose a privacy-preserving forecasting method that can assist public health experts and decision makers. ML methods with federated learning (FL) train a shared model without centralizing raw data. Considering the counties, communities or LHAs as clients and finding a balance between utility and privacy, we study a FL framework with client-level differential privacy (DP). We train a shared multilayer perceptron on sliding windows of recent case counts to forecast the number of cases, while clients exchange only norm-clipped updates and the server aggregates them with DP noise. We evaluate the approach on county-level COVID-19 data during two phases. As expected, very strict privacy yields unstable, unusable forecasts. At a moderately strong level, the DP model closely approaches the non-DP model: $R^2= 0.94$ (vs. 0.95) and mean absolute percentage error (MAPE) of 26 % in November 2020; $R^2= 0.88$ (vs. 0.93) and MAPE of 21 % in March 2022. Overall, client-level DP-FL can deliver useful county-level predictions with strong privacy guarantees, and viable privacy budgets depend on epidemic phase, allowing privacy-compliant collaboration among health authorities for local forecasting.
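The aggregation step described in the abstract (clients send norm-clipped updates, the server averages them and adds DP noise) follows the standard client-level DP federated averaging recipe. The sketch below is a minimal illustration of that recipe, not the authors' implementation; function names, the clipping bound, and the noise multiplier are illustrative assumptions.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale a client's model update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_fedavg_round(client_updates, clip_norm, noise_multiplier, rng):
    """One round of client-level DP federated averaging: clip each client's
    update, average the clipped updates, then add Gaussian noise calibrated
    to the per-client sensitivity (clip_norm / n)."""
    n = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / n
    return avg + rng.normal(0.0, sigma, size=avg.shape)

# Toy example: three clients, each submitting a 4-parameter update.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]
noisy_avg = dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

In a setup like the paper's, each "client" would be a county or LHA and the update would be the local gradient step of the shared multilayer perceptron; the privacy budget then follows from the noise multiplier and the number of rounds via a standard DP accountant.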
Related papers
- Local Differential Privacy for Federated Learning with Fixed Memory Usage and Per-Client Privacy [10.651246049060166]
Local differential privacy (LDP) offers strong protection by letting each participant privatize updates before transmission. However, the costs of existing LDP mechanisms undermine model generalizability, fairness, and compliance with regulations such as HIPAA. We propose L-RDP, a DP method designed for LDP that ensures constant, lower memory usage to reduce dropouts and provides rigorous per-client privacy guarantees.
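The key difference from the server-side scheme above is where the noise is added: under LDP, each client privatizes its own update before transmission, so the server never observes a raw update. A minimal sketch of this client-side step, with illustrative parameter choices (not taken from the L-RDP paper):

```python
import numpy as np

def privatize_locally(update, clip_norm, sigma, rng):
    """Client-side privatization under local DP: clip the update to a
    bounded L2 norm, then add Gaussian noise before transmission, so
    the raw update never leaves the client."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma, size=update.shape)

# A client privatizes an 8-parameter update before sending it.
rng = np.random.default_rng(1)
raw_update = rng.normal(size=8)
sent_update = privatize_locally(raw_update, clip_norm=0.5, sigma=0.8, rng=rng)
```

Because every client adds noise independently, the per-round noise in the aggregate is larger than in the central-DP variant; this is the utility cost that motivates work on tighter LDP mechanisms.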
arXiv Detail & Related papers (2025-10-14T18:32:08Z)
- Federated Learning with Enhanced Privacy via Model Splitting and Random Client Participation [7.780051713043537]
Federated Learning (FL) often adopts differential privacy (DP) to protect client data, but the added noise required for privacy guarantees can substantially degrade model accuracy. We propose model-splitting privacy-amplified federated learning (MS-PAFL). In this framework, each client's model is partitioned into a private submodel, retained locally, and a public submodel, shared for global aggregation.
arXiv Detail & Related papers (2025-09-30T07:51:06Z)
- An Interactive Framework for Implementing Privacy-Preserving Federated Learning: Experiments on Large Language Models [7.539653242367701]
Federated learning (FL) enhances privacy by keeping user data on local devices. Recent attacks have demonstrated that updates shared by users during training can reveal significant information about their data. We propose a framework that integrates a human entity as a privacy practitioner to determine an optimal trade-off between the model's privacy and utility.
arXiv Detail & Related papers (2025-02-11T23:07:14Z)
- Differentially Private Distributed Inference [2.4401219403555814]
Healthcare centers collaborating on clinical trials must balance knowledge sharing with safeguarding sensitive patient data. We address this challenge by using differential privacy (DP) to control information leakage. Agents update belief statistics via log-linear rules, and DP noise provides plausible deniability and rigorous performance guarantees.
arXiv Detail & Related papers (2024-02-13T01:38:01Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Practical Challenges in Differentially-Private Federated Survival Analysis of Medical Data [57.19441629270029]
In this paper, we take advantage of the inherent properties of neural networks to federate the process of training of survival analysis models.
In the realistic setting of small medical datasets and only a few data centers, this noise makes it harder for the models to converge.
We propose DPFed-post which adds a post-processing stage to the private federated learning scheme.
arXiv Detail & Related papers (2022-02-08T10:03:24Z)
- Locally Differentially Private Bayesian Inference [23.882144188177275]
Local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in several scenarios when the aggregator is not trustworthy.
We provide a noise-aware probabilistic modeling framework, which allows Bayesian inference to take into account the noise added for privacy under LDP.
arXiv Detail & Related papers (2021-10-27T13:36:43Z)
- Voting-based Approaches For Differentially Private Federated Learning [87.2255217230752]
This work is inspired by the knowledge-transfer approach to non-federated private learning of Papernot et al.
We design two new DPFL schemes, by voting among the data labels returned from each local model, instead of averaging the gradients.
Our approaches significantly improve the privacy-utility trade-off over the state-of-the-arts in DPFL.
arXiv Detail & Related papers (2020-10-09T23:55:19Z)
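The voting idea summarized above replaces gradient averaging with label aggregation: each local model predicts a label for a public example, and the server releases only a noisy majority vote. A minimal sketch of such an aggregation step (an illustration of the general noisy-vote idea, not the paper's exact scheme):

```python
import numpy as np

def noisy_label_vote(local_predictions, num_classes, noise_scale, rng):
    """Aggregate local models' predicted labels for one public example:
    count votes per class, add Laplace noise to the counts, and return
    the argmax. Only the noisy winning label is released, never the
    gradients or raw vote counts."""
    counts = np.bincount(local_predictions, minlength=num_classes).astype(float)
    counts += rng.laplace(0.0, noise_scale, size=num_classes)
    return int(np.argmax(counts))

# Ten local models vote on one public example; class 0 has a clear majority.
rng = np.random.default_rng(2)
votes = np.array([0, 0, 0, 0, 0, 0, 0, 1, 2, 0])
label = noisy_label_vote(votes, num_classes=3, noise_scale=0.5, rng=rng)
```

Because changing one client's data changes at most one vote, the counts have low sensitivity, which is what makes the noisy argmax cheap in privacy budget when the vote margin is large.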
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.