Privacy-Aware Federated nnU-Net for ECG Page Digitization
- URL: http://arxiv.org/abs/2510.22387v1
- Date: Sat, 25 Oct 2025 18:10:05 GMT
- Title: Privacy-Aware Federated nnU-Net for ECG Page Digitization
- Authors: Nader Nemati
- Abstract summary: Deep neural networks can convert ECG page images into analyzable waveforms, yet centralized training often conflicts with cross-institutional privacy and deployment constraints. A cross-silo federated digitization framework is presented that trains a full-model nnU-Net segmentation backbone without sharing images and aggregates updates across sites. The protocol integrates three standard server-side aggregators--FedAvg, FedProx, and FedAdam--and couples secure aggregation with central, user-level differential privacy to align utility with formal guarantees.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks can convert ECG page images into analyzable waveforms, yet centralized training often conflicts with cross-institutional privacy and deployment constraints. A cross-silo federated digitization framework is presented that trains a full-model nnU-Net segmentation backbone without sharing images and aggregates updates across sites under realistic non-IID heterogeneity (layout, grid style, scanner profile, noise). The protocol integrates three standard server-side aggregators--FedAvg, FedProx, and FedAdam--and couples secure aggregation with central, user-level differential privacy to align utility with formal guarantees. Key features include: (i) end-to-end full-model training and synchronization across clients; (ii) secure aggregation so the server only observes a clipped, weighted sum once a participation threshold is met; (iii) central Gaussian DP with Rényi accounting applied post-aggregation for auditable user-level privacy; and (iv) a calibration-aware digitization pipeline comprising page normalization, trace segmentation, grid-leakage suppression, and vectorization to twelve-lead signals. Experiments on ECG pages rendered from PTB-XL show consistently faster convergence and higher late-round plateaus with adaptive server updates (FedAdam) relative to FedAvg and FedProx, while approaching centralized performance. The privacy mechanism maintains competitive accuracy while preventing exposure of raw images or per-client updates, yielding deployable, auditable guarantees suitable for multi-institution settings.
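Features (i)-(iii) describe a recognizable server-side recipe: clip each client's update to bound user-level sensitivity, form the weighted sum (the only quantity a secure aggregator would reveal to the server), add central Gaussian noise, and apply an adaptive FedAdam server step. The sketch below illustrates one such round under assumed hyperparameters; it is a minimal reconstruction, not the paper's implementation:

```python
import numpy as np

def clip(update, c):
    """Clip an update to L2 norm at most c (user-level sensitivity bound)."""
    n = np.linalg.norm(update)
    return update * min(1.0, c / n) if n > 0 else update

def central_dp_fedadam_round(theta, client_updates, weights, clip_norm=1.0,
                             noise_mult=1.0, lr=1e-2, beta1=0.9, beta2=0.99,
                             tau=1e-3, m=None, v=None):
    """One server round: clipped weighted sum (as revealed by secure
    aggregation), central Gaussian noise, then a FedAdam server step."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    # Secure aggregation would reveal only this clipped, weighted sum.
    agg = sum(wi * clip(u, clip_norm) for wi, u in zip(w, client_updates))
    # Central Gaussian DP applied post-aggregation; sensitivity of the
    # weighted sum to one user is at most clip_norm * max weight.
    agg += np.random.normal(0.0, noise_mult * clip_norm * w.max(),
                            size=agg.shape)
    # FedAdam: treat the noisy aggregate as a pseudo-gradient.
    m = np.zeros_like(theta) if m is None else m
    v = np.zeros_like(theta) if v is None else v
    m = beta1 * m + (1 - beta1) * agg
    v = beta2 * v + (1 - beta2) * agg ** 2
    theta = theta + lr * m / (np.sqrt(v) + tau)
    return theta, m, v
```

In a real deployment the noise multiplier would be chosen by a Rényi-DP accountant for the target (epsilon, delta) over all training rounds.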
Related papers
- A Secure and Private Distributed Bayesian Federated Learning Design [56.92336577799572]
Distributed Federated Learning (DFL) enables decentralized model training across large-scale systems without a central parameter server. DFL faces three critical challenges: privacy leakage from honest-but-curious neighbors, slow convergence due to the lack of central coordination, and vulnerability to Byzantine adversaries aiming to degrade model accuracy. We propose a novel DFL framework that integrates Byzantine robustness, privacy preservation, and convergence acceleration.
arXiv Detail & Related papers (2026-02-23T16:12:02Z) - CityGuard: Graph-Aware Private Descriptors for Bias-Resilient Identity Search Across Urban Cameras [16.147944008359957]
CityGuard is a topology-aware transformer for privacy-preserving identity retrieval in decentralized surveillance. A dispersion-adaptive metric learner adjusts instance-level margins according to feature spread, increasing intra-class compactness. Private embedding maps are coupled with compact approximate indexes to support secure and cost-efficient deployment.
arXiv Detail & Related papers (2026-02-20T08:00:17Z) - Replacing Parameters with Preferences: Federated Alignment of Heterogeneous Vision-Language Models [63.70401095689976]
We argue that replacing parameters with preferences represents a more scalable and privacy-preserving future. We propose MoR, a federated alignment framework based on GRPO with Mixture-of-Rewards for heterogeneous VLMs. MoR consistently outperforms federated alignment baselines in generalization, robustness, and cross-client adaptability.
arXiv Detail & Related papers (2026-01-31T03:11:51Z) - FedHypeVAE: Federated Learning with Hypernetwork Generated Conditional VAEs for Differentially Private Embedding Sharing [8.063829694260594]
FedHypeVAE is a differentially private, hypernetwork-driven framework for embedding-level data sharing across decentralized clients. The shared hypernetwork is optimized under differential privacy, ensuring only noise-perturbed, clipped gradients are aggregated across clients.
arXiv Detail & Related papers (2026-01-02T18:40:41Z) - Secure, Verifiable, and Scalable Multi-Client Data Sharing via Consensus-Based Privacy-Preserving Data Distribution [0.0]
CPPDD is an autonomous protocol for secure multi-client data aggregation. It enforces unanimous-release confidentiality through a dual-layer protection mechanism. It achieves 100% malicious deviation detection, exact data recovery, and three-to-four orders of magnitude lower FLOPs compared to MPC and HE baselines.
arXiv Detail & Related papers (2026-01-01T18:12:50Z) - Towards Federated Clustering: A Client-wise Private Graph Aggregation Framework [57.04850867402913]
Federated clustering addresses the challenge of extracting patterns from decentralized, unlabeled data. We propose Structural Privacy-Preserving Federated Graph Clustering (SPP-FGC), a novel algorithm that innovatively leverages local structural graphs as the primary medium for privacy-preserving knowledge sharing. Our framework achieves state-of-the-art performance, improving clustering accuracy by up to 10% (NMI) over federated baselines while maintaining provable privacy guarantees.
arXiv Detail & Related papers (2025-11-14T03:05:22Z) - Perfectly-Private Analog Secure Aggregation in Federated Learning [51.61616734974475]
In federated learning, multiple parties train models locally and share their parameters with a central server, which aggregates them to update a global model. In this paper, a novel secure parameter aggregation method is proposed that employs the torus rather than a finite field.
arXiv Detail & Related papers (2025-09-10T15:22:40Z) - Variational Gaussian Mixture Manifold Models for Client-Specific Federated Personalization [0.0]
VGM$^2$ is a geometry-centric PFL framework that learns client-specific parametric UMAP embeddings. Each client maintains a Dirichlet-Normal-Inverse-Gamma posterior over marker weights, means, and variances. VGM$^2$ achieves competitive or superior test F1 scores compared to strong baselines.
arXiv Detail & Related papers (2025-09-04T01:28:02Z) - Federated Cross-Domain Click-Through Rate Prediction With Large Language Model Augmentation [4.978132660177235]
We present Federated Cross-Domain CTR Prediction with Large Language Model Augmentation (FedCCTR-LM). Our approach integrates three core innovations. First, the Privacy-Preserving Augmentation Network (PrivNet) employs large language models to enrich user and item representations. Second, the Independent Domain-Specific Transformer with Contrastive Learning (IDST-CL) module disentangles domain-specific and shared user preferences. Third, the Adaptive Local Differential Privacy (AdaLDP) mechanism dynamically calibrates noise injection to achieve an optimal balance between rigorous privacy guarantees and predictive accuracy.
arXiv Detail & Related papers (2025-03-21T06:22:42Z) - Optimizing Cross-Client Domain Coverage for Federated Instruction Tuning of Large Language Models [87.49293964617128]
Federated domain-specific instruction tuning (FedDIT) for large language models (LLMs) aims to enhance performance in specialized domains using distributed private and limited data. We empirically establish that cross-client domain coverage, rather than data heterogeneity, is the pivotal factor. We introduce FedDCA, an algorithm that explicitly maximizes this coverage through diversity-oriented client center selection and retrieval-based augmentation.
arXiv Detail & Related papers (2024-09-30T09:34:31Z) - PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
We propose a Parameter-Efficient Federated Anomaly Detection framework, named PeFAD, to address increasing privacy concerns.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z) - Stochastic Coded Federated Learning with Convergence and Privacy Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a stochastic coded federated learning framework, namely SCFL, to mitigate the straggler issue.
We characterize the privacy guarantee via mutual information differential privacy (MI-DP) and analyze the convergence performance of the framework.
arXiv Detail & Related papers (2022-01-25T04:43:29Z) - Robust Semi-supervised Federated Learning for Images Automatic Recognition in Internet of Drones [57.468730437381076]
We present a Semi-supervised Federated Learning (SSFL) framework for privacy-preserving UAV image recognition.
There are significant differences in the number, features, and distribution of local data collected by UAVs using different camera modules.
We propose an aggregation rule based on the frequency of the client's participation in training, namely the FedFreq aggregation rule.
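One plausible reading of a participation-frequency-based rule is to weight each client's update by how often it has taken part in training; the summary does not give the exact FedFreq formula, so the proportional weighting below is an assumption for illustration only:

```python
def fedfreq_weights(participation_counts):
    """Assumed FedFreq-style weights: proportional to each client's
    participation frequency across rounds (hypothetical formula)."""
    total = sum(participation_counts.values())
    return {c: n / total for c, n in participation_counts.items()}

def aggregate(updates, participation_counts):
    """Weighted average of scalar or array client updates under the
    assumed frequency-based weights."""
    w = fedfreq_weights(participation_counts)
    return sum(w[c] * u for c, u in updates.items())
```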
arXiv Detail & Related papers (2022-01-03T16:49:33Z) - Gradient-Leakage Resilient Federated Learning [8.945356237213007]
Federated learning (FL) is an emerging distributed learning paradigm with default client privacy.
Recent studies reveal that gradient leakages in FL may compromise the privacy of client training data.
This paper presents a gradient leakage resilient approach to privacy-preserving federated learning with per-training-example client differential privacy.
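Per-example client differential privacy typically builds on the DP-SGD recipe: clip each example's gradient to bound its influence, sum, and add Gaussian noise so that released gradients no longer leak individual training examples. A generic sketch of that recipe (not this paper's specific mechanism):

```python
import numpy as np

def dp_sgd_step(theta, per_example_grads, clip_norm=1.0, noise_mult=1.0,
                lr=0.1, rng=None):
    """One DP-SGD step: per-example L2 clipping bounds each example's
    contribution, then Gaussian noise masks any single gradient."""
    rng = np.random.default_rng() if rng is None else rng
    clipped = []
    for g in per_example_grads:
        n = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / n) if n > 0 else g)
    noisy_sum = sum(clipped) + rng.normal(0.0, noise_mult * clip_norm,
                                          size=theta.shape)
    return theta - lr * noisy_sum / len(per_example_grads)
```

Because each clipped gradient has norm at most `clip_norm`, the noise scale needed for a given privacy level is independent of the raw gradient magnitudes.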
arXiv Detail & Related papers (2021-07-02T15:51:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.