PPFL-RDSN: Privacy-Preserving Federated Learning-based Residual Dense Spatial Networks for Encrypted Lossy Image Reconstruction
- URL: http://arxiv.org/abs/2507.00230v3
- Date: Mon, 27 Oct 2025 19:09:31 GMT
- Title: PPFL-RDSN: Privacy-Preserving Federated Learning-based Residual Dense Spatial Networks for Encrypted Lossy Image Reconstruction
- Authors: Peilin He, James Joshi
- Abstract summary: Reconstructing high-quality images from low-resolution inputs using Residual Dense Spatial Networks (RDSNs) is crucial yet challenging. Centralized training with multiple collaborating parties poses significant privacy risks, including data leakage and inference attacks, as well as high computational and communication costs. We propose a novel Privacy-Preserving Federated Learning-based RDSN framework specifically tailored for encrypted lossy image reconstruction.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reconstructing high-quality images from low-resolution inputs using Residual Dense Spatial Networks (RDSNs) is crucial yet challenging. It is even more challenging in centralized training where multiple collaborating parties are involved, as it poses significant privacy risks, including data leakage and inference attacks, as well as high computational and communication costs. We propose a novel Privacy-Preserving Federated Learning-based RDSN (PPFL-RDSN) framework specifically tailored for encrypted lossy image reconstruction. PPFL-RDSN integrates Federated Learning (FL), local differential privacy, and robust model watermarking techniques to ensure that data remains secure on local clients/devices, safeguards privacy-sensitive information, and maintains model authenticity without revealing underlying data. Empirical evaluations show that PPFL-RDSN achieves comparable performance to the state-of-the-art centralized methods while reducing computational burdens, and effectively mitigates security and privacy vulnerabilities, making it a practical solution for secure and privacy-preserving collaborative computer vision applications.
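The abstract describes keeping data on local clients by combining federated averaging with local differential privacy. A minimal sketch of the standard recipe that combination usually implies, clipping each client's model update and adding Gaussian noise before it leaves the device (the function names, clipping bound, and noise multiplier are illustrative assumptions, not the paper's actual implementation):

```python
import numpy as np

def privatize_update(update, clip_norm=1.0, noise_multiplier=0.5, rng=None):
    """Clip a client's update to an L2 bound and add Gaussian noise --
    the Gaussian-mechanism recipe commonly used for local DP in FL."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)
    return clipped + noise

def federated_average(updates):
    """Server-side aggregation: plain mean of the privatized updates."""
    return np.mean(updates, axis=0)

# Toy round: three clients privatize locally, the server only sees noisy updates.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=8) for _ in range(3)]
noisy = [privatize_update(u, rng=rng) for u in client_updates]
global_update = federated_average(noisy)
```

The server never observes a raw update, which is what blunts the inference attacks the abstract mentions; the privacy/utility trade-off is then tuned through the clipping bound and noise multiplier.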
Related papers
- A Secure and Private Distributed Bayesian Federated Learning Design [56.92336577799572]
Distributed Federated Learning (DFL) enables decentralized model training across large-scale systems without a central parameter server. DFL faces three critical challenges: privacy leakage from honest-but-curious neighbors, slow convergence due to the lack of central coordination, and vulnerability to Byzantine adversaries aiming to degrade model accuracy. We propose a novel DFL framework that integrates Byzantine robustness, privacy preservation, and convergence acceleration.
arXiv Detail & Related papers (2026-02-23T16:12:02Z) - SIDeR: Semantic Identity Decoupling for Unrestricted Face Privacy [53.75084833636302]
We propose SIDeR, a semantic decoupling-driven framework for unrestricted face privacy protection. SIDeR decomposes a facial image into a machine-recognizable identity feature vector and a visually perceptible semantic appearance component. For authorized access, the protected image can be restored to its original form when the correct password is provided.
arXiv Detail & Related papers (2026-02-04T19:30:48Z) - RL-MoE: An Image-Based Privacy Preserving Approach In Intelligent Transportation System [0.9831489366502302]
We propose RL-MoE, a novel framework that transforms sensitive visual data into privacy-preserving textual descriptions. RL-MoE combines a Mixture-of-Experts (MoE) architecture for nuanced, multi-aspect scene decomposition with a Reinforcement Learning (RL) agent. Our work provides a practical and scalable solution for building trustworthy AI systems in privacy-sensitive domains.
arXiv Detail & Related papers (2025-08-07T18:07:54Z) - VFEFL: Privacy-Preserving Federated Learning against Malicious Clients via Verifiable Functional Encryption [3.329039715890632]
Federated learning is a promising distributed learning paradigm that enables collaborative model training without exposing local client data. The distributed nature of federated learning makes it particularly vulnerable to attacks mounted by malicious clients. This paper proposes a privacy-preserving federated learning framework based on verifiable functional encryption.
arXiv Detail & Related papers (2025-06-15T13:38:40Z) - Privacy-Preserving Federated Embedding Learning for Localized Retrieval-Augmented Generation [60.81109086640437]
We propose a novel framework called Federated Retrieval-Augmented Generation (FedE4RAG). FedE4RAG facilitates collaborative training of client-side RAG retrieval models. We apply homomorphic encryption within federated learning to safeguard model parameters.
arXiv Detail & Related papers (2025-04-27T04:26:02Z) - Enhancing Privacy in Semantic Communication over Wiretap Channels leveraging Differential Privacy [51.028047763426265]
Semantic communication (SemCom) improves transmission efficiency by focusing on task-relevant information. However, transmitting semantic-rich data over insecure channels introduces privacy risks. This paper proposes a novel SemCom framework that integrates differential privacy mechanisms to protect sensitive semantic features.
arXiv Detail & Related papers (2025-04-23T08:42:44Z) - FedEM: A Privacy-Preserving Framework for Concurrent Utility Preservation in Federated Learning [17.853502904387376]
Federated Learning (FL) enables collaborative training of models across distributed clients without sharing local data, addressing privacy concerns in decentralized systems. We propose Federated Error Minimization (FedEM), a novel algorithm that incorporates controlled perturbations through adaptive noise injection. Experimental results on benchmark datasets demonstrate that FedEM significantly reduces privacy risks and preserves model accuracy, achieving a robust balance between privacy protection and utility preservation.
arXiv Detail & Related papers (2025-03-08T02:48:00Z) - FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm. Recent research has revealed that private ground-truth data can be recovered through a gradient technique known as Deep Leakage. This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z) - Homomorphic Encryption-Enabled Federated Learning for Privacy-Preserving Intrusion Detection in Resource-Constrained IoV Networks [20.864048794953664]
This paper proposes a novel framework to address the data privacy issue for Federated Learning (FL)-based Intrusion Detection Systems (IDSs) in Internet-of-Vehicles (IoVs) with limited computational resources.
We first propose a highly-effective framework using homomorphic encryption to secure data that requires offloading to a centralized server for processing.
We develop an effective training algorithm tailored to handle the challenges of FL-based systems with encrypted data.
arXiv Detail & Related papers (2024-07-26T04:19:37Z) - PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z) - FedML-HE: An Efficient Homomorphic-Encryption-Based Privacy-Preserving Federated Learning System [24.39699808493429]
Federated Learning trains machine learning models on distributed devices by aggregating local model updates instead of local data.
Privacy concerns arise as the aggregated local models on the server may reveal sensitive personal information by inversion attacks.
We present FedML-HE, the first practical federated learning system with efficient HE-based secure model aggregation.
arXiv Detail & Related papers (2023-03-20T02:44:35Z) - Privacy-Preserving Joint Edge Association and Power Optimization for the Internet of Vehicles via Federated Multi-Agent Reinforcement Learning [74.53077322713548]
We investigate the privacy-preserving joint edge association and power allocation problem.
The proposed solution strikes a compelling trade-off, while preserving a higher privacy level than the state-of-the-art solutions.
arXiv Detail & Related papers (2023-01-26T10:09:23Z) - Federated Deep Learning with Bayesian Privacy [28.99404058773532]
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users.
Homomorphic encryption (HE) based methods provide secure privacy protections but suffer from extremely high computational and communication overheads.
Deep learning with Differential Privacy (DP) was implemented as a practical learning algorithm at a manageable cost in complexity.
arXiv Detail & Related papers (2021-09-27T12:48:40Z) - CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
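Several of the entries above (FedML-HE, the IoV intrusion-detection framework, FedE4RAG) rely on homomorphic encryption for secure aggregation. The core idea is additive homomorphism: in a scheme such as Paillier, multiplying two ciphertexts yields an encryption of the sum of their plaintexts, so a server can aggregate encrypted model updates without decrypting any individual one. A toy sketch of that property using textbook Paillier with deliberately tiny hard-coded primes (for illustration only, far too small to be secure, and not any of these systems' actual implementations):

```python
import math
import random

# Textbook Paillier keypair with tiny primes (illustration only, insecure).
p, q = 2357, 2551
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # precomputed decryption constant

def encrypt(m):
    """Enc(m) = g^m * r^n mod n^2, with random r coprime to n."""
    while True:
        r = random.randrange(2, n)
        if math.gcd(r, n) == 1:
            break
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Dec(c) = L(c^lambda mod n^2) * mu mod n."""
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: the server multiplies ciphertexts to sum plaintexts,
# learning nothing about the individual contributions.
c1, c2 = encrypt(1234), encrypt(4321)
aggregated = (c1 * c2) % n2
assert decrypt(aggregated) == 1234 + 4321
```

The high computational and communication overheads that the Bayesian-privacy entry above attributes to HE-based methods come from exactly these modular exponentiations over large moduli, which is why several of the listed works pair or replace HE with cheaper differential-privacy mechanisms.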
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.