Re-Evaluating Privacy in Centralized and Decentralized Learning: An Information-Theoretical and Empirical Study
- URL: http://arxiv.org/abs/2409.14261v1
- Date: Sat, 21 Sep 2024 23:05:50 GMT
- Title: Re-Evaluating Privacy in Centralized and Decentralized Learning: An Information-Theoretical and Empirical Study
- Authors: Changlong Ji, Stephane Maag, Richard Heusdens, Qiongxiu Li
- Abstract summary: Decentralized Federated Learning (DFL) has garnered attention for its robustness and scalability.
Recent work by Pasquini et al. challenges this view, demonstrating that DFL does not inherently improve privacy against empirical attacks.
- Score: 4.7773230870500605
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Decentralized Federated Learning (DFL) has garnered attention for its robustness and scalability compared to Centralized Federated Learning (CFL). While DFL is commonly believed to offer privacy advantages due to the decentralized control of sensitive data, recent work by Pasquini et al. challenges this view, demonstrating that DFL does not inherently improve privacy against empirical attacks under certain assumptions. Fully investigating this issue requires a formal theoretical framework. Our study offers a novel perspective by conducting a rigorous information-theoretical analysis of privacy leakage in FL using mutual information. We further investigate the effectiveness of privacy-enhancing techniques like Secure Aggregation (SA) in both CFL and DFL. Our simulations and real-world experiments show that DFL generally offers stronger privacy preservation than CFL in practical scenarios where a fully trusted server is not available. We address discrepancies with previous research by highlighting limitations in its assumptions about graph topology and privacy attacks, which inadequately capture information leakage in FL.
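As background for the Secure Aggregation (SA) discussion above, the sketch below illustrates the core pairwise-masking idea behind SA: each pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the sum and the server learns only the aggregate update. This is a generic illustration under assumed parameters (client count, vector size, and plain Gaussian masks are arbitrary choices here), not the exact protocol analyzed in the paper.

```python
# Minimal sketch of pairwise-masking Secure Aggregation (illustration only;
# not the exact protocol analyzed in the paper). For each client pair (i, j),
# client i adds a shared random mask and client j subtracts it, so the masks
# cancel in the sum and the server sees only the aggregate.
import numpy as np

rng = np.random.default_rng(0)
n_clients, dim = 4, 8

# Private model updates (stand-ins for local gradients).
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# Pairwise masks: masks[(i, j)] is shared between clients i and j (i < j).
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    """What client i actually sends: its update plus/minus pairwise masks."""
    out = updates[i].copy()
    for j in range(n_clients):
        if i < j:
            out += masks[(i, j)]
        elif j < i:
            out -= masks[(j, i)]
    return out

server_sum = sum(masked_update(i) for i in range(n_clients))
true_sum = sum(updates)

# Masks cancel: the server recovers the exact aggregate, while each
# individual masked update on its own looks like noise.
assert np.allclose(server_sum, true_sum)
print("aggregate recovered:", np.allclose(server_sum, true_sum))
```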
Related papers
- Privacy Attack in Federated Learning is Not Easy: An Experimental Study [5.065947993017158]
Federated learning (FL) is an emerging distributed machine learning paradigm proposed for privacy preservation.
Recent studies have indicated that FL cannot entirely guarantee privacy protection.
It remains uncertain whether privacy attack algorithms are effective in realistic federated environments.
arXiv Detail & Related papers (2024-09-28T10:06:34Z) - Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization [16.418338197742287]
Federated learning (FL) emerged as a paradigm designed to improve data privacy by enabling data to reside at its source.
Recent findings suggest that decentralized FL does not empirically offer any additional privacy or security benefits over centralized models.
We demonstrate that decentralized FL, when deploying distributed optimization, provides enhanced privacy protection.
arXiv Detail & Related papers (2024-07-12T15:01:09Z) - Vertical Federated Learning for Effectiveness, Security, Applicability: A Survey [67.48187503803847]
Vertical Federated Learning (VFL) is a privacy-preserving distributed learning paradigm.
Recent research has shown promising results addressing various challenges in VFL.
This survey offers a systematic overview of recent developments.
arXiv Detail & Related papers (2024-05-25T16:05:06Z) - Analysis of Privacy Leakage in Federated Large Language Models [18.332535398635027]
We study the privacy analysis of Federated Learning (FL) when used for training Large Language Models (LLMs).
In particular, we design two active membership inference attacks with guaranteed theoretical success rates to assess the privacy leakage of various adapted FL configurations.
Our theoretical findings are translated into practical attacks, revealing substantial privacy vulnerabilities in popular LLMs, including BERT, RoBERTa, DistilBERT, and OpenAI's GPTs.
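For context on membership inference: the active attacks designed in that paper come with theoretical success guarantees; the sketch below shows only the simplest passive baseline, which thresholds the model's per-example loss (members of the training set tend to incur lower loss). The threshold and toy probabilities are arbitrary stand-ins, not values from the paper.

```python
# Minimal sketch of a passive loss-threshold membership inference baseline
# (a far simpler attack than the active attacks designed in the paper).
# Records the model fits well (low loss) are guessed to be training members.
import numpy as np

def cross_entropy(probs, label):
    """Per-example cross-entropy loss given predicted class probabilities."""
    return -np.log(probs[label] + 1e-12)

def infer_membership(model_probs, label, threshold=0.5):
    """Guess 'member' if the model's loss on the record is below threshold.

    threshold: tuned on shadow data in practice; 0.5 is an arbitrary stand-in.
    """
    return cross_entropy(model_probs, label) < threshold

# Toy illustration: a confidently-correct prediction vs. an uncertain one.
member_like = np.array([0.9, 0.05, 0.05])      # low loss on true label 0
non_member_like = np.array([0.4, 0.35, 0.25])  # higher loss on true label 0
print(infer_membership(member_like, 0))      # True  -> guessed member
print(infer_membership(non_member_like, 0))  # False -> guessed non-member
```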
arXiv Detail & Related papers (2024-03-02T20:25:38Z) - Towards Understanding Generalization and Stability Gaps between Centralized and Decentralized Federated Learning [57.35402286842029]
We show that centralized federated learning (CFL) always generalizes better than decentralized federated learning (DFL).
We also conduct experiments on several common FL setups to validate that our theoretical analysis is consistent with experimental observations in general and practical scenarios.
arXiv Detail & Related papers (2023-10-05T11:09:42Z) - Decentralized Federated Learning: A Survey and Perspective [45.81975053649379]
Decentralized FL (DFL) is a decentralized network architecture that eliminates the need for a central server.
DFL enables direct communication between clients, resulting in significant savings in communication resources.
arXiv Detail & Related papers (2023-06-02T15:12:58Z) - Federated Learning with Privacy-Preserving Ensemble Attention
Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z) - Unraveling the Connections between Privacy and Certified Robustness in
Federated Learning Against Poisoning Attacks [68.20436971825941]
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
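As background for "trained in a differentially private way": in FL this usually means DP-SGD-style updates, where each client contribution is clipped to an L2 norm bound and Gaussian noise calibrated to that bound is added before aggregation. The sketch below is a generic illustration; the clip norm and noise multiplier are arbitrary stand-ins, not tuned values from any cited paper.

```python
# Minimal sketch of the DP-SGD-style update often used to make FL
# differentially private: clip each client's update to an L2 bound,
# then add Gaussian noise scaled to that bound to the aggregate.
import numpy as np

rng = np.random.default_rng(0)

def clip(update, max_norm=1.0):
    """Scale the update down so its L2 norm is at most max_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / (norm + 1e-12))

def private_aggregate(updates, max_norm=1.0, noise_multiplier=1.0):
    """Sum clipped client updates, then add Gaussian noise of scale
    noise_multiplier * max_norm (the sensitivity of the clipped sum)."""
    clipped_sum = sum(clip(u, max_norm) for u in updates)
    noise = rng.normal(scale=noise_multiplier * max_norm,
                       size=clipped_sum.shape)
    return clipped_sum + noise

updates = [rng.normal(size=8) for _ in range(4)]
print(private_aggregate(updates))
```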
arXiv Detail & Related papers (2022-09-08T21:01:42Z) - Desirable Companion for Vertical Federated Learning: New Zeroth-Order
Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability, which is inseparably scalable.
arXiv Detail & Related papers (2022-03-19T13:55:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.