Regulating Clients' Noise Adding in Federated Learning without
Verification
- URL: http://arxiv.org/abs/2302.12735v1
- Date: Fri, 24 Feb 2023 16:44:15 GMT
- Title: Regulating Clients' Noise Adding in Federated Learning without
Verification
- Authors: Shu Hong, Lingjie Duan
- Abstract summary: In federated learning, clients cooperatively train a global model by sharing only gradients or parameters rather than their raw data.
Driven by privacy concerns about these shared updates, a client may add excessive artificial noise to its local updates, compromising the global model training.
This paper proposes a novel pricing mechanism to regulate privacy-sensitive clients without verifying their parameter updates.
- Score: 24.196751469021848
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In federated learning (FL), clients cooperatively train a global
model by sharing only gradients or parameters rather than their raw data, yet
local information can still be disclosed from the local outputs transmitted to
the parameter server. Driven by such privacy concerns, a client may add
excessive artificial noise to its local updates and thereby compromise the
global model training, and we prove that this selfish noise adding leads to an
infinite price of anarchy (PoA). This paper proposes a novel pricing mechanism to regulate
privacy-sensitive clients without verifying their parameter updates, unlike
existing privacy mechanisms that assume the server's full knowledge of added
noise. Without knowing the ground truth, our mechanism reaches the social
optimum to best balance the global training error and privacy loss, according
to the difference between a client's updated parameter and all clients' average
parameter. We also improve the FL convergence bound by refining the aggregation
rule at the server to account for different clients' noise variances. Moreover,
we extend our pricing scheme to accommodate incomplete information about
clients' privacy sensitivities, ensuring truthful type reporting and ex-ante
budget balance for the system. Simulations show that our pricing scheme greatly
improves system performance, especially when clients have diverse privacy
sensitivities.
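The abstract points to two concrete mechanisms: a payment that grows with the gap between a client's submitted parameter and the average of all clients' parameters, and an aggregation rule that weights clients by their noise variances. The snippet below is a minimal sketch of both ideas, assuming a quadratic deviation-based price with a hypothetical scaling coefficient and simple inverse-variance weights; the paper's exact pricing rule and aggregation weights may differ.

```python
import numpy as np

def deviation_based_price(updates, price_coeff=1.0):
    """Charge each client according to the squared distance between its
    submitted parameter vector and the average of all submissions.

    price_coeff is a hypothetical scaling constant, not taken from the paper.
    """
    updates = np.asarray(updates)               # shape: (num_clients, dim)
    mean_update = updates.mean(axis=0)
    deviations = np.linalg.norm(updates - mean_update, axis=1) ** 2
    return price_coeff * deviations             # one payment per client

def variance_aware_aggregate(updates, noise_variances):
    """Aggregate client updates with inverse-variance weights, so that
    noisier clients contribute less to the global model."""
    updates = np.asarray(updates)
    weights = 1.0 / (np.asarray(noise_variances) + 1e-12)
    weights = weights / weights.sum()
    return (weights[:, None] * updates).sum(axis=0)

# Toy usage: three clients, the third adds much more noise to its update.
rng = np.random.default_rng(0)
true_param = np.ones(5)
sigmas = np.array([0.1, 0.1, 1.0])
client_updates = [true_param + rng.normal(0.0, s, size=5) for s in sigmas]

print(deviation_based_price(client_updates))
print(variance_aware_aggregate(client_updates, sigmas ** 2))
```

Both the quadratic price and the inverse-variance weighting are illustrative placeholders; the paper derives the socially optimal pricing rule and the refined aggregation rule formally.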
Related papers
- FedAR: Addressing Client Unavailability in Federated Learning with Local Update Approximation and Rectification [8.747592727421596]
Federated learning (FL) enables clients to collaboratively train machine learning models under the coordination of a server.
FedAR involves all clients in the global model update, achieving a high-quality global model on the server.
FedAR also delivers impressive performance in the presence of a large number of clients with severe client unavailability.
arXiv Detail & Related papers (2024-07-26T21:56:52Z)
- Noise-Aware Algorithm for Heterogeneous Differentially Private Federated Learning [21.27813247914949]
We propose Robust-HDP, which efficiently estimates the true noise level in clients' model updates.
It improves utility and convergence speed while remaining safe against clients that may maliciously send falsified privacy parameters to the server.
arXiv Detail & Related papers (2024-06-05T17:41:42Z)
- Clients Collaborate: Flexible Differentially Private Federated Learning with Guaranteed Improvement of Utility-Privacy Trade-off [34.2117116062642]
We introduce a novel federated learning framework with rigorous privacy guarantees, named FedCEO, to strike a trade-off between model utility and user privacy.
We show that our FedCEO can effectively recover the disrupted semantic information by smoothing the global semantic space.
It achieves significant performance improvements and strict privacy guarantees under different privacy settings.
arXiv Detail & Related papers (2024-02-10T17:39:34Z)
- Adaptive Differential Privacy in Federated Learning: A Priority-Based Approach [0.0]
Federated learning (FL) develops global models without direct access to local datasets.
DP offers a framework that gives a privacy guarantee by adding certain amounts of noise to parameters.
We propose adaptive noise addition in FL, which sets the amount of injected noise based on features' relative importance (a rough sketch appears after this list).
arXiv Detail & Related papers (2024-01-04T03:01:15Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- DYNAFED: Tackling Client Data Heterogeneity with Global Dynamics [60.60173139258481]
Local training on non-iid distributed data results in a deflected local optimum.
A natural solution is to gather all client data onto the server, such that the server has a global view of the entire data distribution.
In this paper, we put forth an idea to collect and leverage global knowledge on the server without hindering data privacy.
arXiv Detail & Related papers (2022-11-20T06:13:06Z)
- FedCorr: Multi-Stage Federated Learning for Label Noise Correction [80.9366438220228]
Federated learning (FL) is a privacy-preserving distributed learning paradigm that enables clients to jointly train a global model.
We propose $\texttt{FedCorr}$, a general multi-stage framework to tackle heterogeneous label noise in FL.
Experiments conducted on CIFAR-10/100 with federated synthetic label noise, and on a real-world noisy dataset, Clothing1M, demonstrate that $\texttt{FedCorr}$ is robust to label noise.
arXiv Detail & Related papers (2022-04-10T12:51:18Z)
- Communication-Efficient Federated Learning with Accelerated Client Gradient [46.81082897703729]
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets.
We propose a simple but effective federated learning framework, which improves the consistency across clients and facilitates the convergence of the server model.
We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency.
arXiv Detail & Related papers (2022-01-10T05:31:07Z)
- Federated Noisy Client Learning [105.00756772827066]
Federated learning (FL) collaboratively aggregates a shared global model from multiple local clients.
Standard FL methods ignore the noisy client issue, which may harm the overall performance of the aggregated model.
We propose Federated Noisy Client Learning (Fed-NCL), which is a plug-and-play algorithm and contains two main components.
arXiv Detail & Related papers (2021-06-24T11:09:17Z)
- Timely Communication in Federated Learning [65.1253801733098]
We consider a global learning framework in which a parameter server (PS) trains a global model by using $n$ clients without actually storing the client data centrally at a cloud server.
Under the proposed scheme, at each iteration, the PS waits for $m$ available clients and sends them the current model.
We find the average age of information experienced by each client and numerically characterize the age-optimal $m$ and $k$ values for a given $n$.
arXiv Detail & Related papers (2020-12-31T18:52:08Z)
- Shuffled Model of Federated Learning: Privacy, Communication and Accuracy Trade-offs [30.58690911428577]
We consider a distributed empirical risk minimization (ERM) optimization problem with communication efficiency and privacy requirements.
We develop (optimal) communication-efficient schemes for private mean estimation for several $\ell_p$ spaces.
We demonstrate that one can reach the same privacy and optimization-performance operating point as recent methods that use full-precision communication.
arXiv Detail & Related papers (2020-08-17T09:41:04Z)
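As referenced from the priority-based adaptive DP entry above, here is a rough sketch of noise addition scaled by feature importance. It assumes importance is approximated by each parameter's absolute magnitude and that Gaussian noise with a per-coordinate scale that shrinks with importance is added before upload; none of these choices (including the base_sigma parameter) are taken from that paper.

```python
import numpy as np

def adaptive_gaussian_noise(params, base_sigma=0.5, rng=None):
    """Add more noise to less important coordinates before uploading.

    Importance is approximated by |parameter| (a placeholder metric);
    base_sigma is a hypothetical global noise scale, not from the paper.
    """
    if rng is None:
        rng = np.random.default_rng()
    params = np.asarray(params, dtype=float)
    importance = np.abs(params) / (np.abs(params).sum() + 1e-12)
    # Less important coordinates get a larger noise scale.
    per_coord_sigma = base_sigma * (1.0 - importance)
    return params + rng.normal(0.0, per_coord_sigma)

# Toy usage: coordinates with larger relative magnitude receive less noise.
print(adaptive_gaussian_noise(np.array([2.0, 0.1, -1.5, 0.01])))
```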
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.