Maximum Knowledge Orthogonality Reconstruction with Gradients in
Federated Learning
- URL: http://arxiv.org/abs/2310.19222v1
- Date: Mon, 30 Oct 2023 02:01:48 GMT
- Title: Maximum Knowledge Orthogonality Reconstruction with Gradients in
Federated Learning
- Authors: Feng Wang, Senem Velipasalar, M. Cenk Gursoy
- Abstract summary: Federated learning (FL) aims at keeping client data local to preserve privacy.
Most existing gradient-based reconstruction attacks assume an FL setting with an unrealistically small batch size.
We propose a novel and completely analytical approach to reconstruct clients' input data.
- Score: 12.709670487307294
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) aims at keeping client data local to preserve
privacy. Instead of gathering the data itself, the server only collects
aggregated gradient updates from clients. Following the popularity of FL, a
considerable amount of work has revealed the vulnerability of FL approaches by
reconstructing the input data from gradient updates. Yet, most existing works
assume an FL setting with unrealistically small batch sizes, and produce
poor-quality images when the batch size is large. Other works modify the
neural network architectures or parameters to the point of being suspicious,
and thus, can be detected by clients. Moreover, most of them can only
reconstruct one sample input from a large batch. To address these limitations,
we propose a novel and completely analytical approach, referred to as the
maximum knowledge orthogonality reconstruction (MKOR), to reconstruct clients'
input data. Our proposed method reconstructs high-quality images, with
mathematically proven guarantees, even from large batches. MKOR only requires
the server to send
secretly modified parameters to clients and can efficiently and inconspicuously
reconstruct the input images from clients' gradient updates. We evaluate MKOR's
performance on the MNIST, CIFAR-100, and ImageNet datasets and compare it with
state-of-the-art works. The results show that MKOR outperforms the existing
approaches, and draws attention to a pressing need for further research on the
privacy protection of FL so that comprehensive defense approaches can be
developed.
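MKOR's starting point is the well-known fact that a fully connected layer's gradients leak its input analytically: for y = Wx + b, the weight gradient is the outer product of the output gradient and the input, so dividing a row of dL/dW by the matching entry of dL/db recovers x exactly. The paper's contribution is making this scale to large batches and realistic networks; the NumPy sketch below demonstrates only the underlying single-input identity, not MKOR's actual construction.

```python
import numpy as np

# Toy fully connected layer: y = W @ x + b, loss L = 0.5 * ||y - t||^2.
# Known identity: dL/dW = (dL/dy) x^T and dL/db = dL/dy, so any row i with
# a nonzero bias gradient yields x = (dL/dW)[i] / (dL/db)[i].
rng = np.random.default_rng(0)
d_in, d_out = 8, 4
W, b = rng.normal(size=(d_out, d_in)), rng.normal(size=d_out)
x = rng.normal(size=d_in)        # the "private" client input
t = rng.normal(size=d_out)       # an arbitrary target

y = W @ x + b
dL_dy = y - t                    # gradient of the squared loss w.r.t. y
dL_dW = np.outer(dL_dy, x)       # what the server observes in the update
dL_db = dL_dy

i = np.argmax(np.abs(dL_db))     # pick a row with a nonzero bias gradient
x_rec = dL_dW[i] / dL_db[i]      # analytic reconstruction of the input
assert np.allclose(x_rec, x)
```

Roughly speaking, the "secretly modified parameters" MKOR sends are a way to keep such per-sample leakage separable even after gradients are aggregated over a large batch.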
Related papers
- FedAR: Addressing Client Unavailability in Federated Learning with Local Update Approximation and Rectification [8.747592727421596]
Federated learning (FL) enables clients to collaboratively train machine learning models under the coordination of a server.
FedAR involves all clients in the global model update to achieve a high-quality global model on the server.
It also demonstrates impressive performance in the presence of a large number of clients with severe unavailability.
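One plausible reading of "local update approximation" is that the server substitutes a stored surrogate (e.g., the client's most recent update) for each unavailable client, so that every client still contributes to the round. The sketch below shows that hypothetical reading only and omits FedAR's rectification step.

```python
def aggregate_with_surrogates(fresh_updates, cached_updates):
    """fresh_updates: {client_id: update} from clients available this round;
    cached_updates: {client_id: last stored update} for every client.
    Unavailable clients contribute their cached update as a surrogate."""
    merged = {c: fresh_updates.get(c, u) for c, u in cached_updates.items()}
    return sum(merged.values()) / len(merged)

# Client 1 is offline this round; its cached update stands in.
cached = {0: 1.0, 1: 3.0, 2: 5.0}
print(aggregate_with_surrogates({0: 2.0, 2: 4.0}, cached))  # -> 3.0
```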
arXiv Detail & Related papers (2024-07-26T21:56:52Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
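For context, the aggregate-then-adapt baseline that FedAF departs from is essentially FedAvg's server-side weighted averaging; a minimal sketch of that baseline step (not of FedAF itself, which removes this aggregation):

```python
import numpy as np

def fedavg_aggregate(client_models, client_sizes):
    """Server-side step of aggregate-then-adapt (plain FedAvg): average the
    client models, weighted by local dataset size."""
    w = np.asarray(client_sizes, dtype=float)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, client_models))

# Three clients with unequal data; each "model" is a parameter vector.
models = [np.array([1.0, 0.0]), np.array([2.0, 2.0]), np.array([4.0, 1.0])]
print(fedavg_aggregate(models, client_sizes=[10, 30, 60]))
```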
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Approximate and Weighted Data Reconstruction Attack in Federated Learning [1.802525429431034]
Federated learning (FL) enables clients to collaborate on building a machine learning model without sharing their private data.
Recent data reconstruction attacks demonstrate that an attacker can recover clients' training data based on the parameters shared in FL.
We propose an approximation method, which makes attacking FedAvg scenarios feasible by generating the intermediate model updates of the clients' local training processes.
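"Generating the intermediate model updates" could be approximated, in its crudest form, by spreading a client's total FedAvg delta evenly across its local SGD steps. The sketch below is our illustrative assumption, not the paper's actual approximation method.

```python
import numpy as np

def approx_avg_step_gradient(w_global, w_client, local_steps, lr):
    """Crude stand-in (our assumption, not the paper's exact method): treat
    the client's total FedAvg update as `local_steps` equal SGD steps, so the
    average per-step gradient is (w_global - w_client) / (local_steps * lr)."""
    delta = np.asarray(w_global) - np.asarray(w_client)
    return delta / (local_steps * lr)
```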
arXiv Detail & Related papers (2023-08-13T17:40:56Z)
- FedBug: A Bottom-Up Gradual Unfreezing Framework for Federated Learning [36.18217687935658]
Federated Learning (FL) offers a collaborative training framework, allowing multiple clients to contribute to a shared model.
Due to the heterogeneous nature of local datasets, updated client models may overfit and diverge from one another, commonly known as the problem of client drift.
We propose FedBug, a novel FL framework designed to effectively mitigate client drift.
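The name describes the mechanism: local training starts with the model largely frozen and unfreezes layers from the input side upward on a schedule. A generic sketch of such a bottom-up schedule (illustrative only; FedBug's actual schedule and its interaction with FL rounds may differ):

```python
def bottom_up_unfreeze(layers, step, steps_per_stage):
    """Bottom-up gradual unfreezing: at local step `step`, layers 0..k are
    trainable, with one more layer unfrozen every `steps_per_stage` steps."""
    n_trainable = min(len(layers), step // steps_per_stage + 1)
    for k, layer in enumerate(layers):
        layer["trainable"] = k < n_trainable

layers = [{"name": f"layer{k}", "trainable": False} for k in range(4)]
bottom_up_unfreeze(layers, step=25, steps_per_stage=10)  # unfreezes layers 0-2
print([l["name"] for l in layers if l["trainable"]])
```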
arXiv Detail & Related papers (2023-07-19T05:44:35Z)
- Federated Learning for Semantic Parsing: Task Formulation, Evaluation Setup, New Algorithms [29.636944156801327]
Multiple clients collaboratively train one global model without sharing their semantic parsing data.
Lorar adjusts each client's contribution to the global model update based on its training loss reduction during each round.
Clients with smaller datasets enjoy larger performance gains.
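A simplified reading of Lorar's reweighting, shown as a sketch (the paper's exact formula may differ): normalize per-client training-loss reductions into aggregation weights.

```python
import numpy as np

def loss_reduction_weighted_update(client_updates, loss_reductions, eps=1e-12):
    """Weight each client's update by its training-loss reduction this round
    (simplified reading of Lorar; clients that improved more count more)."""
    w = np.clip(np.asarray(loss_reductions, dtype=float), 0.0, None)
    w = w / (w.sum() + eps)
    return sum(wi * u for wi, u in zip(w, client_updates))

updates = [np.array([1.0]), np.array([3.0])]
print(loss_reduction_weighted_update(updates, loss_reductions=[0.5, 1.5]))
```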
arXiv Detail & Related papers (2023-05-26T19:25:49Z)
- Subspace based Federated Unlearning [75.90552823500633]
Federated unlearning aims to remove a specified target client's contribution in federated learning (FL) to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent.
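A rough sketch of gradient ascent under a subspace constraint (our simplified reading; SFU's actual construction of the subspace and update is more involved, and the sketch assumes an orthogonal basis):

```python
import numpy as np

def subspace_ascent_step(w, grad_target, restricted_basis, lr=0.1):
    """One gradient-ascent step on the target client's loss, with the ascent
    direction projected to be orthogonal to a given subspace (assumed to have
    an orthogonal basis), so other clients' knowledge is disturbed less."""
    d = np.asarray(grad_target, dtype=float).copy()
    for b in restricted_basis:            # strip components along each basis vector
        b = np.asarray(b, dtype=float)
        d -= (d @ b) / (b @ b) * b
    return w + lr * d                     # ascend to erase the target's contribution
```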
arXiv Detail & Related papers (2023-02-24T04:29:44Z)
- FedCliP: Federated Learning with Client Pruning [3.796320380104124]
Federated learning (FL) is a newly emerging distributed learning paradigm.
One fundamental bottleneck in FL is the heavy communication overheads between the distributed clients and the central server.
We propose FedCliP, the first communication efficient FL training framework from a macro perspective.
arXiv Detail & Related papers (2023-01-17T09:15:37Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
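One generic way to alleviate forgetting of global knowledge during local training is a proximal penalty that anchors local weights to the global model, as in FedProx; the sketch below illustrates that idea only and is not FedReg's actual mechanism.

```python
import numpy as np

def local_loss_with_proximal_term(task_loss, w_local, w_global, mu=0.01):
    """Generic anti-forgetting device (FedProx-style): penalize local drift
    from the global model on top of the client's task loss."""
    drift = np.asarray(w_local) - np.asarray(w_global)
    return task_loss + 0.5 * mu * float(drift @ drift)
```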
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
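Client-level DP-FedAvg with clipping is standard enough to sketch: clip each client update to a fixed L2 norm, average, and add Gaussian noise calibrated to the clipping bound (parameter names here are illustrative):

```python
import numpy as np

def clipped_dp_fedavg(updates, clip_norm, noise_mult, seed=0):
    """Clip each client update to L2 norm `clip_norm`, average, then add
    Gaussian noise scaled to the clipping bound for client-level DP."""
    rng = np.random.default_rng(seed)
    clipped = [u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
               for u in updates]
    mean = sum(clipped) / len(clipped)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(updates),
                       size=mean.shape)
    return mean + noise
```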
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)