Information Stealing in Federated Learning Systems Based on Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2108.00701v1
- Date: Mon, 2 Aug 2021 08:12:43 GMT
- Title: Information Stealing in Federated Learning Systems Based on Generative
Adversarial Networks
- Authors: Yuwei Sun, Ng Chong, Hideya Ochiai
- Abstract summary: We mounted adversarial attacks on a federated learning (FL) environment using three different datasets.
The attacks leveraged generative adversarial networks (GANs) to affect the learning process.
We reconstructed the victim's real data from the shared global model parameters with all of the applied datasets.
- Score: 0.5156484100374059
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An attack on deep learning systems in which intelligent machines
collaborate to solve problems could cause a node in the network to make a
mistake on a critical judgment. At the same time, the security and privacy
concerns of AI have galvanized the attention of experts from multiple
disciplines. In this research, we successfully mounted adversarial attacks on
a federated learning (FL) environment using three different datasets. The
attacks leveraged generative adversarial networks (GANs) to affect the
learning process and strove to reconstruct the private data of users by
learning hidden features from shared local model parameters. The attacks were
target-oriented, drawing data with distinct class distributions from CIFAR-10,
MNIST, and Fashion-MNIST, respectively. Moreover, by measuring the Euclidean
distance between the real data and the reconstructed adversarial samples, we
evaluated the adversary's performance in the learning process under various
scenarios. Finally, we successfully reconstructed the victim's real data from
the shared global model parameters with all of the applied datasets.
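As a concrete illustration of the evaluation metric, the following minimal sketch computes the per-sample Euclidean distance between real images and GAN-reconstructed samples; the array shapes, names, and random stand-in data are our assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: evaluating a GAN-based reconstruction attack by the
# Euclidean distance between real data and reconstructed adversarial samples.
# Shapes and names are assumptions, not the paper's code.
import numpy as np

def euclidean_distances(real: np.ndarray, reconstructed: np.ndarray) -> np.ndarray:
    """Per-sample L2 distance between flattened real and reconstructed images.

    Expected shapes: (n_samples, height, width, channels), e.g. (n, 32, 32, 3)
    for CIFAR-10 or (n, 28, 28, 1) for MNIST / Fashion-MNIST.
    """
    diff = real.reshape(len(real), -1) - reconstructed.reshape(len(reconstructed), -1)
    return np.linalg.norm(diff, axis=1)

# Example with random stand-in data: a lower mean distance indicates a more
# successful reconstruction of the victim's data.
real = np.random.rand(8, 28, 28, 1)
fake = np.random.rand(8, 28, 28, 1)
print(f"mean L2 distance: {euclidean_distances(real, fake).mean():.4f}")
```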
Related papers
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
However, FedIT encounters limitations such as the scarcity of instruction data and the risk of training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
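As a hedged illustration of that observation, the sketch below compares two flattened weight vectors by the Jaccard overlap of their top-k and bottom-k parameter index sets; ranking by absolute magnitude and the Jaccard measure are our assumptions, not necessarily FedCPA's exact criterion.

```python
# Hypothetical sketch of the top-k/bottom-k critical-parameter comparison:
# benign local models should overlap strongly, poisoned ones should not.
import numpy as np

def critical_overlap(w_a: np.ndarray, w_b: np.ndarray, k: int = 100) -> float:
    """Mean Jaccard overlap of the top-k and bottom-k parameter index sets,
    ranking parameters by absolute magnitude (an assumption)."""
    order_a = np.argsort(np.abs(w_a))
    order_b = np.argsort(np.abs(w_b))

    def jaccard(i, j):
        i, j = set(i.tolist()), set(j.tolist())
        return len(i & j) / len(i | j)

    return (jaccard(order_a[-k:], order_b[-k:]) + jaccard(order_a[:k], order_b[:k])) / 2

rng = np.random.default_rng(0)
benign_a = rng.normal(size=10_000)
benign_b = benign_a + rng.normal(scale=0.01, size=10_000)  # similar benign update
poisoned = rng.normal(size=10_000)                         # unrelated update
print(critical_overlap(benign_a, benign_b))  # high overlap
print(critical_overlap(benign_a, poisoned))  # low overlap
```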
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- Decentralized Online Federated G-Network Learning for Lightweight Intrusion Detection [2.7225008315665424]
This paper proposes a novel Decentralized and Online Federated Learning Intrusion Detection (DOF-ID) architecture based on the G-Network model with collaborative learning.
The performance evaluation results using public Kitsune and Bot-IoT datasets show that DOF-ID significantly improves the intrusion detection performance in all of the collaborating components.
arXiv Detail & Related papers (2023-06-22T16:46:00Z)
- Turning Privacy-preserving Mechanisms against Federated Learning [22.88443008209519]
We design an attack capable of deceiving state-of-the-art defenses for federated learning.
The proposed attack includes two operating modes: the first focuses on convergence inhibition (Adversarial Mode), and the second aims at building a deceptive rating injection into the global federated model (Backdoor Mode).
The experimental results show the effectiveness of our attack in both modes, causing an average 60% performance detriment across all Adversarial Mode tests and fully effective backdoors in 93% of the Backdoor Mode tests.
arXiv Detail & Related papers (2023-05-09T11:43:31Z)
- Network-Level Adversaries in Federated Learning [21.222645649379672]
We study the impact of network-level adversaries on training federated learning models.
We show that attackers dropping the network traffic from carefully selected clients can significantly decrease model accuracy on a target population.
We develop a server-side defense which mitigates the impact of our attacks by identifying and up-sampling clients likely to positively contribute towards target accuracy.
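A minimal sketch of such an up-sampling step, assuming the server keeps a per-client contribution score; the scoring rule and sampling weights below are illustrative assumptions, not the paper's method.

```python
# Hypothetical server-side "up-sampling" defense: clients whose updates
# improved accuracy on the target population are sampled more often.
import random

def sample_clients(contribution_scores: dict, n: int) -> list:
    """Sample n clients with probability proportional to a non-negative
    contribution score (e.g. a measured target-accuracy gain)."""
    clients = list(contribution_scores)
    weights = [max(contribution_scores[c], 1e-6) for c in clients]
    return random.choices(clients, weights=weights, k=n)

# Example: client "c2" is up-sampled because it helped target accuracy most.
scores = {"c1": 0.01, "c2": 0.30, "c3": 0.05}
print(sample_clients(scores, n=2))
```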
arXiv Detail & Related papers (2022-08-27T02:42:04Z)
- Adversarial Representation Sharing: A Quantitative and Secure Collaborative Learning Framework [3.759936323189418]
We find that representation learning has unique advantages in collaborative learning due to its lower communication overhead and task independence.
We present ARS, a collaborative learning framework wherein users share representations of data to train models.
We demonstrate that our mechanism is effective against model inversion attacks, and achieves a balance between privacy and utility.
arXiv Detail & Related papers (2022-03-27T13:29:15Z) - Privacy-Preserving Federated Learning on Partitioned Attributes [6.661716208346423]
Federated learning empowers collaborative training without exposing local data or models.
We introduce an adversarial learning based procedure which tunes a local model to release privacy-preserving intermediate representations.
To alleviate the accuracy decline, we propose a defense method based on the forward-backward splitting algorithm.
arXiv Detail & Related papers (2021-04-29T14:49:14Z) - Explainable Adversarial Attacks in Deep Neural Networks Using Activation
Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
arXiv Detail & Related papers (2021-02-14T05:36:25Z) - Quasi-Global Momentum: Accelerating Decentralized Deep Learning on
Heterogeneous Data [77.88594632644347]
Decentralized training of deep learning models is a key element for enabling data privacy and on-device learning over networks.
In realistic learning scenarios, the presence of heterogeneity across different clients' local datasets poses an optimization challenge.
We propose a novel momentum-based method to mitigate this decentralized training difficulty.
arXiv Detail & Related papers (2021-02-09T11:27:14Z) - WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)