Survey on Federated Learning Threats: concepts, taxonomy on attacks and
defences, experimental study and challenges
- URL: http://arxiv.org/abs/2201.08135v1
- Date: Thu, 20 Jan 2022 12:23:03 GMT
- Title: Survey on Federated Learning Threats: concepts, taxonomy on attacks and
defences, experimental study and challenges
- Authors: Nuria Rodríguez-Barroso, Daniel Jiménez López, M. Victoria
Luzón, Francisco Herrera and Eugenio Martínez-Cámara
- Abstract summary: Federated learning is a machine learning paradigm that emerges as a solution to the privacy-preservation demands of artificial intelligence.
Like machine learning in general, federated learning is threatened by adversarial attacks against the integrity of the learning model and the privacy of the data, a risk inherent to its distributed approach to local and global learning.
- Score: 10.177219272933781
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning is a machine learning paradigm that emerges as a
solution to the privacy-preservation demands of artificial intelligence. Like
machine learning in general, federated learning is threatened by adversarial
attacks against the integrity of the learning model and the privacy of the
data, a weakness inherent to its distributed approach to local and global
learning. This weak point is exacerbated by the inaccessibility of data in
federated learning, which makes it harder to protect against adversarial
attacks and underscores the need for further research on defence methods to
make federated learning a real solution for safeguarding data privacy. In this
paper, we present an extensive review of the threats of federated learning, as
well as their corresponding countermeasures: attacks versus defences. This
survey provides a taxonomy of adversarial attacks and a taxonomy of defence
methods that together depict a general picture of this vulnerability of
federated learning and how to overcome it. Likewise, we set out guidelines for
selecting the most adequate defence method according to the category of the
adversarial attack. Besides, we carry out an extensive experimental study from
which we draw further conclusions about the behaviour of attacks and defences,
supporting those guidelines. The study closes with carefully considered lessons
learned and open challenges.
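To make the federated setting concrete before the related work, here is a minimal sketch of federated averaging (FedAvg), the baseline aggregation scheme most of the surveyed attacks and defences target. The linear model, synthetic clients, and all parameter values are illustrative assumptions, not anything from the survey itself.

```python
import numpy as np

def local_update(weights, data, labels, lr=0.1, epochs=5):
    """One client's local training: plain gradient descent on squared
    loss for a linear model y = X @ w (a toy stand-in for a real model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * data.T @ (data @ w - labels) / len(labels)
        w -= lr * grad
    return w

def fedavg(global_w, client_data):
    """Server round: each client trains locally on its private data and
    returns weights; the server averages them weighted by dataset size."""
    sizes, updates = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, dtype=float))

# Three hypothetical clients sharing the same underlying linear target.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = [(X, X @ true_w) for X in (rng.normal(size=(50, 2)) for _ in range(3))]

w = np.zeros(2)
for _ in range(20):
    w = fedavg(w, clients)
```

Note that the server never sees raw client data, only weight updates; the attacks surveyed below exploit exactly this update channel.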
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
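The idea behind gradient leakage can be shown on the simplest possible case: a linear model with squared loss on one example, where an attacker who observes only the shared gradient optimises a dummy input until its gradient matches. This is a hedged toy sketch of the general technique, not the FEDLAD benchmark or the original Deep Leakage method; all values are hypothetical.

```python
import numpy as np

w = np.array([1.0, 0.5])        # shared model weights, known to the attacker
true_x = np.array([3.0, -2.0])  # private client input (what leaks)
y = 0.0                         # label, assumed known for simplicity

def grad_of_loss(x):
    """Gradient w.r.t. w of the squared loss 0.5 * (w @ x - y)**2."""
    return (w @ x - y) * x

g_observed = grad_of_loss(true_x)  # what the server/eavesdropper sees

# Attack: gradient descent on || grad_of_loss(x) - g_observed ||^2
# using the analytic gradient of the matching objective.
x_rec = np.array([1.0, 1.0])
lr = 0.01
for _ in range(5000):
    r = w @ x_rec - y
    e = r * x_rec - g_observed
    x_rec -= lr * 2.0 * (r * e + (e @ x_rec) * w)
```

For this loss the gradient determines the input only up to sign (both `x` and `-x` with a flipped residual produce the same gradient), which is why the check below accepts either solution.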
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- GANcrop: A Contrastive Defense Against Backdoor Attacks in Federated Learning [1.9632700283749582]
This paper introduces a novel defense mechanism against backdoor attacks in federated learning, named GANcrop.
Experimental findings demonstrate that GANcrop effectively safeguards against backdoor attacks, particularly in non-IID scenarios.
arXiv Detail & Related papers (2024-05-31T09:33:16Z)
- Adversarial Robustness Unhardening via Backdoor Attacks in Federated Learning [13.12397828096428]
Adversarial Robustness Unhardening (ARU) is employed by a subset of adversaries to intentionally undermine model robustness during decentralized training.
We present empirical experiments evaluating ARU's impact on adversarial training and existing robust aggregation defenses against poisoning and backdoor attacks.
arXiv Detail & Related papers (2023-10-17T21:38:41Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
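Perturbation-based adversarial examples of the kind this entry describes can be illustrated with the classic fast gradient sign method (FGSM) on a logistic-regression classifier. This is a generic sketch under assumed toy weights, not the paper's recommender-system attack.

```python
import numpy as np

def fgsm(x, w, b, y, eps):
    """Fast Gradient Sign Method on a linear logistic classifier:
    perturb x by eps in the sign of the loss gradient, which pushes the
    prediction away from the true label y (y in {-1, +1})."""
    margin = y * (w @ x + b)
    # gradient of the logistic loss log(1 + exp(-margin)) w.r.t. x
    grad = -y * w / (1.0 + np.exp(margin))
    return x + eps * np.sign(grad)

# Hypothetical classifier and input.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.3, -0.2])          # score w @ x + b = 0.5, class +1
x_adv = fgsm(x, w, b, y=+1, eps=0.6)  # score flips negative
```

A detector like the one the paper proposes would be trained to separate such perturbed inputs from clean ones.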
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Where Did You Learn That From? Surprising Effectiveness of Membership Inference Attacks Against Temporally Correlated Data in Deep Reinforcement Learning [114.9857000195174]
A major challenge to widespread industrial adoption of deep reinforcement learning is the potential vulnerability to privacy breaches.
We propose an adversarial attack framework tailored for testing the vulnerability of deep reinforcement learning algorithms to membership inference attacks.
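The core intuition behind membership inference, which this entry adapts to deep RL, can be sketched with the standard loss-threshold attack: training members tend to have lower loss than held-out data, so a simple threshold on per-example loss already separates them. The loss distributions below are synthetic stand-ins, not results from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical per-example losses: members (training data) cluster at low
# loss because the model has partly memorised them; non-members do not.
member_loss = rng.gamma(shape=2.0, scale=0.1, size=500)
nonmember_loss = rng.gamma(shape=2.0, scale=0.5, size=500)

threshold = 0.4  # attacker predicts "member" when loss < threshold
correct = (member_loss < threshold).sum() + (nonmember_loss >= threshold).sum()
accuracy = correct / (len(member_loss) + len(nonmember_loss))
```

Any accuracy clearly above 0.5 means membership leaks; the cited paper shows temporal correlation in RL trajectories makes this gap larger than one might expect.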
arXiv Detail & Related papers (2021-09-08T23:44:57Z)
- Robust Federated Learning with Attack-Adaptive Aggregation [45.60981228410952]
Federated learning is vulnerable to various attacks, such as model poisoning and backdoor attacks.
We propose an attack-adaptive aggregation strategy to defend against various attacks for robust learning.
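As background for robust aggregation, here is a sketch of the coordinate-wise median, a standard robust baseline that such defences are compared against. This is not the paper's attack-adaptive method; the client updates are made-up toy values.

```python
import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median aggregation: unlike the plain mean, the
    per-coordinate median ignores a minority of arbitrarily poisoned
    client updates instead of being dragged toward them."""
    return np.median(np.stack(updates), axis=0)

# Four honest updates near [1, 2] plus one wildly poisoned update.
honest = [np.array([1.0, 2.0]) + 0.01 * i for i in range(4)]
poisoned = [np.array([100.0, -100.0])]
agg = coordinate_median(honest + poisoned)
```

The plain mean of the same five updates would be pulled to roughly [20.8, -18.4], while the median stays with the honest majority, which is the basic robustness property these defences build on.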
arXiv Detail & Related papers (2021-02-10T04:23:23Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.