A Survey of Trustworthy Federated Learning with Perspectives on
Security, Robustness, and Privacy
- URL: http://arxiv.org/abs/2302.10637v1
- Date: Tue, 21 Feb 2023 12:52:12 GMT
- Title: A Survey of Trustworthy Federated Learning with Perspectives on
Security, Robustness, and Privacy
- Authors: Yifei Zhang, Dun Zeng, Jinglong Luo, Zenglin Xu, Irwin King
- Abstract summary: Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios.
However, challenges around data isolation and privacy threaten the trustworthiness of FL systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Trustworthy artificial intelligence (AI) technology has revolutionized daily
life and greatly benefited human society. Among various AI technologies,
Federated Learning (FL) stands out as a promising solution for diverse
real-world scenarios, ranging from risk evaluation systems in finance to
cutting-edge technologies like drug discovery in life sciences. However,
challenges around data isolation and privacy threaten the trustworthiness of FL
systems. Adversarial attacks against data privacy, learning algorithm
stability, and system confidentiality are particularly concerning in the
context of distributed training in federated learning. Therefore, it is crucial
to develop FL in a trustworthy manner, with a focus on security, robustness,
and privacy. In this survey, we propose a comprehensive roadmap for developing
trustworthy FL systems and summarize existing efforts from three key aspects:
security, robustness, and privacy. We outline the threats that pose
vulnerabilities to trustworthy federated learning across different stages of
development, including data processing, model training, and deployment. To
guide the selection of the most appropriate defense methods, we discuss
specific technical solutions for realizing each aspect of Trustworthy FL (TFL).
Our approach differs from previous work that primarily discusses TFL from a
legal perspective or presents FL from a high-level, non-technical viewpoint.
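The distributed training setting described in the abstract can be illustrated with a minimal federated-averaging round, in which only model updates (never raw data) reach the server. This is a sketch of the generic FedAvg pattern, not a method proposed by the survey; the linear-regression task, client counts, and all names are illustrative:

```python
import numpy as np

def local_update(w, X, y, lr=0.1, steps=10):
    """A few steps of local gradient descent on squared loss."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Five clients, each holding disjoint private data that never leaves them.
clients = []
for _ in range(5):
    X = rng.normal(size=(20, 2))
    y = X @ true_w + 0.01 * rng.normal(size=20)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(50):                      # communication rounds
    updates = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(updates, axis=0)  # server aggregates by averaging

print(w_global)  # approaches true_w although no raw data left any client
```

Even in this benign setting, the shared updates are exactly the attack surface the survey's threat models cover: poisoned updates target robustness, and gradient inference targets privacy.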
Related papers
- FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses
Federated Learning is a privacy-preserving decentralized machine learning paradigm.
Recent research has revealed that private ground truth data can be recovered through a gradient technique known as Deep Leakage.
This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
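The core idea behind such gradient-leakage attacks can be shown in a toy setting: for a linear model with a bias term, a single sample's shared gradient reveals the raw input exactly. This is an illustrative demonstration of the attack principle, not the FEDLAD benchmark itself:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=4)          # a client's private input
y = 3.0                         # its private label
w, b = rng.normal(size=4), 0.5  # current global model

# Client computes the gradient of squared loss 0.5 * (w.x + b - y)^2
residual = w @ x + b - y
grad_w = residual * x           # gradient w.r.t. weights
grad_b = residual               # gradient w.r.t. bias

# An eavesdropper on the shared gradients reconstructs the private input:
x_recovered = grad_w / grad_b

print(np.allclose(x, x_recovered))  # True: the raw input leaks
```

Deep Leakage attacks generalize this observation to deep networks by optimizing dummy inputs until their gradients match the ones a client shared.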
arXiv Detail & Related papers (2024-11-05T11:42:26Z)
- Trustworthy Federated Learning: Privacy, Security, and Beyond
Federated Learning (FL) addresses concerns by facilitating collaborative model training across distributed data sources without transferring raw data.
We conduct an extensive survey of the security and privacy issues prevalent in FL, underscoring the vulnerability of communication links and the potential for cyber threats.
We identify the intricate security challenges that arise within the FL frameworks, aiming to contribute to the development of secure and efficient FL systems.
arXiv Detail & Related papers (2024-11-03T14:18:01Z)
- Privacy in Federated Learning
Federated Learning (FL) represents a significant advancement in distributed machine learning.
This chapter delves into the core privacy concerns within FL, including the risks of data reconstruction, model inversion attacks, and membership inference.
It examines the trade-offs between model accuracy and privacy, emphasizing the importance of balancing these factors in practical implementations.
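One common way to trade accuracy for privacy is a DP-SGD-style release: per-example gradients are clipped and Gaussian noise is added before aggregation, so no single example dominates the shared update. The constants below are illustrative and not a calibrated differential-privacy guarantee:

```python
import numpy as np

def privatize(grads, clip=1.0, noise_mult=0.5, seed=2):
    """Clip per-example gradients, average them, and add Gaussian noise."""
    rng = np.random.default_rng(seed)
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip / norm))  # bound each contribution
    mean = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip / len(grads), size=mean.shape)
    return mean + noise                            # only the noisy average leaves

grads = [np.array([3.0, 4.0]), np.array([0.3, -0.4])]
noisy_mean = privatize(grads)
print(noisy_mean)  # near the clipped average [0.45, 0.2], perturbed by noise
```

Larger noise multipliers strengthen the privacy protection but slow or degrade model convergence, which is precisely the accuracy-privacy balance this chapter examines.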
arXiv Detail & Related papers (2024-08-12T18:41:58Z)
- Federated Learning with New Knowledge: Fundamentals, Advances, and Futures
This paper systematically defines the main sources of new knowledge in Federated Learning (FL).
We examine the impact of the form and timing of new knowledge arrival on the incorporation process.
We discuss the potential future directions for FL with new knowledge, considering a variety of factors such as scenario setups, efficiency, and security.
arXiv Detail & Related papers (2024-02-03T21:29:31Z)
- A Survey of Federated Unlearning: A Taxonomy, Challenges and Future Directions
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z)
- Security and Privacy Issues of Federated Learning
Federated Learning (FL) has emerged as a promising approach to address data privacy and confidentiality concerns.
This paper presents a comprehensive taxonomy of security and privacy challenges in Federated Learning (FL) across various machine learning models.
arXiv Detail & Related papers (2023-07-22T22:51:07Z)
- Trustworthy Federated Learning: A Survey
Federated Learning (FL) has emerged as a significant advancement in the field of Artificial Intelligence (AI).
We provide an extensive overview of the current state of Trustworthy FL, exploring existing solutions and the well-defined pillars of trustworthiness.
We propose a taxonomy that encompasses three main pillars: Interpretability, Fairness, and Security & Privacy.
arXiv Detail & Related papers (2023-05-19T09:11:26Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses
We conduct the first comprehensive survey on this topic.
Through a concise introduction to FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks and robustness defenses, and 3) inference attacks and privacy defenses, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Trustworthy AI
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are among the most prominent limitations of current AI systems.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.