New Challenges in Reinforcement Learning: A Survey of Security and
Privacy
- URL: http://arxiv.org/abs/2301.00188v1
- Date: Sat, 31 Dec 2022 12:30:43 GMT
- Title: New Challenges in Reinforcement Learning: A Survey of Security and
Privacy
- Authors: Yunjiao Lei, Dayong Ye, Sheng Shen, Yulei Sui, Tianqing Zhu, Wanlei
Zhou
- Abstract summary: Reinforcement learning (RL) is one of the most important branches of AI.
RL has been widely applied in multiple areas, such as healthcare, data markets, autonomous driving, and robotics.
Some of these applications and systems have been shown to be vulnerable to security or privacy attacks.
- Score: 26.706957408693363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning (RL) is one of the most important branches of AI. Due
to its capacity for self-adaption and decision-making in dynamic environments,
reinforcement learning has been widely applied in multiple areas, such as
healthcare, data markets, autonomous driving, and robotics. However, some of
these applications and systems have been shown to be vulnerable to security or
privacy attacks, resulting in unreliable or unstable services. A large number
of studies have focused on these security and privacy problems in reinforcement
learning. However, few surveys have provided a systematic review and comparison
of existing problems and state-of-the-art solutions to keep up with the pace of
emerging threats. Accordingly, we herein present such a comprehensive review to
explain and summarize the challenges associated with security and privacy in
reinforcement learning from a new perspective, namely that of the Markov
Decision Process (MDP). In this survey, we first introduce the key concepts
related to this area. Next, we cover the security and privacy issues linked to
the state, action, environment, and reward function of the MDP in turn. We
further highlight the special characteristics of security and
privacy methodologies related to reinforcement learning. Finally, we discuss
the possible future research directions within this area.
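The MDP-based decomposition above can be made concrete with a small toy example. The sketch below is not from the survey; it is a minimal, hypothetical illustration of a reward-poisoning attack, one of the threat classes that targets the reward component of the MDP. The two-state environment, the sign-flipping adversary, and all learning parameters are invented for illustration only.

```python
import random

# Toy 2-state MDP: states {0, 1}, actions {0, 1}.
# The true reward favors action 1; a hypothetical adversary flips the
# sign of the observed reward (reward poisoning) with some probability.

def true_reward(state, action):
    return 1.0 if action == 1 else 0.0

def poisoned_reward(state, action, flip_prob, rng):
    r = true_reward(state, action)
    return -r if rng.random() < flip_prob else r

def q_learning(flip_prob, episodes=2000, alpha=0.1, gamma=0.9, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in (0, 1) for a in (0, 1)}
    state = 0
    for _ in range(episodes):
        action = rng.choice((0, 1))            # pure exploration for simplicity
        r = poisoned_reward(state, action, flip_prob, rng)
        next_state = action                    # trivial transition: action picks next state
        target = r + gamma * max(q[(next_state, a)] for a in (0, 1))
        q[(state, action)] += alpha * (target - q[(state, action)])
        state = next_state
    return q

clean = q_learning(flip_prob=0.0)
attacked = q_learning(flip_prob=0.9)

# With clean rewards the learner prefers action 1; under heavy reward
# poisoning that preference is degraded or reversed.
print(clean[(0, 1)] > clean[(0, 0)], attacked[(0, 1)] < attacked[(0, 0)])
```

Analogous toy perturbations could target the other MDP components the survey organizes around: perturbing the observed state, substituting actions, or altering environment transitions.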
Related papers
- Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z) - A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks [42.3024294376025]
Machine unlearning is a research hotspot in the field of privacy protection.
Recent researchers have found potential privacy leakage in various machine unlearning approaches.
We analyze privacy risks in various aspects, including definitions, implementation methods, and real-world applications.
arXiv Detail & Related papers (2024-06-10T11:31:04Z) - Threats, Attacks, and Defenses in Machine Unlearning: A Survey [14.03428437751312]
Machine Unlearning (MU) has recently gained considerable attention due to its potential to achieve Safe AI.
This survey aims to fill the gap left by the extensive but scattered studies on threats, attacks, and defenses in machine unlearning.
arXiv Detail & Related papers (2024-03-20T15:40:18Z) - The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions from the viewpoints of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z) - A Survey of Federated Unlearning: A Taxonomy, Challenges and Future
Directions [71.16718184611673]
The evolution of privacy-preserving Federated Learning (FL) has led to an increasing demand for implementing the right to be forgotten.
The implementation of selective forgetting is particularly challenging in FL due to its decentralized nature.
Federated Unlearning (FU) emerges as a strategic solution to address the increasing need for data privacy.
arXiv Detail & Related papers (2023-10-30T01:34:33Z) - Systemization of Knowledge (SoK)- Cross Impact of Transfer Learning in Cybersecurity: Offensive, Defensive and Threat Intelligence Perspectives [25.181087776375914]
This paper presents a comprehensive survey of transfer learning applications in cybersecurity.
The survey highlights the significance of transfer learning in addressing critical issues in cybersecurity.
The paper identifies future research directions and challenges that require community attention.
arXiv Detail & Related papers (2023-09-12T00:26:38Z) - Inspect, Understand, Overcome: A Survey of Practical Methods for AI
Safety [54.478842696269304]
The use of deep neural networks (DNNs) in safety-critical applications is challenging due to numerous model-inherent shortcomings.
In recent years, a zoo of state-of-the-art techniques aiming to address these safety concerns has emerged.
Our paper addresses both machine learning experts and safety engineers.
arXiv Detail & Related papers (2021-04-29T09:54:54Z) - Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z) - Machine Learning (In) Security: A Stream of Problems [17.471312325933244]
We identify, detail, and discuss the main challenges in the correct application of Machine Learning techniques to cybersecurity data.
We evaluate how concept drift, evolution, delayed labels, and adversarial ML impact the existing solutions.
We present how existing solutions may fail under certain circumstances, and propose mitigations to them.
arXiv Detail & Related papers (2020-10-30T03:40:10Z) - Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.