New Challenges in Reinforcement Learning: A Survey of Security and Privacy
- URL: http://arxiv.org/abs/2301.00188v1
- Date: Sat, 31 Dec 2022 12:30:43 GMT
- Title: New Challenges in Reinforcement Learning: A Survey of Security and Privacy
- Authors: Yunjiao Lei, Dayong Ye, Sheng Shen, Yulei Sui, Tianqing Zhu, Wanlei Zhou
- Abstract summary: Reinforcement learning (RL) is one of the most important branches of AI.
RL has been widely applied in multiple areas, such as healthcare, data markets, autonomous driving, and robotics.
Some of these applications and systems have been shown to be vulnerable to security or privacy attacks.
- Score: 26.706957408693363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reinforcement learning (RL) is one of the most important branches of AI. Due
to its capacity for self-adaption and decision-making in dynamic environments,
reinforcement learning has been widely applied in multiple areas, such as
healthcare, data markets, autonomous driving, and robotics. However, some of
these applications and systems have been shown to be vulnerable to security or
privacy attacks, resulting in unreliable or unstable services. A large number
of studies have focused on these security and privacy problems in reinforcement
learning. However, few surveys have provided a systematic review and comparison
of existing problems and state-of-the-art solutions to keep up with the pace of
emerging threats. Accordingly, we herein present such a comprehensive review to
explain and summarize the challenges associated with security and privacy in
reinforcement learning from a new perspective, namely that of the Markov
Decision Process (MDP). In this survey, we first introduce the key concepts
related to this area. Next, we cover the security and privacy issues linked to
the state, action, environment, and reward function of the MDP,
respectively. We further highlight the special characteristics of security and
privacy methodologies related to reinforcement learning. Finally, we discuss
the possible future research directions within this area.
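The MDP framing above maps directly onto the agent-environment interaction loop. As a concrete illustration, here is a minimal sketch of tabular Q-learning on a toy chain MDP, with comments marking the four attack surfaces the survey organizes its review around: state, action, environment, and reward. All names (ChainEnv, RewardFlipper, etc.) are hypothetical placeholders for exposition, not constructs from the paper; a privacy-side counterpart appears after the related-papers list below.

```python
import random

class ChainEnv:
    """Toy 5-state chain MDP: actions move left (0) or right (1); reward 1.0 at the far end."""
    def __init__(self, n_states=5):
        self.n_states = n_states
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        delta = 1 if action == 1 else -1
        self.state = max(0, min(self.n_states - 1, self.state + delta))
        done = self.state == self.n_states - 1
        return self.state, (1.0 if done else 0.0), done

class RewardFlipper:
    """Toy attacker: poisons only the reward channel; other channels pass through."""
    def perturb_state(self, s):
        return s                                    # surface (1) left benign here
    def perturb_action(self, a):
        return a                                    # surface (2) left benign here
    def perturb_reward(self, r):
        return -r if random.random() < 0.2 else r   # flip 20% of rewards

def q_learning(env, attacker=None, episodes=200, eps=0.1, alpha=0.1, gamma=0.9):
    q = [[0.0, 0.0] for _ in range(env.n_states)]
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 1000:            # step cap bounds poisoned runs
            steps += 1
            if attacker:
                s = attacker.perturb_state(s)       # (1) state attack: tamper with
                                                    #     the agent's observation
            a = random.randrange(2) if random.random() < eps else q[s].index(max(q[s]))
            if attacker:
                a = attacker.perturb_action(a)      # (2) action attack: hijack the
                                                    #     executed action
            # (3) environment attack: a poisoned environment would return
            #     manipulated transitions from this call
            s_next, r, done = env.step(a)
            if attacker:
                r = attacker.perturb_reward(r)      # (4) reward attack: corrupt
                                                    #     the learning signal
            q[s][a] += alpha * (r + gamma * max(q[s_next]) - q[s][a])
            s = s_next
    return q

print(q_learning(ChainEnv(), attacker=RewardFlipper()))
```

The benign pass-through hooks are kept in the loop to show where a state- or action-channel adversary would sit; only the reward channel is actually poisoned in this toy run.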
Related papers
- SoK: Unifying Cybersecurity and Cybersafety of Multimodal Foundation Models with an Information Theory Approach [58.93030774141753]
Multimodal foundation models (MFMs) represent a significant advancement in artificial intelligence.
This paper conceptualizes cybersafety and cybersecurity in the context of multimodal learning.
We present a comprehensive Systematization of Knowledge (SoK) to unify these concepts in MFMs, identifying key threats.
arXiv Detail & Related papers (2024-11-17T23:06:20Z)
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Model inversion attacks (MIAs) are a recently emerged type of privacy attack that aims to extract sensitive features of the private data used for training.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
- New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook [54.24701201956833]
Security and privacy issues have undermined users' confidence in pre-trained models.
Current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models.
The survey proposes a taxonomy that categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches.
arXiv Detail & Related papers (2024-11-12T10:15:33Z)
- Linkage on Security, Privacy and Fairness in Federated Learning: New Balances and New Perspectives [48.48294460952039]
This survey offers comprehensive descriptions of the privacy, security, and fairness issues in federated learning.
We contend that there exists a trade-off between privacy and fairness and between security and sharing.
arXiv Detail & Related papers (2024-06-16T10:31:45Z)
- A Survey on Machine Unlearning: Techniques and New Emerged Privacy Risks [42.3024294376025]
Machine unlearning is a research hotspot in the field of privacy protection.
Recent researchers have found potential privacy leaks in various machine unlearning approaches.
We analyze privacy risks in various aspects, including definitions, implementation methods, and real-world applications.
arXiv Detail & Related papers (2024-06-10T11:31:04Z)
- Threats, Attacks, and Defenses in Machine Unlearning: A Survey [14.03428437751312]
Machine Unlearning (MU) has recently gained considerable attention due to its potential to achieve Safe AI.
This survey aims to fill the gap between the extensive number of studies on threats, attacks, and defenses in machine unlearning and the absence of a comprehensive review of them.
arXiv Detail & Related papers (2024-03-20T15:40:18Z)
- Systemization of Knowledge (SoK)- Cross Impact of Transfer Learning in Cybersecurity: Offensive, Defensive and Threat Intelligence Perspectives [25.181087776375914]
This paper presents a comprehensive survey of transfer learning applications in cybersecurity.
The survey highlights the significance of transfer learning in addressing critical issues in cybersecurity.
The paper identifies future research directions and challenges that require community attention.
arXiv Detail & Related papers (2023-09-12T00:26:38Z)
- Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL and a unique taxonomy covering 1) threat models, 2) poisoning attacks and corresponding defenses for robustness, and 3) inference attacks and corresponding defenses for privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z)
- Machine Learning (In) Security: A Stream of Problems [17.471312325933244]
We identify, detail, and discuss the main challenges in the correct application of Machine Learning techniques to cybersecurity data.
We evaluate how concept drift, evolution, delayed labels, and adversarial ML impact the existing solutions.
We present how existing solutions may fail under certain circumstances, and propose mitigations to them.
arXiv Detail & Related papers (2020-10-30T03:40:10Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
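On the privacy side, both the main survey and the federated-learning entries above discuss defenses built on differential privacy. The sketch below shows the standard Laplace mechanism applied to a learned Q-table before release; it is a simplified illustration under an assumed sensitivity bound, not a method taken from any of the listed papers.

```python
import random

def laplace_noise(scale):
    """Laplace(0, scale) sample, via the difference of two i.i.d. exponentials."""
    lam = 1.0 / scale
    return random.expovariate(lam) - random.expovariate(lam)

def privatize_q_table(q_table, sensitivity, epsilon):
    """Add Laplace(sensitivity / epsilon) noise to each Q-value before release."""
    scale = sensitivity / epsilon
    return [[q + laplace_noise(scale) for q in row] for row in q_table]

# Hypothetical usage: release a noisy 3-state, 2-action Q-table under epsilon = 0.5,
# assuming (for illustration only) one trajectory shifts any Q-value by at most 1.0.
q = [[0.2, 0.8], [0.5, 0.1], [0.0, 0.9]]
noisy_q = privatize_q_table(q, sensitivity=1.0, epsilon=0.5)
print(noisy_q)
```

The noise scale sensitivity/epsilon is the usual Laplace-mechanism calibration; for a real RL system, the sensitivity of the released Q-values to a single user's data would have to be derived and is rarely a simple constant.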
This list is automatically generated from the titles and abstracts of the papers on this site.