Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and
Legal Implications
- URL: http://arxiv.org/abs/2305.14553v1
- Date: Tue, 23 May 2023 22:27:53 GMT
- Title: Adversarial Machine Learning and Cybersecurity: Risks, Challenges, and
Legal Implications
- Authors: Micah Musser, Andrew Lohn, James X. Dempsey, Jonathan Spring, Ram
Shankar Siva Kumar, Brenda Leong, Christina Liaghati, Cindy Martinez, Crystal
D. Grant, Daniel Rohrer, Heather Frase, Jonathan Elliott, John Bansemer,
Mikel Rodriguez, Mitt Regan, Rumman Chowdhury, Stefan Hermanek
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In July 2022, the Center for Security and Emerging Technology (CSET) at
Georgetown University and the Program on Geopolitics, Technology, and
Governance at the Stanford Cyber Policy Center convened a workshop of experts
to examine the relationship between vulnerabilities in artificial intelligence
systems and more traditional types of software vulnerabilities. Topics
discussed included the extent to which AI vulnerabilities can be handled under
standard cybersecurity processes, the barriers currently preventing the
accurate sharing of information about AI vulnerabilities, legal issues
associated with adversarial attacks on AI systems, and potential areas where
government support could improve AI vulnerability management and mitigation.
This report is meant to accomplish two things. First, it provides a
high-level discussion of AI vulnerabilities, including the ways in which they
are disanalogous to other types of vulnerabilities, and the current state of
affairs regarding information sharing and legal oversight of AI
vulnerabilities. Second, it attempts to articulate broad recommendations as
endorsed by the majority of participants at the workshop.
Related papers
- Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z)
- The Security and Privacy of Mobile Edge Computing: An Artificial Intelligence Perspective [64.36680481458868]
Mobile Edge Computing (MEC) is a new computing paradigm that enables cloud computing and information technology (IT) services to be delivered at the network's edge.
This paper provides a survey of security and privacy in MEC from the perspective of Artificial Intelligence (AI).
We focus on new security and privacy issues, as well as potential solutions, from the viewpoint of AI.
arXiv Detail & Related papers (2024-01-03T07:47:22Z)
- Artificial Intelligence Ethics Education in Cybersecurity: Challenges and Opportunities: a focus group report [10.547686057159309]
The emergence of AI tools in cybersecurity creates many opportunities and uncertainties.
Confronting the "black box" mentality in AI cybersecurity work is also critically important.
Future AI educators and practitioners need to address these issues by implementing rigorous technical training curricula.
arXiv Detail & Related papers (2023-11-02T00:08:07Z)
- ExTRUST: Reducing Exploit Stockpiles with a Privacy-Preserving Depletion System for Inter-State Relationships [4.349142920611964]
This paper proposes a privacy-preserving approach that allows multiple state parties to privately compare their stock of vulnerabilities and exploits.
We call our system ExTRUST and show that it is scalable and can withstand several attack scenarios.
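ExTRUST itself relies on purpose-built privacy-preserving protocols, but the core idea of comparing stockpiles without revealing them can be sketched with a naive salted-hash set intersection. Everything below (the CVE identifiers, the shared salt, the `blind` helper) is a hypothetical illustration, not ExTRUST's protocol; a shared salt would let either party brute-force small identifier spaces, which is precisely what a real cryptographic private-set-intersection scheme avoids:

```python
import hashlib

def blind(items: set[str], salt: bytes) -> set[str]:
    """Hash each identifier with a shared salt so raw values are never exchanged."""
    return {hashlib.sha256(salt + item.encode()).hexdigest() for item in items}

# Hypothetical stockpiles of vulnerability identifiers held by two parties.
party_a = {"CVE-2021-44228", "CVE-2017-0144", "CVE-2019-0708"}
party_b = {"CVE-2017-0144", "CVE-2014-0160"}

shared_salt = b"jointly-agreed-salt"  # negotiated out of band (illustrative only)

# Each party publishes only blinded digests; intersecting them reveals
# how many vulnerabilities both hold without disclosing either full set.
overlap = blind(party_a, shared_salt) & blind(party_b, shared_salt)
print(f"{len(overlap)} vulnerability identifier(s) held by both parties")
```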
arXiv Detail & Related papers (2023-06-01T12:02:17Z)
- Explainable Artificial Intelligence and Cybersecurity: A Systematic Literature Review [0.799536002595393]
XAI aims to make the operation of AI algorithms more interpretable for their users and developers.
This work seeks to investigate the current research scenario on XAI applied to cybersecurity.
arXiv Detail & Related papers (2023-02-27T17:47:56Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- A System for Automated Open-Source Threat Intelligence Gathering and Management [53.65687495231605]
SecurityKG is a system for automated open-source cyber threat intelligence (OSCTI) gathering and management.
It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
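For a minimal flavor of what such extraction involves, here is a sketch with an invented report snippet and plain regular expressions standing in for the AI and NLP components SecurityKG actually employs:

```python
import re

# Invented report snippet; a real pipeline would ingest blogs, advisories, etc.
report = (
    "The actor exploited CVE-2021-44228 on hosts 10.0.8.14 and "
    "192.168.1.77, then staged payloads via http://203.0.113.9/drop."
)

# Plain regular expressions standing in for learned extractors.
cve_ids = re.findall(r"CVE-\d{4}-\d{4,7}", report)
ipv4s = re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", report)

print("CVEs:", cve_ids)  # ['CVE-2021-44228']
print("IPs: ", ipv4s)    # ['10.0.8.14', '192.168.1.77', '203.0.113.9']
```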
arXiv Detail & Related papers (2021-01-19T18:31:35Z)
- Dos and Don'ts of Machine Learning in Computer Security [74.1816306998445]
Despite great potential, machine learning in security is prone to subtle pitfalls that undermine its performance.
We identify common pitfalls in the design, implementation, and evaluation of learning-based security systems.
We propose actionable recommendations to support researchers in avoiding or mitigating the pitfalls where possible.
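One pitfall commonly cited in this literature, temporal data snooping, is easy to demonstrate: randomly splitting time-stamped security data lets the model train on samples collected after its test samples. A minimal sketch with synthetic drifting data (the feed and features are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a time-stamped malware feed with concept drift:
# the boundary between benign and malicious shifts as attackers adapt.
n = 2000
t = np.linspace(0.0, 1.0, n)          # sample index doubles as collection time
X = rng.normal(size=(n, 5))
y = (X[:, 0] + X[:, 1] > 2.0 * t - 1.0).astype(int)

# Pitfall (temporal snooping): a random split leaks future samples
# into the training set.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
snooped = LogisticRegression().fit(X_tr, y_tr).score(X_te, y_te)

# Sounder evaluation: train strictly on the past, test on the future.
cut = int(0.75 * n)
honest = LogisticRegression().fit(X[:cut], y[:cut]).score(X[cut:], y[cut:])

print(f"random-split accuracy:        {snooped:.3f}")
print(f"chronological-split accuracy: {honest:.3f}")  # typically noticeably lower
```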
arXiv Detail & Related papers (2020-10-19T13:09:31Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
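For intuition about the gradient-based evasion attacks such surveys cover, here is a minimal FGSM-style sketch against a hand-rolled logistic-regression "detector". The weights, bias, and feature vector are invented for illustration; real cyber-security features are often discrete or constrained, which is one of the end-to-end challenges the paper highlights:

```python
import numpy as np

# Hypothetical linear "malware detector": score > 0.5 means malicious.
w = np.array([1.2, -0.4, 2.0, 0.7])   # invented weights
b = -1.0

def detect(x: np.ndarray) -> float:
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))   # logistic score

x = np.array([1.0, 0.5, 0.8, 1.5])    # invented feature vector, flagged as malicious
print(f"original score:  {detect(x):.3f}")      # ~0.93

# FGSM-style evasion: for a logistic model, the loss gradient w.r.t. the
# input is proportional to w, so the attacker steps against sign(w),
# bounded by epsilon per feature.
epsilon = 0.7
x_adv = x - epsilon * np.sign(w)
print(f"perturbed score: {detect(x_adv):.3f}")  # ~0.41, below the 0.5 threshold
```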
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- Vulnerabilities of Connectionist AI Applications: Evaluation and Defence [0.0]
This article deals with the IT security of connectionist artificial intelligence (AI) applications, focusing on threats to integrity.
A comprehensive list of threats and possible mitigations is presented by reviewing the state-of-the-art literature.
The discussion of mitigations likewise goes beyond the AI system itself, advocating that AI systems be viewed in the context of their supply chains.
arXiv Detail & Related papers (2020-03-18T12:33:59Z)