Attacks, Defenses, And Tools: A Framework To Facilitate Robust AI/ML
Systems
- URL: http://arxiv.org/abs/2202.09465v1
- Date: Fri, 18 Feb 2022 22:54:04 GMT
- Authors: Mohamad Fazelnia, Igor Khokhlov, Mehdi Mirakhorli
- Abstract summary: Software systems are increasingly relying on Artificial Intelligence (AI) and Machine Learning (ML) components.
This paper presents a framework to characterize attacks and weaknesses associated with AI-enabled systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Software systems are increasingly relying on Artificial Intelligence (AI) and
Machine Learning (ML) components. The emerging popularity of AI techniques in
various application domains attracts malicious actors and adversaries.
Therefore, the developers of AI-enabled software systems need to take into
account various novel cyber-attacks and vulnerabilities that these systems may
be susceptible to. This paper presents a framework to characterize attacks and
weaknesses associated with AI-enabled systems and provide mitigation techniques
and defense strategies. This framework aims to support software designers in
taking proactive measures in developing AI-enabled software, understanding the
attack surface of such systems, and developing products that are resilient to
various emerging attacks associated with ML. The developed framework covers a
broad spectrum of attacks, mitigation techniques, and defensive and offensive
tools. In this paper, we demonstrate the framework architecture and its major
components, describe their attributes, and discuss the long-term goals of this
research.
Related papers
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI Models [0.0]
There is a need to identify system vulnerabilities and potential threats, and to characterize the properties that will enhance system robustness.
A secondary need is to share this AI security threat intelligence among stakeholders such as model developers, users, and AI/ML security professionals.
In this paper, we create and describe a prototype system, CTI4AI, to address the need to methodically identify and share AI/ML-specific vulnerabilities and threat intelligence.
arXiv Detail & Related papers (2022-08-16T00:16:58Z)
- Enabling Automated Machine Learning for Model-Driven AI Engineering [60.09869520679979]
We propose a novel approach to enable Model-Driven Software Engineering and Model-Driven AI Engineering.
In particular, we support Automated ML, thus assisting software engineers without deep AI knowledge in developing AI-intensive systems.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- Artificial Intelligence-Based Smart Grid Vulnerabilities and Potential Solutions for Fake-Normal Attacks: A Short Review [0.0]
Smart grid systems are critical to the power industry; however, their sophisticated architectural design and operations expose them to a number of cybersecurity threats.
Artificial Intelligence (AI)-based technologies are becoming increasingly popular for detecting cyber assaults in a variety of computer settings.
Present AI systems are being exposed and defeated by recently emerged sophisticated adversarial systems such as Generative Adversarial Networks (GANs).
arXiv Detail & Related papers (2022-02-14T21:41:36Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Security and Privacy for Artificial Intelligence: Opportunities and Challenges [11.368470074697747]
In recent years, most AI models have proven vulnerable to advanced and sophisticated hacking techniques.
This challenge has motivated concerted research efforts into adversarial AI.
We present a holistic cyber security review that demonstrates adversarial attacks against AI applications.
arXiv Detail & Related papers (2021-02-09T06:06:13Z)
- Technology Readiness Levels for Machine Learning Systems [107.56979560568232]
Development and deployment of machine learning systems can be executed easily with modern tools, but the process is typically rushed and treated as a means to an end.
We have developed a proven systems engineering approach for machine learning development and deployment.
Our "Machine Learning Technology Readiness Levels" framework defines a principled process to ensure robust, reliable, and responsible systems.
arXiv Detail & Related papers (2021-01-11T15:54:48Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
- Vulnerabilities of Connectionist AI Applications: Evaluation and Defence [0.0]
This article deals with the IT security of connectionist artificial intelligence (AI) applications, focusing on threats to integrity.
A comprehensive list of threats and possible mitigations is presented by reviewing the state-of-the-art literature.
The discussion of mitigations is likewise not restricted to the level of the AI system itself but rather advocates viewing AI systems in the context of their supply chains.
arXiv Detail & Related papers (2020-03-18T12:33:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.