Vulnerabilities of Connectionist AI Applications: Evaluation and Defence
- URL: http://arxiv.org/abs/2003.08837v1
- Date: Wed, 18 Mar 2020 12:33:59 GMT
- Title: Vulnerabilities of Connectionist AI Applications: Evaluation and Defence
- Authors: Christian Berghoff and Matthias Neu and Arndt von Twickel
- Abstract summary: This article deals with the IT security of connectionist artificial intelligence (AI) applications, focusing on threats to integrity.
A comprehensive list of threats and possible mitigations is presented by reviewing the state-of-the-art literature.
The discussion of mitigations is likewise not restricted to the level of the AI system itself but rather advocates viewing AI systems in the context of their supply chains.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This article deals with the IT security of connectionist artificial
intelligence (AI) applications, focusing on threats to integrity, one of the
three IT security goals. Such threats are for instance most relevant in
prominent AI computer vision applications. In order to present a holistic view
on the IT security goal integrity, many additional aspects such as
interpretability, robustness and documentation are taken into account. A
comprehensive list of threats and possible mitigations is presented by
reviewing the state-of-the-art literature. AI-specific vulnerabilities such as
adversarial attacks and poisoning attacks as well as their AI-specific root
causes are discussed in detail. Additionally and in contrast to former reviews,
the whole AI supply chain is analysed with respect to vulnerabilities,
including the planning, data acquisition, training, evaluation and operation
phases. The discussion of mitigations is likewise not restricted to the level
of the AI system itself but rather advocates viewing AI systems in the context
of their supply chains and their embeddings in larger IT infrastructures and
hardware devices. Based on this and the observation that adaptive attackers may
circumvent any single published AI-specific defence to date, the article
concludes that single protective measures are not sufficient but rather
multiple measures on different levels have to be combined to achieve a minimum
level of IT security for AI applications.
Related papers
- Establishing Minimum Elements for Effective Vulnerability Management in AI Software [4.067778725390327]
This paper discusses the minimum elements for AI vulnerability management and the establishment of an Artificial Intelligence Vulnerability Database (AIVD).
It presents standardized formats and protocols for disclosing, analyzing, cataloging, and documenting AI vulnerabilities.
arXiv Detail & Related papers (2024-11-18T06:22:20Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
Generative AI, particularly large language models (LLMs), is becoming increasingly integrated into production applications.
New attack surfaces and vulnerabilities emerge and put a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z)
- AI Agents Under Threat: A Survey of Key Security Challenges and Future Pathways [10.16690494897609]
An Artificial Intelligence (AI) agent is a software entity that autonomously performs tasks or makes decisions based on pre-defined objectives and data inputs.
This survey delves into the emerging security threats faced by AI agents, categorizing them into four critical knowledge gaps.
By systematically reviewing these threats, this paper highlights both the progress made and the existing limitations in safeguarding AI agents.
arXiv Detail & Related papers (2024-06-04T01:22:31Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- The AI Security Pyramid of Pain [0.18820558426635298]
We introduce the AI Security Pyramid of Pain, a framework that adapts the cybersecurity Pyramid of Pain to categorize and prioritize AI-specific threats.
This framework provides a structured approach to understanding and addressing various levels of AI threats.
arXiv Detail & Related papers (2024-02-16T21:14:11Z)
- Towards Automated Classification of Attackers' TTPs by combining NLP with ML Techniques [77.34726150561087]
We evaluate and compare different Natural Language Processing (NLP) and machine learning techniques used for security information extraction in research.
Based on our investigations we propose a data processing pipeline that automatically classifies unstructured text according to attackers' tactics and techniques.
arXiv Detail & Related papers (2022-07-18T09:59:21Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing this effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Security and Privacy for Artificial Intelligence: Opportunities and Challenges [11.368470074697747]
In recent years, most AI models have proven vulnerable to advanced and sophisticated hacking techniques.
This challenge has motivated concerted research efforts into adversarial AI.
We present a holistic cyber security review that demonstrates adversarial attacks against AI applications.
arXiv Detail & Related papers (2021-02-09T06:06:13Z)
- A System for Automated Open-Source Threat Intelligence Gathering and Management [53.65687495231605]
SecurityKG is a system for automated OSCTI gathering and management.
It uses a combination of AI and NLP techniques to extract high-fidelity knowledge about threat behaviors.
arXiv Detail & Related papers (2021-01-19T18:31:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.