The Threat of Offensive AI to Organizations
- URL: http://arxiv.org/abs/2106.15764v1
- Date: Wed, 30 Jun 2021 01:03:28 GMT
- Title: The Threat of Offensive AI to Organizations
- Authors: Yisroel Mirsky, Ambra Demontis, Jaidip Kotak, Ram Shankar, Deng Gelei,
Liu Yang, Xiangyu Zhang, Wenke Lee, Yuval Elovici, Battista Biggio
- Abstract summary: This survey explores the threat of offensive AI to organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
- Score: 52.011307264694665
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: AI has provided us with the ability to automate tasks, extract information
from vast amounts of data, and synthesize media that is nearly
indistinguishable from the real thing. However, the same tools can also be used
for negative purposes. In particular, cyber adversaries can use AI (such as
machine learning) to enhance their attacks and expand their campaigns.
Although offensive AI has been discussed in the past, there is a need to
analyze and understand the threat in the context of organizations. For example,
how does an AI-capable adversary impact the cyber kill chain? Does AI benefit
the attacker more than the defender? What are the most significant AI threats
facing organizations today and what will be their impact on the future?
In this survey, we explore the threat of offensive AI to organizations.
First, we present the background and discuss how AI changes the adversary's
methods, strategies, goals, and overall attack model. Then, through a
literature review, we identify 33 offensive AI capabilities which adversaries
can use to enhance their attacks. Finally, through a user study spanning
industry and academia, we rank the AI threats and provide insights on the
adversaries.
Related papers
- A Survey on Offensive AI Within Cybersecurity [1.8206461789819075]
This survey comprehensively covers attacks against and using AI systems.
It examines the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure.
It covers adversarial machine learning; attacks against AI models, infrastructure, and interfaces; and offensive techniques such as information gathering, social engineering, and weaponized AI (a minimal sketch of one such technique follows this entry).
arXiv Detail & Related papers (2024-09-26T17:36:22Z)
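As a concrete, hedged illustration of the adversarial machine learning mentioned in the entry above, the sketch below implements the canonical Fast Gradient Sign Method (FGSM) evasion attack. The classifier, inputs, and epsilon value are hypothetical stand-ins; none of the surveyed papers prescribes this particular code.

```python
# Minimal FGSM sketch (Goodfellow et al., 2015): perturb an input so a
# classifier's loss increases while the change stays visually small.
# `model`, `x`, `y`, and `epsilon` are hypothetical stand-ins.
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, x: torch.Tensor,
                y: torch.Tensor, epsilon: float = 0.03) -> torch.Tensor:
    """Return a perturbed copy of x that pushes the model's loss uphill."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)   # loss w.r.t. the true labels
    loss.backward()
    # One signed-gradient step: tiny per-pixel change, large loss increase.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()     # stay in the valid pixel range
```

Against an undefended image classifier, perturbations this small routinely flip predictions, which is why evasion attacks recur throughout the offensive-AI literature.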
- A+AI: Threats to Society, Remedies, and Governance [0.0]
This document focuses on the threats, especially near-term threats, that Artificial Intelligence (AI) poses to society.
It includes a table showing which countermeasures are likely to mitigate which threats.
The paper lists specific actions that governments should take as soon as possible.
arXiv Detail & Related papers (2024-09-03T18:43:47Z)
- Artificial Influence: An Analysis Of AI-Driven Persuasion [0.0]
We warn that ubiquitous, highly persuasive AI systems could alter our information environment so significantly as to contribute to a loss of human control of our own future.
We conclude that none of the proposed solutions will be airtight, and that individuals and governments will need to take active steps to guard against the most pernicious effects of persuasive AI.
arXiv Detail & Related papers (2023-03-15T16:05:11Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Adversarial Policies Beat Superhuman Go AIs [54.15639517188804]
We attack the state-of-the-art Go-playing AI system KataGo by training adversarial policies against it.
Our adversaries do not win by playing Go well. Instead, they trick KataGo into making serious blunders.
Our results demonstrate that even superhuman AI systems may harbor surprising failure modes (a toy sketch of the adversarial-policy idea follows this entry).
arXiv Detail & Related papers (2022-11-01T03:13:20Z)
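To make the adversarial-policy idea above concrete, here is a toy, self-contained sketch: a frozen victim with a fixed, biased strategy is treated as part of the environment, and the adversary wins by learning that bias rather than by playing well. The game (biased rock-paper-scissors) and every name in it are hypothetical simplifications, not the paper's actual KataGo training pipeline.

```python
# Toy adversarial-policy sketch: exploit a *frozen* victim's fixed bias.
# Biased rock-paper-scissors stands in for Go; nothing here reproduces
# the paper's method (which trains RL policies against KataGo itself).
import random
from collections import Counter

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # move -> its counter

def frozen_victim() -> str:
    """The victim's fixed policy; its 'weights' are never updated."""
    return random.choices(MOVES, weights=[0.5, 0.25, 0.25])[0]

def train_adversary(rounds: int = 10_000) -> None:
    """Estimate the victim's bias from observation and counter it."""
    observed = Counter()
    wins = 0
    for _ in range(rounds):
        # Exploit: counter the victim's empirically most common move.
        guess = observed.most_common(1)[0][0] if observed else random.choice(MOVES)
        our_move = BEATS[guess]
        victim_move = frozen_victim()
        observed[victim_move] += 1
        wins += our_move == BEATS[victim_move]   # did our move beat theirs?
    print(f"adversary win rate: {wins / rounds:.2%}")  # ~50%, vs. a 33% random baseline

if __name__ == "__main__":
    train_adversary()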
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines potential ways in which generative artworks can help bridge these stakeholders' perspectives by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- "Weak AI" is Likely to Never Become "Strong AI", So What is its Greatest Value for us? [4.497097230665825]
Many researchers argue that little substantial progress has been made in AI in recent decades.
The author (1) explains why controversies about AI exist and (2) distinguishes two paradigms of AI research, termed "weak AI" and "strong AI".
arXiv Detail & Related papers (2021-03-29T02:57:48Z)
- Towards AI Forensics: Did the Artificial Intelligence System Do It? [2.5991265608180396]
We focus on AI that is potentially "malicious by design" and on grey-box analysis.
Our evaluation using convolutional neural networks illustrates challenges and ideas for identifying malicious AI (a minimal grey-box probe is sketched after this entry).
arXiv Detail & Related papers (2020-05-27T20:28:19Z)
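As a concrete (and heavily simplified) illustration of grey-box analysis in this setting, the sketch below checks whether a suspected trigger patch systematically flips a CNN's predictions, one crude signal of a model that is malicious by design. The function name, patch placement, and `load_suspect_model` helper are all hypothetical; the paper's actual forensic procedure is more elaborate.

```python
# Hypothetical grey-box backdoor probe: a benign-looking model that changes
# its prediction whenever a fixed patch appears is a forensic red flag.
import torch
import torch.nn as nn

def trigger_flip_rate(model: nn.Module, images: torch.Tensor,
                      trigger: torch.Tensor) -> float:
    """Fraction of inputs whose predicted class changes once the
    candidate trigger patch is stamped into the top-left corner."""
    model.eval()
    with torch.no_grad():
        clean = model(images).argmax(dim=1)
        patched = images.clone()
        h, w = trigger.shape[-2:]
        patched[..., :h, :w] = trigger          # stamp the candidate patch
        poisoned = model(patched).argmax(dim=1)
    return (clean != poisoned).float().mean().item()

# Hypothetical usage: a high flip rate on random inputs is suspicious.
#   model = load_suspect_model()                # assumed helper, not a real API
#   imgs = torch.rand(64, 3, 32, 32)
#   print(trigger_flip_rate(model, imgs, torch.ones(3, 4, 4)))
```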