Advancing the Research and Development of Assured Artificial
Intelligence and Machine Learning Capabilities
- URL: http://arxiv.org/abs/2009.13250v1
- Date: Thu, 24 Sep 2020 20:12:14 GMT
- Title: Advancing the Research and Development of Assured Artificial
Intelligence and Machine Learning Capabilities
- Authors: Tyler J. Shipp, Daniel J. Clouse, Michael J. De Lucia, Metin B.
Ahiskali, Kai Steverson, Jonathan M. Mullin, Nathaniel D. Bastian
- Abstract summary: An adversarial AI (A2I) and adversarial ML (AML) attack seeks to deceive and manipulate AI/ML models.
It is imperative that AI/ML models can defend against these attacks.
The A2I Working Group (A2IWG) seeks to advance the research and development of assured AI/ML capabilities.
- Score: 2.688723831634804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) and machine learning (ML) have become
increasingly vital in the development of novel defense and intelligence
capabilities across all domains of warfare. An adversarial AI (A2I) and
adversarial ML (AML) attack seeks to deceive and manipulate AI/ML models. It is
imperative that AI/ML models can defend against these attacks. A2I/AML defenses
will help provide the necessary assurance of these advanced capabilities that
use AI/ML models. The A2I Working Group (A2IWG) seeks to advance the research
and development of assured AI/ML capabilities via new A2I/AML defenses by
fostering a collaborative environment across the U.S. Department of Defense and
U.S. Intelligence Community. The A2IWG aims to identify specific challenges
that it can help solve or address more directly, with initial focus on three
topics: AI Trusted Robustness, AI System Security, and AI/ML Architecture
Vulnerabilities.
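To make the threat model concrete, below is a minimal sketch of one well-known AML evasion technique, the fast gradient sign method (FGSM), run against a toy logistic-regression model; the model, data, and epsilon here are illustrative assumptions, not attacks or defenses taken from this paper.

import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1  # toy model parameters
x = rng.normal(size=16)          # a benign input
y = 1.0                          # its true label

def predict(x):
    # Sigmoid score of the toy logistic-regression model.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

# Gradient of the cross-entropy loss with respect to the input x.
grad_x = (predict(x) - y) * w

# FGSM: nudge each feature by +/- eps in the direction that increases
# the loss, producing an adversarial example that lowers the score.
eps = 0.25
x_adv = x + eps * np.sign(grad_x)

print(f"clean score: {predict(x):.3f}, adversarial score: {predict(x_adv):.3f}")

Defenses of the kind the A2IWG pursues (e.g., under AI Trusted Robustness) aim to keep model predictions stable under such small, targeted perturbations.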
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that many shortcomings of current AI systems stem from one overarching failure: they lack wisdom.
While AI research has focused on task-level strategies, metacognition remains underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- A Survey on Offensive AI Within Cybersecurity [1.8206461789819075]
This survey paper on offensive AI will comprehensively cover various aspects related to attacks against and using AI systems.
It will delve into the impact of offensive AI practices on different domains, including consumer, enterprise, and public digital infrastructure.
The paper will explore adversarial machine learning, attacks against AI models, infrastructure, and interfaces, along with offensive techniques like information gathering, social engineering, and weaponized AI.
arXiv Detail & Related papers (2024-09-26T17:36:22Z)
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, focusing attention on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z) - Killer Apps: Low-Speed, Large-Scale AI Weapons [2.2899177316144943]
Artificial Intelligence (AI) and Machine Learning (ML) advancements present new challenges and opportunities in warfare and security.
This paper explores the concept of AI weapons, their deployment, detection, and potential countermeasures.
arXiv Detail & Related papers (2024-01-14T12:09:40Z) - A Red Teaming Framework for Securing AI in Maritime Autonomous Systems [0.0]
We propose one of the first red team frameworks for evaluating the AI security of maritime autonomous systems.
This framework is a multi-part checklist, which can be tailored to different systems and requirements.
We demonstrate that this framework is highly effective for a red team to use to uncover numerous vulnerabilities within a real-world maritime autonomous system's AI.
arXiv Detail & Related papers (2023-12-08T14:59:07Z) - The Rise and Potential of Large Language Model Based Agents: A Survey [91.71061158000953]
Large language models (LLMs) are regarded as potential sparks for Artificial General Intelligence (AGI).
We start by tracing the concept of agents from its philosophical origins to its development in AI, and explain why LLMs are suitable foundations for agents.
We explore the extensive applications of LLM-based agents in three aspects: single-agent scenarios, multi-agent scenarios, and human-agent cooperation.
arXiv Detail & Related papers (2023-09-14T17:12:03Z) - CTI4AI: Threat Intelligence Generation and Sharing after Red Teaming AI
Models [0.0]
There is a need to identify system vulnerabilities and potential threats, and to characterize properties that will enhance system robustness.
A secondary need is to share this AI security threat intelligence between different stakeholders such as model developers, users, and AI/ML security professionals.
In this paper, we create and describe a prototype system, CTI4AI, to address the need to methodically identify and share AI/ML-specific vulnerabilities and threat intelligence (a hypothetical sketch of such a shareable record appears after this list).
arXiv Detail & Related papers (2022-08-16T00:16:58Z) - Proceedings of the Artificial Intelligence for Cyber Security (AICS)
Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing this data effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Security and Privacy for Artificial Intelligence: Opportunities and
Challenges [11.368470074697747]
In recent years, most AI models have proven vulnerable to advanced and sophisticated hacking techniques.
This challenge has motivated concerted research efforts into adversarial AI.
We present a holistic cyber security review that demonstrates adversarial attacks against AI applications.
arXiv Detail & Related papers (2021-02-09T06:06:13Z)
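As a rough illustration of the stakeholder-to-stakeholder sharing that the CTI4AI entry above describes, the sketch below defines a hypothetical machine-readable AI/ML threat-intelligence record; every field name and value is an illustrative assumption, not CTI4AI's actual schema.

import json
from dataclasses import dataclass, asdict

@dataclass
class AIThreatRecord:
    # Hypothetical advisory record; fields are illustrative only.
    record_id: str          # unique advisory identifier
    model_family: str       # class of affected models
    attack_class: str       # e.g. evasion, poisoning, model extraction
    description: str        # human-readable summary of the threat
    mitigations: list[str]  # recommended defenses

record = AIThreatRecord(
    record_id="ATR-0001",
    model_family="image classifiers",
    attack_class="evasion",
    description="Small L-infinity perturbations flip model predictions.",
    mitigations=["adversarial training", "input randomization"],
)

# Serialize for sharing between model developers, users, and
# AI/ML security professionals.
print(json.dumps(asdict(record), indent=2))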