AI Product Security: A Primer for Developers
- URL: http://arxiv.org/abs/2304.11087v1
- Date: Tue, 18 Apr 2023 05:22:34 GMT
- Title: AI Product Security: A Primer for Developers
- Authors: Ebenezer R. H. P. Isaac and Jim Reno
- Abstract summary: It is imperative to understand the threats to machine learning products and avoid common pitfalls in AI product development.
This article is addressed to developers, designers, managers and researchers of AI software products.
- Score: 0.685316573653194
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Not long ago, "AI security" meant the research and practice of how AI can empower cybersecurity, that is, AI for security. Ever since Ian Goodfellow and his team popularized adversarial attacks on machine learning, security for AI has become an important concern and a part of AI security in its own right. It is imperative to understand the threats to machine learning products and to avoid common pitfalls in AI product development. This article is addressed to developers, designers, managers and researchers of AI software products.
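As a concrete illustration of the adversarial attacks mentioned above, below is a minimal sketch (not taken from the paper) of the fast gradient sign method (FGSM) that Goodfellow and colleagues introduced: it perturbs an input in the direction of the sign of the loss gradient so that a model misclassifies it. The toy logistic-regression weights, input, and step size are all invented for illustration.

```python
import numpy as np

# Toy logistic-regression model; all numbers are illustrative.
w = np.array([1.0, -2.0, 0.5])   # model weights
b = 0.1                          # bias
x = np.array([0.3, -0.5, 0.8])   # clean input, true label y = 1
y = 1.0

def prob(v):
    """P(y = 1 | v) under the toy model."""
    return 1.0 / (1.0 + np.exp(-(w @ v + b)))

# For logistic regression with cross-entropy loss, the gradient of
# the loss with respect to the *input* is (p - y) * w.
grad_x = (prob(x) - y) * w

# FGSM: step of size eps in the direction of the gradient's sign.
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

print(f"clean prob(y=1):       {prob(x):.3f}")      # ~0.86 (correct)
print(f"adversarial prob(y=1): {prob(x_adv):.3f}")  # ~0.43 (flipped)
```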
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
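For context, the "components of empirical risk minimization" referred to above are the hypothesis class, the loss, and the regularizer in the standard learning objective; a generic statement of that objective (our notation, not the paper's) is:

$$\hat{f} \;=\; \arg\min_{f \in \mathcal{F}} \; \frac{1}{n} \sum_{i=1}^{n} \ell\big(f(x_i),\, y_i\big) \;+\; \lambda\, \Omega(f)$$

- Trustworthy, Responsible, and Safe AI: A Comprehensive Architectural Framework for AI Safety with Challenges and Mitigations [14.150792596344674]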
AI Safety is an emerging area of critical importance to the safe adoption and deployment of AI systems.
Our goal is to promote advancement in AI safety research, and ultimately enhance people's trust in digital transformation.
arXiv Detail & Related papers (2024-08-23T09:33:48Z)
- Using AI Assistants in Software Development: A Qualitative Study on Security Practices and Concerns [23.867795468379743]
Recent research has demonstrated that AI-generated code can contain security issues.
How software professionals balance AI assistant usage and security remains unclear.
This paper investigates how software professionals use AI assistants in secure software development.
arXiv Detail & Related papers (2024-05-10T10:13:19Z)
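The finding that AI-generated code can contain security issues is easy to make concrete. Below is a minimal, hypothetical sketch (not drawn from the study; the table and names are invented) contrasting a string-built SQL query of the kind assistants sometimes suggest with its parameterized fix:

```python
import sqlite3

def find_user_unsafe(conn, username):
    # UNSAFE: attacker-controlled 'username' is spliced into the SQL text.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # SAFE: a parameterized query keeps data separate from SQL code.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"                # classic injection payload
print(find_user_unsafe(conn, payload))  # leaks every row
print(find_user_safe(conn, payload))    # returns []
```

- Artificial Intelligence as the New Hacker: Developing Agents for Offensive Security [0.0]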
This paper explores the integration of Artificial Intelligence (AI) into offensive cybersecurity.
It develops an autonomous AI agent, ReaperAI, designed to simulate and execute cyberattacks.
ReaperAI demonstrates the potential to identify, exploit, and analyze security vulnerabilities autonomously.
arXiv Detail & Related papers (2024-05-09T18:15:12Z)
- AI Safety: Necessary, but insufficient and possibly problematic [1.6797508081737678]
This article critically examines the recent hype around AI safety.
We consider what 'AI safety' actually means, and outline the dominant concepts that the digital footprint of AI safety aligns with.
We share our concerns about how AI safety may normalize AI that advances structural harm by lending exploitative and harmful AI a veneer of safety.
arXiv Detail & Related papers (2024-03-26T06:18:42Z)
- A Red Teaming Framework for Securing AI in Maritime Autonomous Systems [0.0]
We propose one of the first red team frameworks for evaluating the AI security of maritime autonomous systems.
This framework is a multi-part checklist, which can be tailored to different systems and requirements.
We demonstrate that the framework is highly effective for a red team, uncovering numerous vulnerabilities in the AI of a real-world maritime autonomous system.
arXiv Detail & Related papers (2023-12-08T14:59:07Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data; utilizing it effectively is beyond human capability.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
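To make the idea concrete, here is a hypothetical sketch (invented labels and scores, not from the paper) of an explicit, user-facing confidence readout attached to an AI recommendation:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()           # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

labels = ["approve", "flag for review", "reject"]
logits = np.array([2.1, 0.3, -1.2])     # raw model scores (made up)
probs = softmax(logits)

best = int(probs.argmax())
print(f"Recommendation: {labels[best]} (confidence {probs[best]:.0%})")
# A crude text visualization of per-option confidence:
for label, p in zip(labels, probs):
    print(f"  {label:16s} {'#' * int(p * 20):20s} {p:.2f}")
```

- Trustworthy AI: A Computational Perspective [54.80482955088197]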
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI to organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.