When to Trust AI: Advances and Challenges for Certification of Neural Networks
- URL: http://arxiv.org/abs/2309.11196v1
- Date: Wed, 20 Sep 2023 10:31:09 GMT
- Title: When to Trust AI: Advances and Challenges for Certification of Neural Networks
- Authors: Marta Kwiatkowska, Xiyue Zhang
- Abstract summary: Early adoption of AI technology for real-world applications has not been without problems.
This paper provides an overview of techniques that have been developed to ensure safety of AI decisions.
- Score: 26.890905486708117
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) has been advancing at a fast pace and it is now poised for deployment in a wide range of applications, such as autonomous systems, medical diagnosis and natural language processing. Early adoption of AI technology for real-world applications has not been without problems, particularly for neural networks, which may be unstable and susceptible to adversarial examples. In the longer term, appropriate safety assurance techniques need to be developed to reduce potential harm due to avoidable system failures and ensure trustworthiness. Focusing on certification and explainability, this paper provides an overview of techniques that have been developed to ensure safety of AI decisions and discusses future challenges.
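The instability to adversarial examples mentioned in the abstract is the core problem that certification targets. The following minimal sketch is not taken from the paper; it uses a hypothetical toy PyTorch classifier, random data, and an assumed perturbation budget to illustrate the issue with the standard fast gradient sign method (FGSM): a small, bounded change to the input can flip the model's prediction.

    # Illustrative sketch only: the toy model, random data, and epsilon are assumptions.
    # FGSM crafts a bounded perturbation that tries to change the classifier's
    # decision -- the kind of local instability certification aims to prove absent.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy classifier standing in for a network under certification.
    model = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))
    loss_fn = nn.CrossEntropyLoss()

    x = torch.randn(1, 4, requires_grad=True)  # clean input (assumed)
    y = torch.tensor([0])                      # assumed ground-truth label
    epsilon = 0.25                             # L-infinity perturbation budget

    # One gradient-sign step that increases the loss within the budget.
    loss = loss_fn(model(x), y)
    loss.backward()
    x_adv = (x + epsilon * x.grad.sign()).detach()

    clean_pred = model(x).argmax(dim=1).item()
    adv_pred = model(x_adv).argmax(dim=1).item()
    print(f"clean prediction: {clean_pred}, perturbed prediction: {adv_pred}")

A certification procedure would instead aim to prove that no perturbation within the epsilon ball can change the prediction, for example via bound propagation or constraint solving over the network.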
Related papers
- Establishing Minimum Elements for Effective Vulnerability Management in AI Software [4.067778725390327]
This paper discusses the minimum elements for AI vulnerability management and the establishment of an Artificial Intelligence Vulnerability Database (AIVD).
It presents standardized formats and protocols for disclosing, analyzing, cataloging, and documenting AI vulnerabilities.
arXiv Detail & Related papers (2024-11-18T06:22:20Z)
- A Trilogy of AI Safety Frameworks: Paths from Facts and Knowledge Gaps to Reliable Predictions and New Knowledge [0.0]
AI Safety has become a vital front-line concern of many scientists within and outside the AI community.
There are many anticipated immediate and long-term risks, ranging from existential threats to humanity to deepfakes and bias in machine learning systems.
arXiv Detail & Related papers (2024-10-09T14:43:06Z)
- Trustworthy, Responsible, and Safe AI: A Comprehensive Architectural Framework for AI Safety with Challenges and Mitigations [14.150792596344674]
AI Safety is an emerging area of critical importance to the safe adoption and deployment of AI systems.
Our goal is to promote advancement in AI safety research, and ultimately enhance people's trust in digital transformation.
arXiv Detail & Related papers (2024-08-23T09:33:48Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of the framework's core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Causal Reasoning: Charting a Revolutionary Course for Next-Generation AI-Native Wireless Networks [63.246437631458356]
Next-generation wireless networks (e.g., 6G) will be artificial intelligence (AI)-native.
This article introduces a novel framework for building AI-native wireless networks, grounded in the emerging field of causal reasoning.
We highlight several wireless networking challenges that can be addressed by causal discovery and representation.
arXiv Detail & Related papers (2023-09-23T00:05:39Z)
- A Survey on Explainable Artificial Intelligence for Cybersecurity [14.648580959079787]
Explainable Artificial Intelligence (XAI) aims to create machine learning models that can provide clear and interpretable explanations for their decisions and actions.
In the field of network cybersecurity, XAI has the potential to revolutionize the way we approach network security by enabling us to better understand the behavior of cyber threats.
arXiv Detail & Related papers (2023-03-07T22:54:18Z)
- AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges in the AI lifecycle and motivate AI maintenance by analogy to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z)
- Proceedings of the Artificial Intelligence for Cyber Security (AICS) Workshop at AAAI 2022 [55.573187938617636]
The workshop will focus on the application of AI to problems in cyber security.
Cyber systems generate large volumes of data, and utilizing it effectively is beyond human capabilities.
arXiv Detail & Related papers (2022-02-28T18:27:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.