A Case for Competent AI Systems - A Concept Note
- URL: http://arxiv.org/abs/2312.00052v2
- Date: Thu, 7 Dec 2023 08:12:54 GMT
- Title: A Case for Competent AI Systems - A Concept Note
- Authors: Kamalakar Karlapalem
- Abstract summary: This note explores the concept of capability within AI systems, representing what the system is expected to deliver.
The achievement of this capability may be hindered by deficiencies in implementation and testing.
A central challenge arises in elucidating the competency of an AI system to execute tasks effectively.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The efficiency of an AI system is contingent upon its ability to align with
the specified requirements of a given task. However, the inherent complexity
of tasks often introduces the potential for harmful implications or adverse
actions. This note explores the critical concept of capability within AI
systems, representing what the system is expected to deliver. The articulation
of capability involves specifying well-defined outcomes. Yet, the achievement
of this capability may be hindered by deficiencies in implementation and
testing, reflecting a gap in the system's competency (what it can do vs. what
it does successfully).
A central challenge arises in elucidating the competency of an AI system to
execute tasks effectively. The exploration of system competency in AI remains
in its early stages, occasionally manifesting as confidence intervals denoting
the probability of success. Trust in an AI system hinges on the explicit
modeling and detailed specification of its competency, connected intricately to
the system's capability. This note explores this gap by proposing a framework
for articulating the competency of AI systems.
Motivated by practical scenarios such as the Glass Door problem, where an
individual inadvertently encounters a glass obstacle due to a failure in their
competency, this research underscores the imperative of delving into competency
dynamics. Bridging the gap between capability and competency at a detailed
level, this note contributes to advancing the discourse on bolstering the
reliability of AI systems in real-world applications.
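The abstract notes that system competency is sometimes expressed as confidence intervals denoting the probability of success. As a minimal illustrative sketch (not taken from the paper itself), a Wilson score interval over observed task outcomes is one standard way such a competency statement could be computed; the function name and the 85/100 trial figures below are hypothetical:

```python
import math

def wilson_interval(successes: int, trials: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a task-success probability.

    z = 1.96 corresponds to a 95% confidence level.
    """
    if trials <= 0:
        raise ValueError("trials must be positive")
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2))
    # Clamp to [0, 1] so the interval stays a valid probability range.
    return max(0.0, center - margin), min(1.0, center + margin)

# Hypothetical evaluation: the system succeeded on 85 of 100 trial tasks.
low, high = wilson_interval(85, 100)
```

A statement like "success probability in [low, high] with 95% confidence" is one concrete form the competency specification discussed above could take, though the note argues that competency modeling should go beyond such aggregate figures.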
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Trust or Bust: Ensuring Trustworthiness in Autonomous Weapon Systems [0.0]
This paper explores the multifaceted nature of trust in Autonomous Weapon Systems (AWS).
It highlights the necessity of establishing reliable and transparent systems to mitigate risks associated with bias, operational failures, and accountability.
It advocates for a collaborative approach that includes technologists, ethicists, and military strategists to address these ongoing challenges.
arXiv Detail & Related papers (2024-10-14T08:36:06Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R and we discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Proceedings of the Robust Artificial Intelligence System Assurance (RAISA) Workshop 2022 [0.0]
The RAISA workshop will focus on research, development and application of robust artificial intelligence (AI) and machine learning (ML) systems.
Rather than studying robustness with respect to particular ML algorithms, our approach will be to explore robustness assurance at the system architecture level.
arXiv Detail & Related papers (2022-02-10T01:15:50Z)
- Scope and Sense of Explainability for AI-Systems [0.0]
Emphasis will be given to difficulties related to the explainability of highly complex and efficient AI systems.
It elaborates on arguments supporting the notion that if AI solutions were discarded in advance for not being thoroughly comprehensible, a great deal of the potential of intelligent systems would be wasted.
arXiv Detail & Related papers (2021-12-20T14:25:05Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Estimating the Brittleness of AI: Safety Integrity Levels and the Need for Testing Out-Of-Distribution Performance [0.0]
Test, Evaluation, Verification, and Validation for Artificial Intelligence (AI) is a challenge that threatens to limit the economic and societal rewards that AI researchers have devoted themselves to producing.
This paper argues that neither of those criteria is assured for Deep Neural Networks.
arXiv Detail & Related papers (2020-09-02T03:33:40Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of necessary information to assess their portability to operate in a system different than the one for which the component was originally purposed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.