Explainability for Embedding AI: Aspirations and Actuality
- URL: http://arxiv.org/abs/2504.14631v1
- Date: Sun, 20 Apr 2025 14:20:01 GMT
- Title: Explainability for Embedding AI: Aspirations and Actuality
- Authors: Thomas Weber
- Abstract summary: Explainable AI (XAI) may allow developers to better understand the systems they build. Existing XAI systems still fall short of this aspiration. We see an unmet need to provide developers with adequate support mechanisms to cope with this complexity.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With artificial intelligence (AI) embedded in many everyday software systems, effectively and reliably developing and maintaining AI systems becomes an essential skill for software developers. However, the complexity inherent to AI poses new challenges. Explainable AI (XAI) may allow developers to better understand the systems they build, which, in turn, can help with tasks like debugging. In this paper, we report insights from a series of surveys with software developers that highlight that there is indeed an increased need for explanatory tools to support developers in creating AI systems. However, the feedback also indicates that existing XAI systems still fall short of this aspiration. Thus, we see an unmet need to provide developers with adequate support mechanisms to cope with this complexity so they can embed AI into high-quality software in the future.
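To make the abstract's point concrete, a minimal sketch of one common XAI technique, permutation feature importance, is shown below. The model, data, and function names are illustrative assumptions, not from the paper: a toy model depends only on its first feature, and shuffling each feature in turn reveals which one the model actually uses, the kind of signal a developer could use while debugging.

```python
# Illustrative sketch (not from the paper): permutation feature importance,
# a simple model-agnostic explanation method. All names here are hypothetical.
import random


def predict(row):
    # Toy "model": only feature 0 actually influences the output.
    return 2.0 * row[0] + 0.0 * row[1]


def mse(rows, targets):
    # Mean squared error of the model on the given data.
    return sum((predict(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)


def permutation_importance(rows, targets, feature, trials=10, seed=0):
    # Importance = average increase in error after shuffling one feature.
    rng = random.Random(seed)
    base = mse(rows, targets)
    increases = []
    for _ in range(trials):
        shuffled = [list(r) for r in rows]
        col = [r[feature] for r in shuffled]
        rng.shuffle(col)
        for r, v in zip(shuffled, col):
            r[feature] = v
        increases.append(mse(shuffled, targets) - base)
    return sum(increases) / trials


rows = [(float(i), float(i % 3)) for i in range(20)]
targets = [2.0 * x for x, _ in rows]

# Shuffling feature 0 degrades the model; shuffling feature 1 does nothing,
# explaining to the developer which input the model relies on.
print(permutation_importance(rows, targets, 0))
print(permutation_importance(rows, targets, 1))
```

Libraries such as scikit-learn provide production versions of this idea; the point of the sketch is only that even a simple attribution score gives developers a debugging handle on an otherwise opaque model.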
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that these shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Cognition is All You Need -- The Next Layer of AI Above Large Language Models [0.0]
We present Cognitive AI, a framework for neurosymbolic cognition outside of large language models.
We propose that Cognitive AI is a necessary precursor for the evolution of the forms of AI, such as AGI, and specifically claim that AGI cannot be achieved by probabilistic approaches on their own.
We conclude with a discussion of the implications for large language models, adoption cycles in AI, and commercial Cognitive AI development.
arXiv Detail & Related papers (2024-03-04T16:11:57Z) - Exploring the intersection of Generative AI and Software Development [0.0]
The synergy between generative AI and Software Engineering emerges as a transformative frontier.
This whitepaper delves into the unexplored realm, elucidating how generative AI techniques can revolutionize software development.
It serves as a guide for stakeholders, urging discussions and experiments in the application of generative AI in Software Engineering.
arXiv Detail & Related papers (2023-12-21T19:23:23Z) - End-User Development for Artificial Intelligence: A Systematic Literature Review [2.347942013388615]
End-User Development (EUD) can allow people to create, customize, or adapt AI-based systems to their own needs.
This paper presents a literature review that aims to shed light on the current landscape of EUD for AI systems.
arXiv Detail & Related papers (2023-04-14T09:57:36Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We introduce highlighted robustness challenges in the AI lifecycle and motivate AI maintenance by making analogies to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Structured access to AI capabilities: an emerging paradigm for safe AI deployment [0.0]
Instead of openly disseminating AI systems, developers facilitate controlled, arm's length interactions with their AI systems.
The aim is to prevent dangerous AI capabilities from being widely accessible, while preserving access to AI capabilities that can be used safely.
arXiv Detail & Related papers (2022-01-13T19:30:16Z) - A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Time for AI (Ethics) Maturity Model Is Now [15.870654219935972]
This paper argues that AI software is still software and needs to be approached from the software development perspective.
We wish to discuss whether the focus should be on AI ethics or, more broadly, the quality of an AI system.
arXiv Detail & Related papers (2021-01-29T17:37:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.