A Context-Sensitive Approach to XAI in Music Performance
- URL: http://arxiv.org/abs/2309.04491v1
- Date: Tue, 5 Sep 2023 17:43:48 GMT
- Title: A Context-Sensitive Approach to XAI in Music Performance
- Authors: Nicola Privato and Jack Armitage
- Abstract summary: We propose an Explanatory Pragmatism (EP) framework for XAI in music performance.
EP offers a promising direction for enhancing the transparency and interpretability of AI systems in broad artistic applications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapidly evolving field of Explainable Artificial Intelligence (XAI) has
generated significant interest in developing methods to make AI systems more
transparent and understandable. However, the problem of explainability cannot
be exhaustively solved in the abstract, as there is no single approach that can
be universally applied to generate adequate explanations for any given AI
system, and this is especially true in the arts. In this position paper, we
propose an Explanatory Pragmatism (EP) framework for XAI in music performance,
emphasising the importance of context and audience in the development of
explainability requirements. By tailoring explanations to specific audiences
and continuously refining them based on feedback, EP offers a promising
direction for enhancing the transparency and interpretability of AI systems in
broad artistic applications and, more specifically, to music performance.
Related papers
- Beyond Technocratic XAI: The Who, What & How in Explanation Design [35.987280553106565]
In practice, generating meaningful explanations is a context-dependent task.
This paper reframes explanation as a situated design process.
We propose a three-part framework for explanation design in XAI.
arXiv Detail & Related papers (2025-08-12T08:17:26Z) - From Explainable to Explanatory Artificial Intelligence: Toward a New Paradigm for Human-Centered Explanations through Generative AI [0.0]
"Explanatory AI" is a paradigm that leverages generative AI capabilities to serve as explanatory partners for human understanding.
We develop a conceptual model distinguishing Explanatory AI through narrative communication, adaptive personalization, and progressive disclosure principles.
Our findings reveal the practical urgency for AI systems designed for human comprehension rather than algorithmic introspection.
arXiv Detail & Related papers (2025-08-08T14:32:41Z) - ClarifAI: Enhancing AI Interpretability and Transparency through Case-Based Reasoning and Ontology-Driven Approach for Improved Decision-Making [0.0]
ClarifAI is a novel approach to augmenting the transparency and interpretability of artificial intelligence (AI).
The paper elaborates on ClarifAI's theoretical foundations, combining case-based reasoning (CBR) and ontologies to furnish exhaustive explanations.
It further elaborates on the design principles and architectural blueprint, highlighting ClarifAI's potential to enhance AI interpretability.
arXiv Detail & Related papers (2025-07-15T21:02:28Z) - Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholder with LLMs [11.11196150521188]
This paper addresses how trust in AI is influenced by the design and delivery of explanations.
The framework consists of three layers: algorithmic and domain-based, human-centered, and social explainability.
arXiv Detail & Related papers (2025-06-06T08:54:41Z) - Cutting Through the Confusion and Hype: Understanding the True Potential of Generative AI [0.0]
This paper explores the nuanced landscape of generative AI (genAI).
It focuses on neural network-based models like Large Language Models (LLMs).
arXiv Detail & Related papers (2024-10-22T02:18:44Z) - Applications of Explainable artificial intelligence in Earth system science [12.454478986296152]
This review aims to provide a foundational understanding of explainable AI (XAI).
XAI offers a set of powerful tools that make the models more transparent.
We identify four significant challenges that XAI faces within the Earth system science (ESS).
A visionary outlook for ESS envisions a harmonious blend where process-based models govern the known, AI models explore the unknown, and XAI bridges the gap by providing explanations.
arXiv Detail & Related papers (2024-06-12T15:05:29Z) - Emergent Explainability: Adding a causal chain to neural network inference [0.0]
This position paper presents a theoretical framework for enhancing explainable artificial intelligence (xAI) through emergent communication (EmCom).
We explore the novel integration of EmCom into AI systems, offering a paradigm shift from conventional associative relationships between inputs and outputs to a more nuanced, causal interpretation.
The paper discusses the theoretical underpinnings of this approach, its potential broad applications, and its alignment with the growing need for responsible and transparent AI systems.
arXiv Detail & Related papers (2024-01-29T02:28:39Z) - Beyond XAI: Obstacles Towards Responsible AI [0.0]
Methods of explainability and their evaluation strategies present numerous limitations in real-world contexts.
In this paper, we explore these limitations and discuss their implications in the broader context of responsible AI.
arXiv Detail & Related papers (2023-09-07T11:08:14Z) - Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Knowledge Graph Augmented Network Towards Multiview Representation
Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) to incorporate external knowledge with explicitly syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z) - Making Things Explainable vs Explaining: Requirements and Challenges
under the GDPR [2.578242050187029]
ExplanatorY AI (YAI) builds over XAI with the goal to collect and organize explainable information.
We represent the problem of generating explanations for Automated Decision-Making systems (ADMs) into the identification of an appropriate path over an explanatory space.
arXiv Detail & Related papers (2021-10-02T08:48:47Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide to end users a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.