Explainable artificial intelligence approaches for brain-computer
interfaces: a review and design space
- URL: http://arxiv.org/abs/2312.13033v1
- Date: Wed, 20 Dec 2023 13:56:31 GMT
- Title: Explainable artificial intelligence approaches for brain-computer
interfaces: a review and design space
- Authors: Param Rajpura, Hubert Cecotti, Yogesh Kumar Meena
- Abstract summary: This review paper provides an integrated perspective on Explainable Artificial Intelligence (XAI) techniques applied to Brain-Computer Interfaces (BCIs).
Brain-Computer Interfaces use predictive models to interpret brain signals for various high-stakes applications.
The XAI for BCI literature currently lacks an integrated perspective.
- Score: 6.786321327136925
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This review paper provides an integrated perspective on Explainable
Artificial Intelligence techniques applied to Brain-Computer Interfaces. BCIs
use predictive models to interpret brain signals for various high-stakes
applications. However, achieving explainability in these complex models is
challenging, as it often comes at the cost of accuracy. The field of XAI has
emerged to address
the need for explainability across various stakeholders, but there is a lack of
an integrated perspective in XAI for BCI (XAI4BCI) literature. It is necessary
to differentiate key concepts like explainability, interpretability, and
understanding in this context and formulate a comprehensive framework. To
understand the need for XAI in BCI, we pose six key research questions for a
systematic review and meta-analysis, encompassing its purposes, applications,
usability, and technical feasibility. We employ the PRISMA (Preferred
Reporting Items for Systematic Reviews and Meta-Analyses) methodology to
review (n=1246) and analyze (n=84) studies published from 2015 onwards for key
insights. The results highlight that current research primarily focuses on
interpretability for developers and researchers, aiming to justify outcomes and
enhance model performance. We discuss the unique approaches, advantages, and
limitations of XAI4BCI from the literature. We draw insights from philosophy,
psychology, and social sciences. We propose a design space for XAI4BCI,
considering the evolving need to visualize and investigate predictive model
outcomes customised for various stakeholders in the BCI development and
deployment lifecycle. This paper is the first to focus solely on reviewing
XAI4BCI research articles. The findings of this systematic review and
meta-analysis, together with the proposed design space, prompt important
discussions on establishing
standards for BCI explanations, highlighting current limitations, and guiding
the future of XAI in BCI.
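As a concrete illustration of the developer-facing interpretability pattern the review finds most common, the sketch below applies a model-agnostic, post-hoc explanation (permutation importance) to a classifier trained on simulated EEG band-power features. It is a minimal sketch under assumed inputs: the channel names, feature layout, and data are hypothetical and not taken from the paper.

```python
# Hedged sketch: post-hoc, model-agnostic explanation of a toy BCI classifier.
# All data and channel/feature names below are hypothetical placeholders.
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical band-power features: 8 EEG channels x 2 frequency bands.
feature_names = [f"{ch}_{band}"
                 for ch in ["C3", "C4", "Cz", "FC1", "FC2", "CP1", "CP2", "Pz"]
                 for band in ["alpha", "beta"]]
X = rng.normal(size=(400, len(feature_names)))
# Simulated motor-imagery-like labels driven mainly by C3/C4 alpha power.
y = (X[:, 0] - X[:, 2] + 0.1 * rng.normal(size=400) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

# Permutation importance scores each feature by how much shuffling it
# degrades held-out accuracy: a post-hoc, model-agnostic explanation.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{feature_names[idx]:>10}: {result.importances_mean[idx]:+.3f}")
```

Ranking the accuracy drops surfaces which (simulated) channels the model relies on, which is the kind of outcome a developer or researcher would inspect to justify predictions or debug a BCI model.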
Related papers
- User-centric evaluation of explainability of AI with and for humans: a comprehensive empirical study [5.775094401949666]
This study is situated in the field of Human-Centered Artificial Intelligence (HCAI).
It focuses on the results of a user-centered assessment of commonly used eXplainable Artificial Intelligence (XAI) algorithms.
arXiv Detail & Related papers (2024-10-21T12:32:39Z) - Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of the data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z) - XEdgeAI: A Human-centered Industrial Inspection Framework with Data-centric Explainable Edge AI Approach [2.0209172586699173]
This paper introduces a novel XAI-integrated Visual Quality Inspection framework.
Our framework incorporates XAI and the Large Vision Language Model to deliver human-centered interpretability.
This approach paves the way for the broader adoption of reliable and interpretable AI tools in critical industrial applications.
arXiv Detail & Related papers (2024-07-16T14:30:24Z) - How Human-Centered Explainable AI Interface Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z) - Investigating Fairness Disparities in Peer Review: A Language Model
Enhanced Approach [77.61131357420201]
We conduct a thorough and rigorous study on fairness disparities in peer review with the help of large language models (LMs).
We collect, assemble, and maintain a comprehensive relational database for the International Conference on Learning Representations (ICLR) conference from 2017 to date.
We postulate and study fairness disparities across multiple protective attributes of interest, including author gender, geography, and author and institutional prestige.
arXiv Detail & Related papers (2022-11-07T16:19:42Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
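As a simplified illustration only (not the CEILS method itself, which generates counterfactuals as interventions in a latent space so that the suggested changes stay feasible), the sketch below computes a minimum-change counterfactual for a linear classifier on toy data:

```python
# Hedged sketch: a generic minimum-change counterfactual for a linear model.
# Toy data and features; this is not the CEILS algorithm from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X @ np.array([2.0, -1.0, 0.5]) > 0).astype(int)
clf = LogisticRegression().fit(X, y)

x = X[0]
w, b = clf.coef_[0], clf.intercept_[0]

# For a linear model, the smallest move that flips the decision is the
# projection onto the hyperplane w.x + b = 0, nudged past it by eps.
eps = 1e-3
dist = (w @ x + b) / np.linalg.norm(w)  # signed distance to the boundary
x_cf = x - (dist + np.sign(dist) * eps) * w / np.linalg.norm(w)

print("original prediction:      ", clf.predict(x.reshape(1, -1))[0])
print("counterfactual prediction:", clf.predict(x_cf.reshape(1, -1))[0])
print("suggested feature changes:", x_cf - x)
```

The printed feature deltas are the "what to change" answer a counterfactual explanation gives an end user; CEILS additionally constrains such deltas to actions that are realistically achievable.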
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - AR-LSAT: Investigating Analytical Reasoning of Text [57.1542673852013]
We study the challenge of analytical reasoning of text and introduce a new dataset consisting of questions from the Law School Admission Test from 1991 to 2016.
We analyze what knowledge, understanding, and reasoning abilities are required to do well on this task.
arXiv Detail & Related papers (2021-04-14T02:53:32Z) - What Do We Want From Explainable Artificial Intelligence (XAI)? -- A
Stakeholder Perspective on XAI and a Conceptual Model Guiding
Interdisciplinary XAI Research [0.8707090176854576]
The main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems.
It often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata.
arXiv Detail & Related papers (2021-02-15T19:54:33Z) - Why model why? Assessing the strengths and limitations of LIME [0.0]
This paper examines the effectiveness of the Local Interpretable Model-Agnostic Explanations (LIME) XAI framework.
LIME is one of the most popular model agnostic frameworks found in the literature.
We show how LIME can be used to supplement conventional performance assessment methods.
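For orientation, here is a minimal usage sketch of the LIME workflow the paper assesses, using the `lime` Python package; the dataset and model below are arbitrary illustrations, not choices made in the paper:

```python
# Hedged sketch: local explanation of one prediction with the lime package
# (pip install lime); dataset and model are arbitrary stand-ins.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_iris()
X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target,
                                          random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

explainer = LimeTabularExplainer(
    X_tr,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs the instance, queries the black-box model, and fits a
# local linear surrogate; as_list() reports the top local feature weights.
explanation = explainer.explain_instance(X_te[0], clf.predict_proba,
                                         num_features=4)
print(explanation.as_list())
```

Comparing these local weights against held-out accuracy is one way LIME can supplement conventional performance assessment, as the paper argues.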
arXiv Detail & Related papers (2020-11-30T21:08:07Z) - Should We Trust (X)AI? Design Dimensions for Structured Experimental
Evaluations [19.68184991543289]
This paper systematically derives design dimensions for the structured evaluation of explainable artificial intelligence (XAI) approaches.
They enable a descriptive characterization, facilitating comparisons between different study designs.
They further structure the design space of XAI, converging towards a precise terminology required for a rigorous study of XAI.
arXiv Detail & Related papers (2020-09-14T13:40:51Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.