Explaining AI Without Code: A User Study on Explainable AI
- URL: http://arxiv.org/abs/2602.11159v1
- Date: Sun, 28 Dec 2025 15:44:43 GMT
- Title: Explaining AI Without Code: A User Study on Explainable AI
- Authors: Natalia Abarca, Andrés Carvallo, Claudia López Moncada, Felipe Bravo-Marquez
- Abstract summary: We present a human-centered XAI module in DashAI, an open-source no-code ML platform. A user study evaluated usability and the impact of explanations on novices and experts.
- Score: 1.7966001353008778
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing use of Machine Learning (ML) in sensitive domains such as healthcare, finance, and public policy has raised concerns about the transparency of automated decisions. Explainable AI (XAI) addresses this by clarifying how models generate predictions, yet most methods demand technical expertise, limiting their value for novices. This gap is especially critical in no-code ML platforms, which seek to democratize AI but rarely include explainability. We present a human-centered XAI module in DashAI, an open-source no-code ML platform. The module integrates three complementary techniques, namely Partial Dependence Plots (PDP), Permutation Feature Importance (PFI), and KernelSHAP, into DashAI's workflow for tabular classification. A user study (N = 20; ML novices and experts) evaluated usability and the impact of explanations. Results show: (i) high task success ($\geq 80\%$) across all explainability tasks; (ii) novices rated explanations as useful, accurate, and trustworthy on the Explanation Satisfaction Scale (ESS, Cronbach's $\alpha = 0.74$, a measure of internal consistency), while experts were more critical of sufficiency and completeness; and (iii) explanations improved perceived predictability and confidence on the Trust in Automation scale (TiA, $\alpha = 0.60$), with novices showing higher trust than experts. These findings highlight a central challenge for XAI in no-code ML: making explanations both accessible to novices and sufficiently detailed for experts.
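As a rough illustration of how these three techniques fit together, the sketch below applies PDP, PFI, and KernelSHAP to a toy tabular classification task using scikit-learn and the shap package. The dataset, model, and variable names are illustrative assumptions, not DashAI's actual implementation.

```python
# Minimal sketch: the three explainability techniques from the abstract,
# applied to a toy tabular classifier. Assumes scikit-learn and shap;
# dataset and model choices are illustrative, not DashAI's own.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import PartialDependenceDisplay, permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Partial Dependence Plot (PDP): marginal effect of a feature on predictions.
PartialDependenceDisplay.from_estimator(model, X_test, features=["mean radius"])

# Permutation Feature Importance (PFI): score drop when a feature is shuffled.
pfi = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(sorted(zip(X.columns, pfi.importances_mean), key=lambda t: -t[1])[:5])

# KernelSHAP: local, model-agnostic attributions for a single prediction,
# estimated against a small background sample for tractability.
background = shap.sample(X_train, 50, random_state=0)
explainer = shap.KernelExplainer(model.predict_proba, background)
print(explainer.shap_values(X_test.iloc[:1]))
```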
Related papers
- Beyond single-model XAI: aggregating multi-model explanations for enhanced trustworthiness [43.25173443756643]
This paper investigates the role of robustness through the use of a feature importance aggregation derived from multiple models. Preliminary results showcase the potential to increase the trustworthiness of the application while leveraging multiple models' predictive power.
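The aggregation idea can be sketched in a few lines: fit several heterogeneous models, compute permutation feature importance for each, and average the results. The model choices and the plain mean below are assumptions for illustration, not the paper's exact aggregation scheme.

```python
# Hypothetical sketch: average permutation importances across several models
# to obtain a more robust feature ranking. The simple mean is an assumption.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_wine(return_X_y=True, as_frame=True)
models = [
    RandomForestClassifier(random_state=0),
    GradientBoostingClassifier(random_state=0),
    make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
]
# One row of mean importances per model, then average across models.
per_model = np.stack([
    permutation_importance(m.fit(X, y), X, y,
                           n_repeats=10, random_state=0).importances_mean
    for m in models
])
aggregated = per_model.mean(axis=0)
print(sorted(zip(X.columns, aggregated), key=lambda t: -t[1])[:5])
```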
arXiv Detail & Related papers (2025-10-13T08:55:45Z) - How can we trust opaque systems? Criteria for robust explanations in XAI [0.0]
Deep learning (DL) algorithms are becoming ubiquitous in everyday life and in scientific research. It is unknown to laypeople and researchers alike what features of the data a DL system focuses on and how it ultimately succeeds in predicting correct outputs. A necessary criterion for trustworthy explanations is that they should reflect the relevant processes the algorithms' predictions are based on.
arXiv Detail & Related papers (2025-08-18T04:38:55Z) - General Scales Unlock AI Evaluation with Explanatory and Predictive Power [57.7995945974989]
Benchmarking has guided progress in AI, but it has offered limited explanatory and predictive power for general-purpose AI systems. We introduce general scales for AI evaluation that can explain what common AI benchmarks really measure. Our fully automated methodology builds on 18 newly crafted rubrics that place instance demands on general scales that do not saturate.
arXiv Detail & Related papers (2025-03-09T01:13:56Z) - Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting [43.110187812734864]
We evaluate three types of explanations: visual explanations (saliency maps), natural language explanations, and a combination of both modalities.
We find that text-based explanations lead to significant over-reliance, which is alleviated by combining them with saliency maps.
We also observe that explanation quality, that is, how much factually correct information an explanation entails and how well this aligns with AI correctness, significantly impacts the usefulness of the different explanation types.
arXiv Detail & Related papers (2024-10-16T06:43:02Z) - Are Objective Explanatory Evaluation metrics Trustworthy? An Adversarial Analysis [12.921307214813357]
The paper proposes a novel explanatory technique called SHifted Adversaries using Pixel Elimination (SHAPE).
We show that SHAPE is, in fact, an adversarial explanation that fools the causal metrics employed to measure the robustness and reliability of popular importance-based visual XAI methods.
arXiv Detail & Related papers (2024-06-12T02:39:46Z) - Explainable Authorship Identification in Cultural Heritage Applications: Analysis of a New Perspective [48.031678295495574]
We explore the applicability of existing general-purpose eXplainable Artificial Intelligence (XAI) techniques to Authorship Identification (AId).
In particular, we assess the relative merits of three different types of XAI techniques on three different AId tasks.
Our analysis shows that, while these techniques make important first steps towards explainable Authorship Identification, more work remains to be done.
arXiv Detail & Related papers (2023-11-03T20:51:15Z) - Explainable, Interpretable & Trustworthy AI for Intelligent Digital Twin: Case Study on Remaining Useful Life [0.5115559623386964]
It is critical to have confidence in AI's trustworthiness in energy and engineering systems.
The use of explainable AI (XAI) and interpretable machine learning (IML) is crucial for accurate prognostic prediction.
arXiv Detail & Related papers (2023-01-17T03:17:07Z) - TRUST XAI: Model-Agnostic Explanations for AI With a Case Study on IIoT Security [0.0]
We propose a universal XAI model named Transparency Relying Upon Statistical Theory (TRUST).
We show how TRUST XAI provides explanations for new random samples with an average success rate of 98%.
Finally, we show how TRUST is explained to the user.
arXiv Detail & Related papers (2022-05-02T21:44:27Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
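To make the underlying notion concrete, the sketch below runs a naive random search for the smallest perturbation that flips a classifier's prediction. This is a stand-in for the general counterfactual idea only; CEILS itself intervenes in a latent space that captures causal relations, which this toy search does not attempt.

```python
# Naive counterfactual search: find a small perturbation that flips the label.
# All names and the random-search strategy are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]                                       # instance to explain
target = 1 - clf.predict(x.reshape(1, -1))[0]  # desired (flipped) outcome

# Sample random perturbations and keep the smallest one that flips the label.
rng = np.random.default_rng(0)
deltas = rng.normal(scale=0.5, size=(5000, x.size))
candidates = x + deltas
flipped = clf.predict(candidates) == target

if flipped.any():
    dists = np.linalg.norm(deltas[flipped], axis=1)
    best = candidates[flipped][np.argmin(dists)]
    top = np.argsort(-np.abs(best - x))[:3]
    print(f"flip found at L2 distance {dists.min():.2f}; "
          f"most-changed feature indices: {top}")
```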
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - Deceptive AI Explanations: Creation and Detection [3.197020142231916]
We investigate how AI models can be used to create and detect deceptive explanations.
As an empirical evaluation, we focus on text classification and alter the explanations generated by GradCAM.
We evaluate the effect of deceptive explanations on users in an experiment with 200 participants.
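The core manipulation can be illustrated without a trained model: take an honest attribution vector and shift mass from the genuinely influential token onto an innocuous one, leaving the prediction untouched. The tokens, scores, and shifting rule below are hypothetical; the paper performs the alteration on GradCAM attributions for a text classifier.

```python
# Hypothetical sketch of a deceptive explanation: hide the influence of one
# token by redistributing its attribution onto a decoy token.
import numpy as np

tokens = ["terrible", "plot", "but", "great", "acting"]
attribution = np.array([0.55, 0.05, 0.02, 0.30, 0.08])  # honest explanation

def deceive(attr: np.ndarray, hide: int, decoy: int,
            strength: float = 0.8) -> np.ndarray:
    """Move most of the attribution from `hide` onto `decoy`, renormalize."""
    out = attr.copy()
    moved = strength * out[hide]
    out[hide] -= moved
    out[decoy] += moved
    return out / out.sum()

deceptive = deceive(attribution, hide=0, decoy=2)
for tok, honest, faked in zip(tokens, attribution, deceptive):
    print(f"{tok:>10s}  honest={honest:.2f}  deceptive={faked:.2f}")
```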
arXiv Detail & Related papers (2020-01-21T16:41:22Z)