Preliminary Quantitative Study on Explainability and Trust in AI Systems
- URL: http://arxiv.org/abs/2510.15769v1
- Date: Fri, 17 Oct 2025 15:59:28 GMT
- Title: Preliminary Quantitative Study on Explainability and Trust in AI Systems
- Authors: Allen Daniel Sunny
- Abstract summary: Large-scale AI models such as GPT-4 have accelerated the deployment of artificial intelligence across critical domains including law, healthcare, and finance. This study investigates the relationship between explainability and user trust in AI systems through a quantitative experimental design.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large-scale AI models such as GPT-4 have accelerated the deployment of artificial intelligence across critical domains including law, healthcare, and finance, raising urgent questions about trust and transparency. This study investigates the relationship between explainability and user trust in AI systems through a quantitative experimental design. Using an interactive, web-based loan approval simulation, we compare how different types of explanations, ranging from basic feature importance to interactive counterfactuals, influence perceived trust. Results suggest that interactivity enhances both user engagement and confidence, and that the clarity and relevance of explanations are key determinants of trust. These findings contribute empirical evidence to the growing field of human-centered explainable AI, highlighting measurable effects of explainability design on user perception.
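The study contrasts basic feature-importance explanations with interactive counterfactuals in a loan approval setting. The sketch below illustrates, in minimal form, what those two explanation styles can look like for a toy loan model; the synthetic data, feature names, logistic-regression model, and greedy counterfactual search are illustrative assumptions, not the study's actual implementation.

```python
# Minimal sketch contrasting a static feature-importance explanation with a
# simple counterfactual ("what would need to change for approval?").
# Everything here (features, data, search procedure) is illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "debt_ratio", "credit_history_years"]

# Synthetic applicants: approval loosely driven by income and credit history,
# hurt by a high debt ratio.
X = rng.normal(size=(500, 3))
y = (X[:, 0] - 0.8 * X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)
model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.4, 0.9, 0.2]])  # a rejected applicant

# 1) Feature-importance explanation: per-feature contribution to the logit.
contributions = model.coef_[0] * applicant[0]
for name, contrib in sorted(zip(features, contributions),
                            key=lambda t: abs(t[1]), reverse=True):
    print(f"{name:>22}: {contrib:+.3f}")

# 2) Naive counterfactual: nudge the most influential feature toward approval
#    until the model's decision flips.
def simple_counterfactual(x, target=1, step=0.05, max_iter=400):
    x = x.copy()
    j = np.argmax(np.abs(model.coef_[0]))   # most influential feature
    direction = np.sign(model.coef_[0][j])  # direction that raises the logit
    for _ in range(max_iter):
        if model.predict(x)[0] == target:
            return x
        x[0, j] += step * direction
    return None  # no counterfactual found within the search budget

cf = simple_counterfactual(applicant)
if cf is not None:
    deltas = dict(zip(features, np.round((cf - applicant)[0], 3)))
    print("Changes needed for approval:", deltas)
```

In the study's terms, the printed contributions correspond to a static feature-importance explanation, while the returned deltas answer the interactive counterfactual question of what minimal change would flip the decision to approval.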
Related papers
- When Models Know More Than They Can Explain: Quantifying Knowledge Transfer in Human-AI Collaboration [79.69935257008467]
We introduce Knowledge Integration and Transfer Evaluation (KITE), a conceptual and experimental framework for Human-AI knowledge transfer capabilities. We conduct the first large-scale human study (N=118) explicitly designed to measure it. In our two-phase setup, humans first ideate with an AI on problem-solving strategies, then independently implement solutions, isolating model explanations' influence on human understanding.
arXiv Detail & Related papers (2025-06-05T20:48:16Z) - Would You Rely on an Eerie Agent? A Systematic Review of the Impact of the Uncanny Valley Effect on Trust in Human-Agent Interaction [2.184775414778289]
The Uncanny Valley Effect (UVE) describes how increasingly human-like artificial beings can be perceived as eerie or repelling. Despite growing interest in trust and the UVE, existing research varies widely in terms of how these concepts are defined and operationalized. This review aims to examine the impact of the UVE on human trust in agents and to identify methodological patterns, limitations, and gaps in the existing empirical literature.
arXiv Detail & Related papers (2025-05-08T17:03:26Z) - Is Trust Correlated With Explainability in AI? A Meta-Analysis [0.0]
We conduct a comprehensive examination of the existing literature to explore the relationship between AI explainability and trust. Our analysis, incorporating data from 90 studies, reveals a statistically significant but moderate positive correlation between the explainability of AI systems and the trust they engender. This research highlights the broader socio-technical implications of explainability, particularly in promoting accountability and fostering user trust in critical domains such as healthcare and justice.
arXiv Detail & Related papers (2025-04-16T23:30:55Z) - Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims and is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z) - Requirements for Explainability and Acceptance of Artificial Intelligence in Collaborative Work [0.0]
The present structured literature analysis examines the requirements for the explainability and acceptance of AI.
Results indicate that developers, one of the two main groups of users, require information about the internal operations of the model.
The acceptance of AI systems depends on information about the system's functions and performance, as well as privacy and ethical considerations.
arXiv Detail & Related papers (2023-06-27T11:36:07Z) - Improving Model Understanding and Trust with Counterfactual Explanations of Model Confidence [4.385390451313721]
Showing confidence scores in human-agent interaction systems can help build trust between humans and AI systems.
Most existing research has used the confidence score only as a form of communication.
This paper presents two methods for understanding model confidence using counterfactual explanations.
arXiv Detail & Related papers (2022-06-06T04:04:28Z) - Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z) - Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations.
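A toy sketch of the general latent-intervention idea behind approaches like CEILS (not the CEILS algorithm itself): encode an instance into a latent space, nudge the latent code until a downstream classifier's decision flips, then decode the result back to feature space. Here PCA stands in for a learned generative model, and the data, classifier, and step sizes are illustrative assumptions.

```python
# Toy latent-space counterfactual: intervene on a latent code, decode, and
# check whether the classifier's decision has flipped. PCA is a stand-in for
# a learned encoder/decoder; all data is synthetic and illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = (X @ np.array([1.0, -0.5, 0.3, 0.0, 0.2]) > 0).astype(int)

clf = LogisticRegression().fit(X, y)   # the model whose decision we explain
pca = PCA(n_components=2).fit(X)       # stand-in for a latent representation

def latent_counterfactual(x, target=1, step=0.25, max_iter=300):
    z = pca.transform(x)               # encode the instance
    # Latent direction that most increases the classifier's logit, obtained by
    # pushing the classifier weights back through the (linear) decoder.
    direction = clf.coef_[0] @ pca.components_.T
    direction /= np.linalg.norm(direction)
    for _ in range(max_iter):
        x_hat = pca.inverse_transform(z)     # decode candidate back to features
        if clf.predict(x_hat)[0] == target:
            return x_hat
        z = z + step * direction             # intervene in latent space
    return None

x0 = X[np.argmin(clf.decision_function(X))][None, :]  # a clearly rejected instance
cf = latent_counterfactual(x0)
print("original prediction:", clf.predict(x0)[0])
print("counterfactual found:", cf is not None)
```

Because the intervention happens in the latent space rather than on raw features, the decoded counterfactual tends to stay closer to the data distribution, which relates to the feasibility concern the abstract raises.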
arXiv Detail & Related papers (2021-06-14T20:48:48Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - A Study on Multimodal and Interactive Explanations for Visual Question Answering [3.086885687016963]
We evaluate multimodal explanations in the setting of a Visual Question Answering (VQA) task.
Results indicate that the explanations help improve human prediction accuracy, especially in trials when the VQA system's answer is inaccurate.
We introduce active attention, a novel method for evaluating causal attentional effects through intervention by editing attention maps.
arXiv Detail & Related papers (2020-03-01T07:54:01Z)
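To illustrate the kind of intervention that editing attention maps involves, the following toy sketch (not the paper's VQA system or its "active attention" procedure) zeroes out attention over selected image regions, renormalizes, and measures how a simple model's answer scores shift. All shapes, features, and the linear answer head are illustrative assumptions.

```python
# Toy attention-intervention: edit an attention map over image regions and
# compare answer scores before and after. Purely illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_regions, feat_dim, n_answers = 6, 8, 4

region_feats = rng.normal(size=(n_regions, feat_dim))  # per-region visual features
attention = rng.dirichlet(np.ones(n_regions))           # the model's attention map
answer_head = rng.normal(size=(feat_dim, n_answers))    # toy linear answer classifier

def answer_scores(attn):
    pooled = attn @ region_feats                  # attention-weighted pooling
    logits = pooled @ answer_head
    return np.exp(logits) / np.exp(logits).sum()  # softmax over candidate answers

def intervene(attn, blocked_regions):
    edited = attn.copy()
    edited[list(blocked_regions)] = 0.0           # remove attention from chosen regions
    return edited / edited.sum()                  # renormalize to a distribution

baseline = answer_scores(attention)
edited = answer_scores(intervene(attention, blocked_regions={0, 1}))
print("shift in answer scores after the attention edit:",
      np.round(edited - baseline, 3))
```

A large shift suggests the blocked regions were causally important for the answer; a negligible shift suggests the attention placed there was not driving the prediction.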
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.