Notion of Explainable Artificial Intelligence -- An Empirical
Investigation from A Users Perspective
- URL: http://arxiv.org/abs/2311.02102v1
- Date: Wed, 1 Nov 2023 22:20:14 GMT
- Title: Notion of Explainable Artificial Intelligence -- An Empirical
Investigation from A Users Perspective
- Authors: AKM Bahalul Haque, A.K.M. Najmul Islam, Patrick Mikalef
- Abstract summary: This study investigates user-centric explainable AI, with recommendation systems as the study context.
We conducted focus group interviews to collect qualitative data on the recommendation system.
Our findings reveal that end users want a non-technical and tailor-made explanation with on-demand supplementary information.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growing attention to artificial intelligence-based applications has led
to research interest in explainability issues. This emerging research attention
on explainable AI (XAI) advocates the need to investigate end user-centric
explainable AI. Thus, this study investigates user-centric explainable AI, with
recommendation systems as the study context. We conducted focus
group interviews to collect qualitative data on the recommendation system. We
asked participants about their comprehension of a recommended item, its
probable explanation, and their opinion on making a recommendation
explainable. Our findings reveal that end users want a non-technical and
tailor-made explanation with on-demand supplementary information. Moreover, users
also required explanations of personal data usage, detailed user feedback, and
authentic and reliable explanations. Finally, we
propose a synthesized framework that aims at involving the end user in the
development process for requirements collection and validation.
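The findings above suggest a concrete presentation pattern. As a minimal illustration, the Python sketch below is entirely hypothetical (the class and field names are our assumptions, not the paper's framework): it pairs a non-technical, tailor-made summary with supplementary information, such as personal data usage, that is revealed only on demand.

```python
from dataclasses import dataclass, field

# Hypothetical sketch (not from the paper) of the explanation style the
# study's participants asked for: a short, non-technical, tailor-made
# summary up front, with supplementary information -- including personal
# data usage -- available only on demand.

@dataclass
class Explanation:
    summary: str                                  # non-technical one-liner
    details: dict = field(default_factory=dict)   # on-demand supplements

    def on_demand(self, topic: str) -> str:
        # Reveal supplementary information only when the user asks.
        return self.details.get(topic, "No further detail available.")

@dataclass
class Recommendation:
    item: str
    explanation: Explanation

rec = Recommendation(
    item="Movie X",
    explanation=Explanation(
        summary="Recommended because you watched similar comedies.",
        details={
            "data_usage": "Uses only your watch history; nothing is shared.",
            "feedback": "Rate this suggestion to refine future ones.",
        },
    ),
)

print(rec.explanation.summary)                  # shown by default
print(rec.explanation.on_demand("data_usage"))  # shown only on request
```

Keeping supplements behind an explicit request mirrors the "on-demand supplementary information" need reported in the findings, while the default summary stays non-technical.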
Related papers
- On Evaluating Explanation Utility for Human-AI Decision Making in NLP [39.58317527488534]
We review existing metrics suitable for application-grounded evaluation.
We demonstrate the importance of reassessing the state of the art to form and study human-AI teams.
arXiv Detail & Related papers (2024-07-03T23:53:27Z) - Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for being misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Interactive Explanation with Varying Level of Details in an Explainable
Scientific Literature Recommender System [0.5937476291232802]
We aim in this paper to adopt a user-centered, interactive explanation model that provides explanations with different levels of detail and empowers users to interact with, control, and personalize the explanations based on their needs and preferences.
We conducted a qualitative user study to investigate the impact of providing interactive explanations with varying levels of detail on users' perception of the explainable RS.
arXiv Detail & Related papers (2023-06-09T10:48:04Z) - Assisting Human Decisions in Document Matching [52.79491990823573]
We devise a proxy matching task that allows us to evaluate which kinds of assistive information improve decision makers' performance.
We find that providing black-box model explanations reduces users' accuracy on the matching task.
On the other hand, custom methods that are designed to closely attend to some task-specific desiderata are found to be effective in improving user performance.
arXiv Detail & Related papers (2023-02-16T17:45:20Z) - Selective Explanations: Leveraging Human Input to Align Explainable AI [40.33998268146951]
We propose a general framework for generating selective explanations by leveraging human input on a small sample.
As a showcase, we use a decision-support task to explore selective explanations based on what the decision-maker would consider relevant to the decision task.
Our experiments demonstrate the promise of selective explanations in reducing over-reliance on AI (a toy sketch of this selection idea appears after this list).
arXiv Detail & Related papers (2023-01-23T19:00:02Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for
Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - Directions for Explainable Knowledge-Enabled Systems [3.7250420821969827]
We leverage our survey of explanation literature in Artificial Intelligence and closely related fields to generate a set of explanation types.
We define each type and provide an example question that would motivate the need for this style of explanation.
We believe this set of explanation types will help future system designers in their generation and prioritization of requirements.
arXiv Detail & Related papers (2020-03-17T04:34:29Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)