AI Explainability 360: Impact and Design
- URL: http://arxiv.org/abs/2109.12151v1
- Date: Fri, 24 Sep 2021 19:17:09 GMT
- Title: AI Explainability 360: Impact and Design
- Authors: Vijay Arya, Rachel K. E. Bellamy, Pin-Yu Chen, Amit Dhurandhar,
Michael Hind, Samuel C. Hoffman, Stephanie Houde, Q. Vera Liao, Ronny Luss,
Aleksandra Mojsilovic, Sami Mourad, Pablo Pedemonte, Ramya Raghavendra, John
Richards, Prasanna Sattigeri, Karthikeyan Shanmugam, Moninder Singh, Kush R.
Varshney, Dennis Wei, Yunfeng Zhang
- Abstract summary: In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
- Score: 120.95633114160688
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As artificial intelligence and machine learning algorithms become
increasingly prevalent in society, multiple stakeholders are calling for these
algorithms to provide explanations. At the same time, these stakeholders,
whether they be affected citizens, government regulators, domain experts, or
system developers, have different explanation needs. To address these needs, in
2019, we created AI Explainability 360 (Arya et al. 2020), an open source
software toolkit featuring ten diverse and state-of-the-art explainability
methods and two evaluation metrics. This paper examines the impact of the
toolkit with several case studies, statistics, and community feedback. The
different ways in which users have experienced AI Explainability 360 have
resulted in multiple types of impact and improvements in multiple metrics,
highlighted by the adoption of the toolkit by the independent LF AI & Data
Foundation. The paper also describes the flexible design of the toolkit,
examples of its use, and the significant educational material and documentation
available to its users.
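The toolkit itself is distributed as the open source Python package aix360 (https://github.com/Trusted-AI/AIX360). As a rough illustration of the usage pattern the paper describes, the sketch below applies ProtoDash, one of the toolkit's explainability methods, to summarize a toy dataset with a handful of weighted prototypes. The module path and the explain() call follow the toolkit's documented examples as best recalled here; exact signatures may differ between releases, so treat this as an assumption rather than an authoritative API reference.

```python
# Minimal sketch (not from the paper): selecting prototypes with AIX360's ProtoDash.
# Assumes `pip install aix360`; module path and explain() signature may vary by version.
import numpy as np
from aix360.algorithms.protodash import ProtodashExplainer

# Toy tabular data standing in for a real dataset (e.g., loan applications).
rng = np.random.default_rng(0)
X = rng.random((200, 5))

# ProtoDash picks m representative rows ("prototypes") and importance weights
# that together summarize the dataset, giving a directly interpretable explanation.
explainer = ProtodashExplainer()
weights, proto_idx, _ = explainer.explain(X, X, m=5)

print("Prototype row indices:", proto_idx)
print("Prototype weights:", np.round(weights, 3))
```

Other explainers in the toolkit follow a broadly similar pattern, and the package also ships the evaluation metrics mentioned in the abstract.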
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., model explainability.
This thesis seeks to bridge some of the gaps between model-centered and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training/learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- AI-Tutoring in Software Engineering Education [0.7631288333466648]
We conducted an exploratory case study by integrating the GPT-3.5-Turbo model as an AI-Tutor within the APAS Artemis.
The findings highlight advantages, such as timely feedback and scalability.
However, challenges such as generic responses and students' concerns that the AI-Tutor might inhibit their learning progress were also evident.
arXiv Detail & Related papers (2024-04-03T08:15:08Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Measuring Ethics in AI with AI: A Methodology and Dataset Construction [1.6861004263551447]
We propose to use the newfound capabilities of AI technologies to augment our ability to measure AI.
We do so by training a model to classify publications related to ethical issues and concerns.
We highlight the implications of AI metrics, in particular their contribution towards developing trustful and fair AI-based tools and technologies.
arXiv Detail & Related papers (2021-07-26T00:26:12Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Can Explainable AI Explain Unfairness? A Framework for Evaluating Explainable AI [3.4823710414760516]
Despite the strength of XAI tools in translating model behavior, critiques have raised concerns about their potential use as a tool for fairwashing.
We created a framework for evaluating explainable AI tools with respect to their capabilities for detecting and addressing issues of bias and fairness.
We found that despite their capabilities in simplifying and explaining model behavior, many prominent XAI tools lack features that could be critical in detecting bias.
arXiv Detail & Related papers (2021-06-14T15:14:03Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system so that interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)