TrustyAI Explainability Toolkit
- URL: http://arxiv.org/abs/2104.12717v1
- Date: Mon, 26 Apr 2021 17:00:32 GMT
- Title: TrustyAI Explainability Toolkit
- Authors: Rob Geada, Tommaso Teofili, Rui Vieira, Rebecca Whitworth, Daniele
Zonca
- Abstract summary: We will look at how TrustyAI can support trust in decision services and predictive models.
We investigate techniques such as LIME, SHAP and counterfactuals.
We also look into an extended version of SHAP that supports quantitative evaluation of background data selection and provides error bounds.
- Score: 1.0499611180329804
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) is becoming increasingly popular and can be
found in workplaces and homes around the world. However, how do we ensure trust
in these systems? Regulatory changes such as the GDPR mean that users have a
right to understand how their data has been processed and stored. Therefore, if
you are denied a loan, for example, you have the right to ask why. This can be
hard if the decision was reached using "black box" machine learning techniques
such as neural networks. TrustyAI is a new initiative that explores explainable
artificial intelligence (XAI) solutions to address trustworthiness in both ML and
decision services landscapes.
In this paper we look at how TrustyAI can support trust in decision services and
predictive models. We investigate techniques such as LIME, SHAP and
counterfactuals, benchmarking both the LIME and counterfactual techniques against
existing implementations. We also look into an extended version of SHAP that
supports quantitative evaluation of background data selection and provides error
bounds.
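To make the background-selection and error-bound idea above concrete, here is a minimal sketch using the open-source Python `shap` package rather than the TrustyAI implementation; the model, data, and the repeated-estimation loop are assumptions for illustration, not the paper's method.

```python
# A minimal sketch, using the open-source `shap` package (not the TrustyAI
# implementation): KernelSHAP with an explicitly chosen background dataset,
# plus rough error bounds from repeated estimation. The model, data, and the
# re-estimation loop are illustrative assumptions.
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Background data selection: summarize the training data with k-means so the
# explainer marginalizes "missing" features over representative points.
background = shap.kmeans(X, 20)
explainer = shap.KernelExplainer(lambda d: model.predict_proba(d)[:, 1], background)

instance = X[:1]

# Re-estimate the SHAP values several times; the spread reflects how sensitive
# the attributions are to KernelSHAP's sampling for this background choice.
estimates = np.array([explainer.shap_values(instance, nsamples=200)[0]
                      for _ in range(10)])
mean = estimates.mean(axis=0)
low, high = np.percentile(estimates, [2.5, 97.5], axis=0)
for i in range(X.shape[1]):
    print(f"feature {i}: {mean[i]:+.3f}  [{low[i]:+.3f}, {high[i]:+.3f}]")
```

Choosing a compact, representative background (here via k-means) is the kind of decision the extended SHAP described above is meant to evaluate quantitatively.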
Related papers
- A Unified Framework for Evaluating the Effectiveness and Enhancing the Transparency of Explainable AI Methods in Real-World Applications [2.0681376988193843]
This study introduces a single evaluation framework for XAI. It uses both numbers and user feedback to check if the explanations are correct, easy to understand, fair, complete, and reliable. We show the value of this framework through case studies in healthcare, finance, farming, and self-driving systems.
arXiv Detail & Related papers (2024-12-05T05:30:10Z)
- Explainable AI needs formal notions of explanation correctness [2.1309989863595677]
Machine learning in critical domains such as medicine poses risks and requires regulation.
One requirement is that decisions of ML systems in high-risk applications should be human-understandable.
In its current form, XAI is unfit to provide quality control for ML; it itself needs scrutiny.
arXiv Detail & Related papers (2024-09-22T20:47:04Z)
- Mathematical Algorithm Design for Deep Learning under Societal and Judicial Constraints: The Algorithmic Transparency Requirement [65.26723285209853]
We derive a framework to analyze whether a transparent implementation in a computing model is feasible.
Based on previous results, we find that Blum-Shub-Smale Machines have the potential to establish trustworthy solvers for inverse problems.
arXiv Detail & Related papers (2024-01-18T15:32:38Z)
- Trustworthy AI: Deciding What to Decide [41.10597843436572]
We propose a novel framework of Trustworthy AI (TAI) encompassing crucial components of AI.
We aim to use this framework to conduct TAI experiments using quantitative and qualitative research methods.
We formulate an optimal prediction model for applying the strategic investment decision of credit default swaps (CDS) in the technology sector.
arXiv Detail & Related papers (2023-11-21T13:43:58Z)
- On Responsible Machine Learning Datasets with Fairness, Privacy, and Regulatory Norms [56.119374302685934]
There have been severe concerns over the trustworthiness of AI technologies.
Machine and deep learning algorithms depend heavily on the data used during their development.
We propose a framework to evaluate the datasets through a responsible rubric.
arXiv Detail & Related papers (2023-10-24T14:01:53Z)
- Survey of Trustworthy AI: A Meta Decision of AI [0.41292255339309647]
Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI).
To underpin these domains, we create ten dimensions to measure trust: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reliability, and sustainability.
arXiv Detail & Related papers (2023-06-01T06:25:01Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Explainable, Interpretable & Trustworthy AI for Intelligent Digital Twin: Case Study on Remaining Useful Life [0.5115559623386964]
It is critical to have confidence in AI's trustworthiness in energy and engineering systems.
The use of explainable AI (XAI) and interpretable machine learning (IML) is crucial for the accurate prediction of prognostics.
arXiv Detail & Related papers (2023-01-17T03:17:07Z)
- Black-box Dataset Ownership Verification via Backdoor Watermarking [67.69308278379957]
We formulate the protection of released datasets as verifying whether they are adopted for training a (suspicious) third-party model.
We propose to embed external patterns via backdoor watermarking for the ownership verification to protect them.
Specifically, we exploit poison-only backdoor attacks (e.g., BadNets) for dataset watermarking and design a hypothesis-test-guided method for dataset verification (a generic sketch of such a test follows below).
arXiv Detail & Related papers (2022-08-04T05:32:20Z)
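As a rough illustration of the hypothesis-test-guided verification mentioned in the entry above, here is a generic sketch under assumed interfaces, not the cited paper's exact procedure; the model, trigger function, and labels are hypothetical placeholders.

```python
# A hedged sketch of a hypothesis-test-guided ownership check (not the cited
# paper's exact procedure): does the suspect model map trigger-stamped inputs
# to the watermark's target label significantly more often than clean inputs?
# The model, trigger function, and labels are hypothetical placeholders.
import numpy as np
from scipy.stats import binomtest


def verify_ownership(model_predict, samples, add_trigger, target_label, alpha=0.05):
    stamped = np.array([add_trigger(x) for x in samples])
    hits = int(np.sum(model_predict(stamped) == target_label))
    # Null hypothesis: stamping the trigger does not raise the target-label
    # rate above what the same model already produces on clean samples.
    p0 = float(np.mean(model_predict(np.array(samples)) == target_label))
    p0 = min(max(p0, 1e-6), 1 - 1e-6)  # keep the null proportion valid
    result = binomtest(hits, len(samples), p=p0, alternative="greater")
    return result.pvalue < alpha, result.pvalue


# Hypothetical usage: `stamp_patch` adds a small BadNets-style patch to an
# image, and 7 is the label the watermark was designed to trigger.
# stolen, pvalue = verify_ownership(suspect_model.predict, X_clean, stamp_patch, 7)
```

The one-sided test asks whether trigger-stamped inputs land on the watermark's target label more often than clean inputs, a behavioural fingerprint a model would typically inherit only by training on the watermarked release.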
- Explainable AI via Learning to Optimize [2.8010955192967852]
Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI).
This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged.
arXiv Detail & Related papers (2022-04-29T15:57:03Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are some of the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Statistical stability indices for LIME: obtaining reliable explanations for Machine Learning models [60.67142194009297]
The ever-increasing usage of Machine Learning techniques is the clearest example of this trend.
It is often very difficult to understand on what grounds the algorithm made its decision.
It is important for practitioners to be aware of this instability issue and to have a tool for spotting it; a minimal stability check is sketched after this list.
arXiv Detail & Related papers (2020-01-31T10:39:46Z)
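As referenced in the LIME stability entry above, here is a minimal sketch of one way to spot unstable explanations. It is not the statistical indices proposed in that paper; the dataset, model, and Jaccard-based agreement measure are illustrative choices, and it uses the open-source `lime` package.

```python
# A minimal sketch of one way to spot unstable LIME explanations (not the
# statistical indices proposed in the cited paper): repeat the explainer on
# the same instance and measure how consistently the top features recur.
# Dataset and model are placeholder choices.
from itertools import combinations

import numpy as np
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)
explainer = LimeTabularExplainer(data.data,
                                 feature_names=list(data.feature_names),
                                 mode="classification")

instance = data.data[0]
top_k = 5
runs = []
for _ in range(10):
    exp = explainer.explain_instance(instance, model.predict_proba,
                                     num_features=top_k)
    runs.append({name for name, _ in exp.as_list()})

# Pairwise Jaccard similarity of the selected feature sets: 1.0 means the same
# top features appear in every run; lower values flag unstable explanations.
jaccards = [len(a & b) / len(a | b) for a, b in combinations(runs, 2)]
print(f"mean top-{top_k} Jaccard across runs: {np.mean(jaccards):.2f}")
```

Values well below 1.0 suggest that LIME's internal sampling, rather than the model, is driving which features appear in the explanation.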