Knowledge-intensive Language Understanding for Explainable AI
- URL: http://arxiv.org/abs/2108.01174v1
- Date: Mon, 2 Aug 2021 21:12:30 GMT
- Title: Knowledge-intensive Language Understanding for Explainable AI
- Authors: Amit Sheth, Manas Gaur, Kaushik Roy, Keyur Faldu
- Abstract summary: How AI-led decisions are made and what determining factors were included are crucial to understand.
It is critical to have human-centered explanations that are directly related to decision-making.
It is necessary to involve explicit domain knowledge that humans understand and use.
- Score: 9.541228711585886
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: AI systems have seen significant adoption in various domains. At the same time, further adoption in some domains is hindered by the inability to fully trust that an AI system will not harm humans. Besides fairness, concerns for privacy, transparency, and explainability are key to developing trust in AI systems. As stated in describing trustworthy AI, "Trust comes through understanding. How AI-led decisions are made and what determining factors were included are crucial to understand." The subarea of explaining AI systems has come to be known as XAI. Multiple aspects of an AI system can be explained; these include biases that the data might have, the lack of data points in a particular region of the example space, the fairness of data gathering, feature importance, etc. Beyond these, however, it is critical to have human-centered explanations that are directly related to decision-making, similar to how a domain expert makes decisions based on "domain knowledge" that also includes well-established, peer-validated explicit guidelines. To understand and validate an AI system's outcomes (such as classifications, recommendations, and predictions), and thereby develop trust in the AI system, it is necessary to involve explicit domain knowledge that humans understand and use.
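As a purely illustrative aside (not taken from the paper), the Python sketch below shows one way an AI system's outcome could be checked against explicit, human-readable domain guidelines to produce a human-centered explanation. The rule names, thresholds, and toy classifier are hypothetical.

```python
# Minimal sketch: validate a model's prediction against explicit,
# human-readable domain rules and report which rules support or contradict it.
# All rule names, thresholds, and the toy "model" below are hypothetical.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class GuidelineRule:
    """A peer-validated, explicit domain rule expressed over input features."""
    name: str
    applies: Callable[[Dict[str, float]], bool]  # does the rule fire for this case?
    supports_label: str                          # label the rule argues for


def toy_model(features: Dict[str, float]) -> str:
    """Stand-in for any black-box classifier (hypothetical threshold)."""
    return "high_risk" if features["glucose"] > 180 else "low_risk"


def explain_with_domain_knowledge(
    features: Dict[str, float],
    prediction: str,
    rules: List[GuidelineRule],
) -> List[str]:
    """Return human-centered statements: which explicit rules agree or disagree."""
    explanation = []
    for rule in rules:
        if rule.applies(features):
            verdict = "supports" if rule.supports_label == prediction else "contradicts"
            explanation.append(
                f"Rule '{rule.name}' fired and {verdict} the prediction '{prediction}'."
            )
    return explanation


if __name__ == "__main__":
    rules = [
        GuidelineRule("hyperglycemia_threshold", lambda f: f["glucose"] > 180, "high_risk"),
        GuidelineRule("normal_bmi", lambda f: f["bmi"] < 25, "low_risk"),
    ]
    case = {"glucose": 200.0, "bmi": 23.0}
    pred = toy_model(case)
    for line in explain_with_domain_knowledge(case, pred, rules):
        print(line)
```

The point of the sketch is only that the explanation is phrased in terms of rules a domain expert would recognize, rather than in terms of model internals.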
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z) - Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z) - Survey of Trustworthy AI: A Meta Decision of AI [0.41292255339309647]
Trusting an opaque system involves deciding on the level of Trustworthy AI (TAI).
To underpin these domains, we create ten dimensions to measure trust: explainability/transparency, fairness/diversity, generalizability, privacy, data governance, safety/robustness, accountability, reliability, and sustainability.
arXiv Detail & Related papers (2023-06-01T06:25:01Z) - Alterfactual Explanations -- The Relevance of Irrelevance for Explaining
AI Systems [0.9542023122304099]
We argue that to fully understand a decision, knowledge of the relevant features alone is not enough; awareness of irrelevant information also contributes substantially to a user's mental model of an AI system.
Our approach, which we call Alterfactual Explanations, is based on showing an alternative reality where irrelevant features of an AI's input are altered.
We show that alterfactual explanations are suited to conveying aspects of the AI's reasoning that differ from those conveyed by established counterfactual explanation methods (a toy contrast between the two appears after this list).
arXiv Detail & Related papers (2022-07-19T16:20:37Z) - Never trust, always verify : a roadmap for Trustworthy AI? [12.031113181911627]
We examine trust in the context of AI-based systems to understand what it means for an AI system to be trustworthy.
We propose a trust (resp. zero-trust) model for AI and identify a set of properties that should be satisfied to ensure the trustworthiness of AI systems.
arXiv Detail & Related papers (2022-06-23T21:13:10Z) - A Human-Centric Assessment Framework for AI [11.065260433086024]
There is no agreed standard on how explainable AI systems should be assessed.
Inspired by the Turing test, we introduce a human-centric assessment framework.
This setup can serve as a framework for a wide range of human-centric AI system assessments.
arXiv Detail & Related papers (2022-05-25T12:59:13Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - The human-AI relationship in decision-making: AI explanation to support
people on justifying their decisions [4.169915659794568]
In decision-making scenarios, people need more awareness of how AI works and its outcomes to build a relationship with that system.
arXiv Detail & Related papers (2021-02-10T14:28:34Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are some of the most prominent limitations.
We propose a tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z) - Effect of Confidence and Explanation on Accuracy and Trust Calibration
in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that a confidence score can help calibrate people's trust in an AI model, but that trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.