Bridging the Gap Between Explainable AI and Uncertainty Quantification to Enhance Trustability
- URL: http://arxiv.org/abs/2105.11828v1
- Date: Tue, 25 May 2021 10:53:58 GMT
- Title: Bridging the Gap Between Explainable AI and Uncertainty Quantification to Enhance Trustability
- Authors: Dominik Seuß
- Abstract summary: Two research directions, namely Explainable AI and Uncertainty Quantification, are becoming more and more important, but have so far never been combined and jointly explored.
I show how both research areas provide potential for combination, why more research should be done in this direction, and how this would lead to an increase in the trustability of AI systems.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: After the tremendous advances of deep learning and other AI methods, more attention is flowing into other properties of modern approaches, such as interpretability and fairness, combined in frameworks like Responsible AI. Two research directions, namely Explainable AI and Uncertainty Quantification, are becoming more and more important, but have so far never been combined and jointly explored. In this paper, I show how both research areas provide potential for combination, why more research should be done in this direction, and how this would lead to an increase in the trustability of AI systems.
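
The paper is a position piece and gives no concrete algorithm, but one way the two areas could meet is to propagate model uncertainty into the explanation itself. The minimal sketch below is my illustration, not the paper's method: it assumes MC dropout as the uncertainty-quantification component and plain input-gradient saliency as the explanation component, so that averaging saliency maps over stochastic forward passes yields both an explanation and an estimate of how stable that explanation is under model uncertainty. The toy model, sample count, and saliency choice are all assumptions made for illustration.

```python
# Illustrative sketch only -- not the method of the paper above.
# UQ component: MC dropout (stochastic forward passes at inference).
# XAI component: input-gradient saliency, averaged over those passes.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy, untrained classifier with dropout, so repeated forward passes
# sample from an approximate predictive distribution.
model = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(), nn.Dropout(p=0.5), nn.Linear(16, 3)
)
model.train()  # keep dropout active at inference time (MC dropout)

x = torch.randn(1, 4, requires_grad=True)  # one input with 4 features

logits, saliencies = [], []
for _ in range(50):                      # 50 stochastic forward passes
    out = model(x)
    top = out.max()                      # highest-scoring class logit
    grad, = torch.autograd.grad(top, x)  # saliency for this pass
    logits.append(out.detach())
    saliencies.append(grad)

logits = torch.cat(logits)   # shape (50, 3)
sal = torch.cat(saliencies)  # shape (50, 4)

print("predictive mean logits :", logits.mean(0))  # the prediction
print("predictive std (UQ)    :", logits.std(0))   # output uncertainty
print("mean saliency (XAI)    :", sal.mean(0))     # the explanation
print("saliency std (XAI + UQ):", sal.std(0))      # explanation stability
```

A large saliency standard deviation on a feature would signal that the explanation for that feature is itself unreliable, which is the kind of trust signal the abstract argues a combination of the two fields could provide.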
Related papers
- Imagining and building wise machines: The centrality of AI metacognition [78.76893632793497]
We argue that current AI shortcomings stem from one overarching failure: AI systems lack wisdom.
While AI research has focused on task-level strategies, metacognition is underdeveloped in AI systems.
We propose that integrating metacognitive capabilities into AI systems is crucial for enhancing their robustness, explainability, cooperation, and safety.
arXiv Detail & Related papers (2024-11-04T18:10:10Z)
- Safetywashing: Do AI Safety Benchmarks Actually Measure Safety Progress? [59.96471873997733]
We propose an empirical foundation for developing more meaningful safety metrics and define AI safety in a machine learning research context.
We aim to provide a more rigorous framework for AI safety research, advancing the science of safety evaluations and clarifying the path towards measurable progress.
arXiv Detail & Related papers (2024-07-31T17:59:24Z)
- A Survey of AI Reliance [1.6124402884077915]
Current shortcomings in the literature include unclear influences on AI reliance, a lack of external validity, conflicting approaches to measuring reliance, and disregard for changes in reliance over time.
In conclusion, we present a morphological box that serves as a guide for research on AI reliance.
arXiv Detail & Related papers (2024-07-22T09:34:58Z)
- Position: An Inner Interpretability Framework for AI Inspired by Lessons from Cognitive Neuroscience [4.524832437237367]
Inner Interpretability is a promising field tasked with uncovering the inner mechanisms of AI systems.
Recent critiques raise issues that question its usefulness for advancing the broader goals of AI.
Here we draw the relevant connections and highlight lessons that can be transferred productively between fields.
arXiv Detail & Related papers (2024-06-03T14:16:56Z)
- Beyond XAI: Obstacles Towards Responsible AI [0.0]
Methods of explainability and their evaluation strategies present numerous limitations in real-world contexts.
In this paper, we explore these limitations and discuss their implications in a broader context of responsible AI.
arXiv Detail & Related papers (2023-09-07T11:08:14Z)
- Fairness in AI and Its Long-Term Implications on Society [68.8204255655161]
We take a closer look at AI fairness and analyze how lack of AI fairness can lead to deepening of biases over time.
We discuss how biased models can lead to more negative real-world outcomes for certain groups.
If the issues persist, they could be reinforced by interactions with other risks and have severe implications on society in the form of social unrest.
arXiv Detail & Related papers (2023-04-16T11:22:59Z)
- A Systematic Literature Review of Human-Centered, Ethical, and Responsible AI [12.456385305888341]
We review and analyze 164 research papers from leading conferences in ethical, social, and human factors of AI.
We find that the current emphasis on governance and fairness in AI research may not adequately address the potential unforeseen and unknown implications of AI.
arXiv Detail & Related papers (2023-02-10T14:47:33Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Trustworthy AI: A Computational Perspective [54.80482955088197]
We focus on six of the most crucial dimensions in achieving trustworthy AI: (i) Safety & Robustness, (ii) Non-discrimination & Fairness, (iii) Explainability, (iv) Privacy, (v) Accountability & Auditability, and (vi) Environmental Well-Being.
For each dimension, we review the recent related technologies according to a taxonomy and summarize their applications in real-world systems.
arXiv Detail & Related papers (2021-07-12T14:21:46Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)