Attribution-based XAI Methods in Computer Vision: A Review
- URL: http://arxiv.org/abs/2211.14736v1
- Date: Sun, 27 Nov 2022 05:56:36 GMT
- Title: Attribution-based XAI Methods in Computer Vision: A Review
- Authors: Kumar Abhishek, Deeksha Kamath
- Abstract summary: We provide a comprehensive survey of attribution-based XAI methods in computer vision.
We review the existing literature for gradient-based, perturbation-based, and contrastive methods for XAI.
- Score: 5.076419064097734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advancements in deep learning-based methods for visual perception tasks
have seen astounding growth in the last decade, with widespread adoption in a
plethora of application areas from autonomous driving to clinical decision
support systems. Despite their impressive performance, these deep
learning-based models remain fairly opaque in their decision-making process,
making their deployment in human-critical tasks a risky endeavor. This in turn
makes understanding the decisions made by these models crucial for their
reliable deployment. Explainable AI (XAI) methods attempt to address this by
offering explanations for such black-box deep learning methods. In this paper,
we provide a comprehensive survey of attribution-based XAI methods in computer
vision and review the existing literature for gradient-based,
perturbation-based, and contrastive methods for XAI, and provide insights on
the key challenges in developing and evaluating robust XAI methods.
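As a concrete illustration of two of the attribution families named in the abstract, the sketch below computes a vanilla gradient saliency map and a simple occlusion-based (perturbation) attribution for an image classifier. This is a minimal sketch, not code from the paper: the model choice (torchvision's resnet18 with no pretrained weights), the random stand-in input, and the 16-pixel occlusion patch are all illustrative assumptions.

```python
# Illustrative sketch of gradient-based and perturbation-based attribution.
# Model, input, and patch size are assumptions, not the paper's method.
import torch
import torchvision.models as models

model = models.resnet18(weights=None).eval()  # assumed classifier
image = torch.randn(1, 3, 224, 224)           # stand-in for a real image

# Gradient-based: attribution = d(class score)/d(input pixel).
image.requires_grad_(True)
score = model(image)[0].max()
score.backward()
saliency = image.grad.abs().max(dim=1)[0]     # (1, 224, 224) heatmap

# Perturbation-based: occlude patches and measure the drop in the score.
target = model(image.detach())[0].argmax()
occlusion = torch.zeros(224 // 16, 224 // 16)
with torch.no_grad():
    base = model(image.detach())[0, target]
    for i in range(0, 224, 16):
        for j in range(0, 224, 16):
            occluded = image.detach().clone()
            occluded[:, :, i:i + 16, j:j + 16] = 0.0  # zero out one patch
            occlusion[i // 16, j // 16] = base - model(occluded)[0, target]
```

In practice one would use a pretrained checkpoint and normalized inputs; libraries such as Captum package these and many related attribution methods.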
Related papers
- Study on the Helpfulness of Explainable Artificial Intelligence [0.0]
Legal, business, and ethical requirements motivate using effective XAI.
We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task.
In other words, we address the helpfulness of XAI for human decision-making.
arXiv Detail & Related papers (2024-10-14T14:03:52Z)
- Evolutionary Computation and Explainable AI: A Roadmap to Understandable Intelligent Systems [37.02462866600066]
Evolutionary computation (EC) offers significant potential to contribute to explainable AI (XAI).
This paper provides an introduction to XAI and reviews current techniques for explaining machine learning models.
We then explore how EC can be leveraged in XAI and examine existing XAI approaches that incorporate EC techniques.
arXiv Detail & Related papers (2024-06-12T02:06:24Z)
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- Toward enriched Cognitive Learning with XAI [44.99833362998488]
We introduce CL-XAI, an intelligent system for Cognitive Learning supported by artificial intelligence (AI) tools.
The use of CL-XAI is illustrated with a game-inspired virtual use case where learners tackle problems to enhance problem-solving skills.
arXiv Detail & Related papers (2023-12-19T16:13:47Z)
- Overview of Class Activation Maps for Visualization Explainability [0.0]
Class Activation Maps (CAMs) enhance interpretability and insights into the decision-making process of deep learning models.
This work presents a comprehensive overview of the evolution of Class Activation Maps over time.
It also explores the metrics used for evaluating CAMs and introduces auxiliary techniques to improve the saliency of these methods.
arXiv Detail & Related papers (2023-09-25T17:20:51Z)
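Since the entry above reviews the CAM family, here is a minimal Grad-CAM-style sketch under the same illustrative assumptions (an untrained resnet18, a random input, and layer4 as the target convolutional block); it is one common member of the family, not code from the surveyed paper.

```python
# Illustrative Grad-CAM-style sketch: weight the last conv block's feature
# maps by the spatially pooled gradients of the class score.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=None).eval()
feats, grads = {}, {}
layer = model.layer4  # assumed target conv block

# Capture activations on the forward pass and their gradients on backward.
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

image = torch.randn(1, 3, 224, 224)  # stand-in for a real image
score = model(image)[0].max()
score.backward()

weights = grads["a"].mean(dim=(2, 3), keepdim=True)  # pooled gradients
cam = F.relu((weights * feats["a"]).sum(dim=1))      # (1, 7, 7) map
cam = F.interpolate(cam[None], size=(224, 224), mode="bilinear")[0]
```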
- A Brief Review of Explainable Artificial Intelligence in Healthcare [7.844015105790313]
XAI refers to the techniques and methods for building AI applications whose outputs and predictions end users can interpret.
Model explainability and interpretability are vital to the successful deployment of AI models in healthcare practice.
arXiv Detail & Related papers (2023-04-04T05:41:57Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems [108.81683598693539]
Offline reinforcement learning algorithms hold tremendous promise for making it possible to turn large datasets into powerful decision-making engines.
We will aim to provide the reader with an understanding of these challenges, particularly in the context of modern deep reinforcement learning methods.
arXiv Detail & Related papers (2020-05-04T17:00:15Z)