Overview of Class Activation Maps for Visualization Explainability
- URL: http://arxiv.org/abs/2309.14304v1
- Date: Mon, 25 Sep 2023 17:20:51 GMT
- Title: Overview of Class Activation Maps for Visualization Explainability
- Authors: Anh Pham Thi Minh
- Abstract summary: Class Activation Maps (CAMs) enhance interpretability and provide insight into the decision-making process of deep learning models.
This work presents a comprehensive overview of the evolution of Class Activation Maps over time.
It also explores the metrics used for evaluating CAMs and introduces auxiliary techniques to improve the saliency of these methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research in deep learning methodology has led to a variety of complex
modelling techniques in computer vision (CV) that reach or even surpass
human performance. Although these black-box deep learning models have obtained
astounding results, they are limited in their interpretability and transparency,
qualities that are critical for including learning machines in sensitive
decision-support systems involving human supervision. Hence, the
development of explainable techniques for computer vision (XCV) has recently
attracted increasing attention. In the realm of XCV, Class Activation Maps
(CAMs) have become widely recognized and utilized for enhancing
interpretability and insights into the decision-making process of deep learning
models. This work presents a comprehensive overview of the evolution of Class
Activation Map methods over time. It also explores the metrics used for
evaluating CAMs and introduces auxiliary techniques to improve the saliency of
these methods. The overview concludes by proposing potential avenues for future
research in this evolving field.
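For readers new to the topic, the following is a minimal sketch of the original CAM formulation (the weighted sum M_c(x, y) = sum_k w_k^c * f_k(x, y) of Zhou et al., 2016), assuming a CAM-compatible network whose last convolutional layer feeds global average pooling and a single linear classifier; the function name, array shapes, and toy inputs are illustrative assumptions, not code from the paper.

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """Original CAM (Zhou et al., 2016): weight the last conv layer's
    feature maps by the classifier weights of the target class.

    features:   (K, H, W) activations of the final convolutional layer
    fc_weights: (num_classes, K) weights of the linear classifier that
                follows global average pooling
    class_idx:  index c of the class to explain
    Returns the (H, W) map M_c(x, y) = sum_k w_k^c * f_k(x, y),
    min-max normalized to [0, 1] for visualization.
    """
    cam = np.tensordot(fc_weights[class_idx], features, axes=([0], [0]))
    cam -= cam.min()
    cam /= cam.max() + 1e-8
    return cam

# Toy usage with illustrative shapes: 512 feature maps of size 7x7,
# a 1000-way classifier, explaining a hypothetical class index 285.
features = np.random.rand(512, 7, 7).astype(np.float32)
fc_weights = np.random.rand(1000, 512).astype(np.float32)
heatmap = class_activation_map(features, fc_weights, class_idx=285)
print(heatmap.shape)  # (7, 7); upsample to the input size for overlay
```

In practice the map is bilinearly upsampled to the input resolution and overlaid on the image; later variants such as Grad-CAM replace the classifier weights with gradient-derived ones so the same idea applies to architectures without the pooling-plus-linear head.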
Related papers
- Learning the Bitter Lesson: Empirical Evidence from 20 Years of CVPR Proceedings [1.3812010983144802]
This study examines the alignment of Conference on Computer Vision and Pattern Recognition (CVPR) research with the principles of the "bitter lesson" proposed by Rich Sutton.
We analyze two decades of CVPR abstracts and titles using large language models (LLMs) to assess the field's adoption of these principles.
arXiv Detail & Related papers (2024-10-12T21:06:13Z)
- Enhancing Generative Class Incremental Learning Performance with Model Forgetting Approach [50.36650300087987]
This study presents a novel approach to Generative Class Incremental Learning (GCIL) by introducing a forgetting mechanism.
We have found that integrating the forgetting mechanism significantly enhances the models' performance in acquiring new knowledge.
arXiv Detail & Related papers (2024-03-27T05:10:38Z)
- Masked Modeling for Self-supervised Representation Learning on Vision and Beyond [69.64364187449773]
Masked modeling has emerged as a distinctive approach that involves predicting parts of the original data that are proportionally masked during training.
We elaborate on the details of techniques within masked modeling, including diverse masking strategies, recovering targets, network architectures, and more.
We conclude by discussing the limitations of current techniques and pointing out several potential avenues for advancing masked modeling research.
arXiv Detail & Related papers (2023-12-31T12:03:21Z)
- Looking deeper into interpretable deep learning in neuroimaging: a comprehensive survey [20.373311465258393]
This paper comprehensively reviews interpretable deep learning models in the neuroimaging domain.
We discuss how multiple recent neuroimaging studies leveraged model interpretability to capture anatomical and functional brain alterations most relevant to model predictions.
arXiv Detail & Related papers (2023-07-14T04:50:04Z)
- Deep Active Learning for Computer Vision: Past and Future [50.19394935978135]
Despite its indispensable role in developing AI models, research on active learning is not as intensive as other research directions.
By addressing data automation challenges and coping with automated machine learning systems, active learning will facilitate the democratization of AI technologies.
arXiv Detail & Related papers (2022-11-27T13:07:14Z)
- Attribution-based XAI Methods in Computer Vision: A Review [5.076419064097734]
We provide a comprehensive survey of attribution-based XAI methods in computer vision.
We review the existing literature for gradient-based, perturbation-based, and contrastive methods for XAI.
arXiv Detail & Related papers (2022-11-27T05:56:36Z)
- Dissecting Self-Supervised Learning Methods for Surgical Computer Vision [51.370873913181605]
Self-Supervised Learning (SSL) methods have begun to gain traction in the general computer vision community.
The effectiveness of SSL methods in more complex and impactful domains, such as medicine and surgery, remains limited and unexplored.
We present an extensive analysis of the performance of these methods on the Cholec80 dataset for two fundamental and popular tasks in surgical context understanding, phase recognition and tool presence detection.
arXiv Detail & Related papers (2022-07-01T14:17:11Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview of techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt post-hoc methods to tackle the interpretability issue for deep learning based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model (a minimal sketch of one LRP rule follows this list).
Experimental results show the feasibility of using the LRP method for interpreting the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z)
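Since layer-wise relevance propagation (LRP) comes up in the knowledge-tracing entry above, here is a minimal sketch of the LRP-epsilon rule for a single linear layer; the function name, shapes, and toy values are illustrative assumptions rather than code from any of the listed papers, and real applications (including the RNN-based DLKT setting) chain this rule backwards through every layer of the model.

```python
import numpy as np

def lrp_linear(a, W, R_out, eps=1e-6):
    """LRP-epsilon rule for one linear layer z = W @ a (bias omitted).

    a:     (J,) input activations to the layer
    W:     (K, J) layer weight matrix
    R_out: (K,) relevance arriving at the layer's outputs
    Returns the (J,) relevance redistributed onto the inputs:
        R_j = a_j * sum_k W[k, j] * R_out[k] / (z_k + eps * sign(z_k))
    """
    z = W @ a                                    # pre-activations, shape (K,)
    z = z + eps * np.where(z >= 0, 1.0, -1.0)    # epsilon stabilizer
    s = R_out / z                                # shape (K,)
    return a * (W.T @ s)                         # shape (J,)

# Toy usage: relevance starts as the explained class's logit and is
# propagated back through one layer; relevance is (approximately) conserved.
a = np.random.rand(64)
W = np.random.randn(10, 64)
R_out = np.zeros(10)
R_out[3] = (W @ a)[3]                            # relevance of class 3's logit
R_in = lrp_linear(a, W, R_out)
print(R_in.shape, R_in.sum())                    # (64,), roughly R_out.sum()
```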
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.