Towards Better Explanations for Object Detection
- URL: http://arxiv.org/abs/2306.02744v2
- Date: Tue, 6 Jun 2023 04:30:41 GMT
- Title: Towards Better Explanations for Object Detection
- Authors: Van Binh Truong, Truong Thanh Hung Nguyen, Vo Thanh Khang Nguyen, Quoc Khanh Nguyen, Quoc Hung Cao
- Abstract summary: This paper proposes D-CLOSE, a method for explaining the decisions of any object detection model.
Tests on the MS-COCO dataset with the YOLOX model show that the method outperforms D-RISE.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in Artificial Intelligence (AI) have promoted its use
in almost every field. The growing complexity of deep neural networks (DNNs)
makes it increasingly difficult, and increasingly important, to explain the
network's inner workings and decisions. However, most current techniques for
explaining DNNs focus mainly on classification tasks. This paper proposes
D-CLOSE, a method for explaining the decisions of any object detection model.
To closely track the model's behavior, we use multiple levels of segmentation
on the image and a process to combine them. Tests on the MS-COCO dataset with
the YOLOX model show that our method outperforms D-RISE, producing
higher-quality and less noisy explanations.
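To make the approach concrete, here is a minimal sketch of a D-CLOSE-style, perturbation-based explanation: superpixel masks are generated at several segmentation levels, the detector is run on masked copies of the image, and score-weighted masks are combined across levels. The level sizes, the simple averaging used to combine levels, and the `run_detector` stub are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch of a multi-level, perturbation-based detector explanation.
import numpy as np
from skimage.segmentation import slic

def run_detector(image):
    """Hypothetical stub: returns a confidence score in [0, 1] for the
    target detection (e.g. an IoU-weighted class score of the best box)."""
    raise NotImplementedError

def explain(image, levels=(50, 150, 400), n_samples=200, p_keep=0.5, rng=None):
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float64)
    for n_segments in levels:                      # multiple segmentation levels
        segments = slic(image, n_segments=n_segments, start_label=0)
        n = segments.max() + 1
        level_map = np.zeros((h, w))
        for _ in range(n_samples):
            keep = rng.random(n) < p_keep          # randomly keep superpixels
            mask = keep[segments]                  # boolean (h, w) mask
            score = run_detector(image * mask[..., None])
            level_map += score * mask              # weight mask by detector score
        saliency += level_map / n_samples          # combine levels (simple mean)
    return saliency / len(levels)
```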
Related papers
- CAManim: Animating end-to-end network activation maps [0.2509487459755192]
We propose a novel XAI visualization method denoted CAManim that seeks to broaden and focus end-user understanding of CNN predictions.
We additionally propose a novel quantitative assessment that expands upon the Remove and Debias (ROAD) metric.
This builds upon prior research to address the increasing demand for interpretable, robust, and transparent model assessment methodology.
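For intuition, here is a hedged sketch of the raw material a CAManim-style animation traverses: one coarse activation map per convolutional layer, collected with forward hooks. The backbone choice, channel averaging, and upsampling are illustrative assumptions, not the paper's exact procedure.

```python
# Collect a per-layer activation map for every conv layer of a CNN.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()
activations = {}

def make_hook(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

for name, module in model.named_modules():
    if isinstance(module, torch.nn.Conv2d):
        module.register_forward_hook(make_hook(name))

x = torch.randn(1, 3, 224, 224)        # stand-in input image
with torch.no_grad():
    model(x)

# One coarse map per conv layer: mean over channels, upsampled to input size.
frames = {
    name: F.interpolate(act.mean(1, keepdim=True), size=(224, 224),
                        mode="bilinear", align_corners=False)
    for name, act in activations.items()
}
```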
arXiv Detail & Related papers (2023-12-19T01:07:36Z) - Interpretability of an Interaction Network for identifying $H
\rightarrow b\bar{b}$ jets [4.553120911976256]
In recent times, AI models based on deep neural networks are becoming increasingly popular for many of these applications.
We explore the interpretability of AI models by examining an Interaction Network (IN) model designed to identify boosted $H \rightarrow b\bar{b}$ jets.
We additionally illustrate the activity of hidden layers within the IN model as Neural Activation Pattern (NAP) diagrams.
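As a rough illustration of a Neural Activation Pattern-style summary, the sketch below averages hidden-unit activations per class in a toy MLP; the network, features, and labels are stand-ins for the IN model and jet data.

```python
# Per-class mean hidden activations: the kind of summary a NAP diagram shows.
import torch
import torch.nn as nn

mlp = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
x = torch.randn(512, 16)                     # stand-in jet features
labels = torch.randint(0, 2, (512,))         # stand-in signal vs background

hidden = mlp[1](mlp[0](x)).detach()          # post-ReLU hidden activations
nap = torch.stack([hidden[labels == c].mean(0) for c in (0, 1)])
print(nap.shape)  # (2 classes, 32 hidden units): one activation row per class
```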
arXiv Detail & Related papers (2022-11-23T08:38:52Z) - A Detailed Study of Interpretability of Deep Neural Network based Top
Taggers [3.8541104292281805]
Recent developments in explainable AI (XAI) allow researchers to explore the inner workings of deep neural networks (DNNs).
We explore the interpretability of models designed to identify jets coming from top quark decay in high-energy proton-proton collisions at the Large Hadron Collider (LHC).
Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretation of these models.
arXiv Detail & Related papers (2022-10-09T23:02:42Z) - Adaptive Convolutional Dictionary Network for CT Metal Artifact
Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
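The core convolutional-dictionary idea can be sketched in a few lines: an artifact-free image is modeled as feature maps convolved with a small set of kernels, which ACDNet adapts per input. The shapes, the fixed coefficients, and the reconstruction rule below are simplified assumptions, not the paper's full network.

```python
# Convolutional dictionary model: image ~ sum of (feature map * kernel).
import torch
import torch.nn.functional as F

K, k = 8, 5                                   # dictionary size, kernel size
dictionary = torch.randn(K, 1, k, k, requires_grad=True)   # learnable kernels
coeffs = torch.randn(1, K, 64, 64)            # per-image feature maps (inferred)

# Reconstruction: each feature map is convolved with its kernel, then summed.
recon = F.conv2d(coeffs, dictionary.transpose(0, 1), padding=k // 2)
print(recon.shape)                            # (1, 1, 64, 64)
```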
arXiv Detail & Related papers (2022-05-16T06:49:36Z) - Comparison Analysis of Traditional Machine Learning and Deep Learning
Techniques for Data and Image Classification [62.997667081978825]
The purpose of the study is to analyse and compare the most common machine learning and deep learning techniques used for 2D object classification tasks in computer vision.
Firstly, we will present the theoretical background of the Bag of Visual Words model and Deep Convolutional Neural Networks (DCNNs).
Secondly, we will implement a Bag of Visual Words model and the VGG16 CNN architecture.
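A minimal Bag of Visual Words pipeline, of the kind the study compares against DCNNs, can be sketched as local descriptors, a k-means codebook, and per-image word histograms. The ORB descriptor (standing in for SIFT) and the codebook size are illustrative choices.

```python
# Bag of Visual Words: descriptors -> k-means codebook -> word histograms.
import cv2
import numpy as np
from sklearn.cluster import KMeans

orb = cv2.ORB_create()

def descriptors(image_gray):
    _, desc = orb.detectAndCompute(image_gray, None)
    return desc if desc is not None else np.empty((0, 32), np.uint8)

def fit_codebook(images_gray, k=64):
    stacked = np.vstack([descriptors(im) for im in images_gray]).astype(np.float32)
    return KMeans(n_clusters=k, n_init=10).fit(stacked)

def bovw_histogram(image_gray, codebook):
    desc = descriptors(image_gray).astype(np.float32)
    if len(desc) == 0:
        return np.zeros(codebook.n_clusters)
    words = codebook.predict(desc)             # assign each descriptor a word
    hist, _ = np.histogram(words, bins=codebook.n_clusters,
                           range=(0, codebook.n_clusters))
    return hist / hist.sum()                   # normalized word frequencies
```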
arXiv Detail & Related papers (2022-04-11T11:34:43Z) - Scene Understanding for Autonomous Driving [0.0]
We study the behaviour of different configurations of RetinaNet, Faster R-CNN and Mask R-CNN presented in Detectron2.
We observe a significant improvement in performance after fine-tuning these models on the datasets of interest.
We run inference in unusual situations using out-of-context datasets and present interesting results.
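A hedged sketch of the fine-tuning workflow in Detectron2, using a model-zoo Faster R-CNN config; the dataset names, class count, and schedule values are placeholders for a dataset registered beforehand.

```python
# Fine-tune a COCO-pretrained Detectron2 detector on a custom dataset.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultTrainer

cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url(
    "COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")    # COCO-pretrained weights
cfg.DATASETS.TRAIN = ("my_dataset_train",)            # hypothetical, registered
cfg.DATASETS.TEST = ("my_dataset_val",)
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3                   # placeholder class count
cfg.SOLVER.BASE_LR = 2.5e-4
cfg.SOLVER.MAX_ITER = 3000

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```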
arXiv Detail & Related papers (2021-05-11T09:50:05Z) - Densely Nested Top-Down Flows for Salient Object Detection [137.74130900326833]
This paper revisits the role of top-down modeling in salient object detection.
It designs a novel densely nested top-down flows (DNTDF)-based framework.
In every stage of the DNTDF, features from higher levels are read in via progressive compression shortcut paths (PCSP).
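One plausible reading of such a shortcut path, sketched below: a higher-level feature map is channel-compressed with a 1x1 convolution, upsampled, and merged into the lower-level features. The channel sizes and the additive merge are assumptions, not the paper's exact design.

```python
# One top-down stage: compress high-level features, upsample, merge into low.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownStage(nn.Module):
    def __init__(self, high_ch, low_ch):
        super().__init__()
        self.compress = nn.Conv2d(high_ch, low_ch, 1)   # 1x1 channel compression

    def forward(self, high, low):
        high = F.interpolate(self.compress(high), size=low.shape[-2:],
                             mode="bilinear", align_corners=False)
        return low + high                                # shortcut merge

stage = TopDownStage(256, 128)
out = stage(torch.randn(1, 256, 16, 16), torch.randn(1, 128, 32, 32))
```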
arXiv Detail & Related papers (2021-02-18T03:14:02Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aiming to attain Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
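As a tiny illustration of why such priors matter, the sketch below measures how far an ordinary CNN is from invariance under 90-degree rotations; the model and the rotation group are illustrative choices, not a construction from the survey.

```python
# Measure a plain CNN's deviation from 90-degree rotation invariance.
import torch
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    base = model(x)
    errs = [(model(torch.rot90(x, k, dims=(2, 3))) - base).abs().mean().item()
            for k in range(1, 4)]
print(errs)   # nonzero: a vanilla CNN is not invariant to these rotations
```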
arXiv Detail & Related papers (2020-06-30T14:56:05Z) - An Adversarial Approach for Explaining the Predictions of Deep Neural
Networks [9.645196221785694]
We present a novel algorithm for explaining the predictions of a deep neural network (DNN) using adversarial machine learning.
Our approach identifies the relative importance of input features in relation to the predictions based on the behavior of an adversarial attack on the DNN.
Our analysis enables us to produce consistent and efficient explanations.
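One simple instantiation of the idea, sketched below: perturb the input along the loss gradient (FGSM-style) and rank features by gradient magnitude. This is a hedged illustration, not necessarily the paper's exact algorithm.

```python
# Rank input features by sensitivity to an FGSM-style adversarial direction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
x = torch.randn(1, 10, requires_grad=True)   # stand-in input
target = torch.tensor([1])                   # stand-in label

loss = nn.functional.cross_entropy(model(x), target)
loss.backward()

importance = x.grad.abs().squeeze(0)         # large gradient => sensitive feature
ranking = importance.argsort(descending=True)
```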
arXiv Detail & Related papers (2020-05-20T18:06:53Z) - Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.