ODAM: Gradient-based instance-specific visual explanations for object
detection
- URL: http://arxiv.org/abs/2304.06354v1
- Date: Thu, 13 Apr 2023 09:20:26 GMT
- Title: ODAM: Gradient-based instance-specific visual explanations for object
detection
- Authors: Chenyang Zhao and Antoni B. Chan
- Abstract summary: We propose gradient-weighted Object Detector Activation Maps (ODAM).
ODAM produces heat maps that show the influence of regions on the detector's decision for each predicted attribute.
We also propose Odam-NMS, which uses the model's explanation for each prediction to distinguish duplicate detections.
- Score: 51.476702316759635
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose the gradient-weighted Object Detector Activation Maps (ODAM), a
visualized explanation technique for interpreting the predictions of object
detectors. Utilizing the gradients of detector targets flowing into the
intermediate feature maps, ODAM produces heat maps that show the influence of
regions on the detector's decision for each predicted attribute. Compared with
previous class activation mapping (CAM) works, ODAM generates
instance-specific explanations rather than class-specific ones. We show that
ODAM is applicable to both one-stage detectors and two-stage detectors with
different types of detector backbones and heads, and produces higher-quality
visual explanations than the state of the art, in terms of both effectiveness and efficiency.
We next propose a training scheme, Odam-Train, which improves the detector's
ability to discriminate objects in its explanations by encouraging consistent
explanations for detections on the same object and distinct explanations for
detections on different objects. Based on the heat maps produced by ODAM with
Odam-Train, we propose Odam-NMS, which uses the model's explanation for each
prediction to distinguish duplicate detections. We present a detailed analysis of the visualized
explanations of detectors and carry out extensive experiments to validate the
effectiveness of the proposed ODAM.
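As a rough sketch of the idea described in the abstract (not the authors' implementation): given the intermediate feature maps and the gradients of one detection's predicted attribute with respect to them, an instance-specific heat map can be formed by weighting each activation with its own gradient per spatial location, summing over channels, and rectifying. The array shapes and the element-wise weighting below are illustrative assumptions; real ODAM runs inside a detector's computation graph.

```python
import numpy as np

def odam_heatmap(feature_maps, gradients):
    """Sketch of an ODAM-style instance-specific heat map.

    feature_maps: (K, H, W) intermediate activations.
    gradients:    (K, H, W) gradients of one detection's predicted
                  attribute (e.g. its class score) w.r.t. those activations.
    Unlike classic CAM/Grad-CAM, the gradient weights are kept
    per-location rather than pooled over space, so each detected
    instance gets its own map.
    """
    weighted = gradients * feature_maps              # element-wise weighting
    heatmap = np.maximum(weighted.sum(axis=0), 0.0)  # sum channels, then ReLU
    if heatmap.max() > 0:
        heatmap /= heatmap.max()                     # scale into [0, 1]
    return heatmap

# toy example: 4 channels on an 8x8 feature map
rng = np.random.default_rng(0)
A = rng.random((4, 8, 8))        # stand-in activations
dYdA = rng.standard_normal((4, 8, 8))  # stand-in gradients
H = odam_heatmap(A, dYdA)
```

In a real detector, `dYdA` would be obtained by backpropagating one detection's score (or box coordinate) to the chosen feature layer.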
Related papers
- On the Inherent Robustness of One-Stage Object Detection against Out-of-Distribution Data [6.7236795813629]
We propose a novel detection algorithm for detecting unknown objects in image data.
It exploits supervised dimensionality reduction techniques to mitigate the effects of the curse of dimensionality on the features extracted by the model.
It utilizes high-resolution feature maps to identify potential unknown objects in an unsupervised fashion.
arXiv Detail & Related papers (2024-11-07T10:15:25Z)
- DExT: Detector Explanation Toolkit [5.735035463793008]
State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations.
We propose an open-source Detector Explanation Toolkit (DExT) which implements a holistic explanation for all detector decisions.
arXiv Detail & Related papers (2022-12-21T23:28:53Z)
- Explaining YOLO: Leveraging Grad-CAM to Explain Object Detections [2.0496125856846605]
We show how to integrate Grad-CAM into the model architecture and analyze the results.
We show how to compute attribution-based explanations for individual detections and find that the normalization of the results has a great impact on their interpretation.
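The normalization effect noted above can be reproduced in a few lines: per-map min-max normalization discards absolute attribution magnitude, so a weak explanation becomes visually indistinguishable from a strong one. The two toy maps below are invented for illustration only.

```python
import numpy as np

def minmax_normalize(m):
    """Stretch an attribution map into [0, 1] (common before plotting)."""
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

strong = np.array([[0.00, 0.90], [0.10, 1.00]])  # strong raw evidence
weak   = np.array([[0.00, 0.09], [0.01, 0.10]])  # 10x weaker raw evidence

# After per-map min-max normalization the two maps become identical,
# hiding the 10x difference in raw attribution magnitude.
print(np.allclose(minmax_normalize(strong), minmax_normalize(weak)))  # True
```

This is why the choice of normalization (per-map, per-image, or global across detections) changes how the resulting heat maps should be read.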
arXiv Detail & Related papers (2022-11-22T09:19:13Z)
- OccAM's Laser: Occlusion-based Attribution Maps for 3D Object Detectors on LiDAR Data [8.486063950768694]
We propose a method to generate attribution maps for 3D object detection in LiDAR point clouds.
These maps indicate the importance of each 3D point in predicting the specific objects.
We show a detailed evaluation of the attribution maps and demonstrate that they are interpretable and highly informative.
arXiv Detail & Related papers (2022-04-13T18:00:30Z)
- Perceiving Traffic from Aerial Images [86.994032967469]
We propose an object detection method called Butterfly Detector that is tailored to detect objects in aerial images.
We evaluate our Butterfly Detector on two publicly available UAV datasets (UAVDT and VisDrone 2019) and show that it outperforms previous state-of-the-art methods while remaining real-time.
arXiv Detail & Related papers (2020-09-16T11:37:43Z)
- Black-box Explanation of Object Detectors via Saliency Maps [66.745167677293]
We propose D-RISE, a method for generating visual explanations for the predictions of object detectors.
We show that D-RISE can be easily applied to different object detectors including one-stage detectors such as YOLOv3 and two-stage detectors such as Faster-RCNN.
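For intuition, D-RISE's black-box recipe (randomly mask the input, score each mask by how well the detector's output on the masked image matches the target detection, and average the masks weighted by those scores) can be sketched as follows. The `detection_score` stub is a hypothetical stand-in for a real detector plus the paper's detection-similarity metric.

```python
import numpy as np

rng = np.random.default_rng(0)

def detection_score(image):
    # Hypothetical stand-in for a black-box detector: it responds to
    # intensity in a fixed "object" region (rows 2-4, cols 2-4). A real
    # D-RISE run would instead score the similarity between the target
    # detection and the detections found on the masked image.
    return image[2:5, 2:5].mean()

def drise_saliency(image, n_masks=500, p_keep=0.5):
    """Average random keep-masks weighted by the detector's score."""
    H, W = image.shape
    saliency = np.zeros((H, W))
    total = 0.0
    for _ in range(n_masks):
        mask = (rng.random((H, W)) < p_keep).astype(float)
        score = detection_score(image * mask)
        saliency += score * mask
        total += score
    return saliency / max(total, 1e-8)

img = np.ones((8, 8))
sal = drise_saliency(img)
# pixels inside the mocked object region should accumulate more weight
# than pixels outside it
```

Because only inputs and outputs are touched, the same loop applies unchanged to one-stage and two-stage detectors, which is the point of the method.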
arXiv Detail & Related papers (2020-06-05T02:13:35Z)
- Adaptive Object Detection with Dual Multi-Label Prediction [78.69064917947624]
We propose a novel end-to-end unsupervised deep domain adaptation model for adaptive object detection.
The model exploits multi-label prediction to reveal the object category information in each image.
We introduce a prediction consistency regularization mechanism to assist object detection.
arXiv Detail & Related papers (2020-03-29T04:23:22Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
- EHSOD: CAM-Guided End-to-end Hybrid-Supervised Object Detection with Cascade Refinement [53.69674636044927]
We present EHSOD, an end-to-end hybrid-supervised object detection system.
It can be trained in one shot on both fully and weakly-annotated data.
It achieves comparable results on multiple object detection benchmarks with only 30% fully-annotated data.
arXiv Detail & Related papers (2020-02-18T08:04:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.