DExT: Detector Explanation Toolkit
- URL: http://arxiv.org/abs/2212.11409v2
- Date: Sun, 4 Jun 2023 18:03:15 GMT
- Title: DExT: Detector Explanation Toolkit
- Authors: Deepan Chakravarthi Padmanabhan, Paul G. Plöger, Octavio Arriaga,
Matias Valdenegro-Toro
- Abstract summary: State-of-the-art object detectors are treated as black boxes due to their highly non-linear internal computations.
We propose an open-source Detector Explanation Toolkit (DExT) which implements a holistic explanation for all detector decisions.
- Score: 5.735035463793008
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: State-of-the-art object detectors are treated as black boxes due to their
highly non-linear internal computations. Even with unprecedented advancements
in detector performance, the inability to explain how their outputs are
generated limits their use in safety-critical applications. Previous work fails
to produce explanations for both bounding box and classification decisions, and
generally makes individual explanations for various detectors. In this paper, we
propose an open-source Detector Explanation Toolkit (DExT) which implements the
proposed approach to generate a holistic explanation for all detector decisions
using certain gradient-based explanation methods. We suggest various
multi-object visualization methods to merge the explanations of multiple
objects detected in an image as well as the corresponding detections in a
single image. The quantitative evaluation shows that the Single Shot MultiBox
Detector (SSD) is more faithfully explained compared to other detectors
regardless of the explanation methods. Both quantitative and human-centric
evaluations identify that SmoothGrad with Guided Backpropagation (GBP) provides
more trustworthy explanations among selected methods across all detectors. We
expect that DExT will motivate practitioners to evaluate object detectors from
the interpretability perspective by explaining both bounding box and
classification decisions.
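As a rough illustration of the gradient-based explanation methods DExT builds on, the sketch below computes a plain SmoothGrad saliency map for one detection score: input gradients are averaged over several noise-perturbed copies of the image. Guided Backpropagation would additionally override the backward pass of every ReLU so that only positive gradients flow, which is omitted here; the toy model and function names are hypothetical and do not reflect the actual DExT API.

```python
# Minimal SmoothGrad-style saliency sketch for a single detector decision.
# The model below is a hypothetical stand-in, not a real object detector.
import torch

def smoothgrad_saliency(model, image, target_index, n_samples=25, noise_std=0.1):
    """Average input gradients of one output score over noisy copies of the image."""
    grads = torch.zeros_like(image)
    for _ in range(n_samples):
        noisy = (image + noise_std * torch.randn_like(image)).requires_grad_(True)
        score = model(noisy.unsqueeze(0))[0, target_index]  # one box/class decision
        score.backward()
        grads += noisy.grad
    return (grads / n_samples).abs().sum(dim=0)  # collapse channels into a heat map

# Toy "detector head" mapping an image to ten decision scores.
toy_model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(3, 32, 32)
saliency = smoothgrad_saliency(toy_model, image, target_index=3)
print(saliency.shape)  # torch.Size([32, 32])
```

In a real detector the target score would be one output of a specific detection (a class score or a box coordinate), which is how an explanation can be produced per decision rather than per image.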
Related papers
- On the Inherent Robustness of One-Stage Object Detection against Out-of-Distribution Data [6.7236795813629]
We propose a novel detection algorithm for detecting unknown objects in image data.
It exploits supervised dimensionality reduction techniques to mitigate the effects of the curse of dimensionality on the features extracted by the model.
It utilizes high-resolution feature maps to identify potential unknown objects in an unsupervised fashion.
arXiv Detail & Related papers (2024-11-07T10:15:25Z)
- Bayesian Detector Combination for Object Detection with Crowdsourced Annotations [49.43709660948812]
Acquiring fine-grained object detection annotations in unconstrained images is time-consuming, expensive, and prone to noise.
We propose a novel Bayesian Detector Combination (BDC) framework to more effectively train object detectors with noisy crowdsourced annotations.
BDC is model-agnostic, requires no prior knowledge of the annotators' skill level, and seamlessly integrates with existing object detection models.
arXiv Detail & Related papers (2024-07-10T18:00:54Z)
- Transcending Forgery Specificity with Latent Space Augmentation for Generalizable Deepfake Detection [57.646582245834324]
We propose a simple yet effective deepfake detector called LSDA.
It is based on the idea that representations exposed to a wider variety of forgeries should be able to learn a more generalizable decision boundary.
We show that our proposed method is surprisingly effective and transcends state-of-the-art detectors across several widely used benchmarks.
arXiv Detail & Related papers (2023-11-19T09:41:10Z)
- Linear Object Detection in Document Images using Multiple Object Tracking [58.720142291102135]
Linear objects convey substantial information about document structure.
Many approaches can recover some vector representation, but only one closed-source technique, introduced in 1994, achieves accurate instance segmentation of linear objects.
We propose a framework for accurate instance segmentation of linear objects in document images using Multiple Object Tracking.
arXiv Detail & Related papers (2023-05-26T14:22:03Z)
- ODAM: Gradient-based instance-specific visual explanations for object detection [51.476702316759635]
We propose gradient-weighted Object Detector Activation Maps (ODAM).
ODAM produces heat maps that show the influence of regions on the detector's decision for each predicted attribute.
We propose Odam-NMS, which considers the information of the model's explanation for each prediction to distinguish duplicate detected objects.
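For intuition only, the fragment below sketches a generic gradient-weighted activation map in the spirit of Grad-CAM: channel-wise gradient means weight a convolutional feature map, yielding a heat map for one predicted score. ODAM's actual per-instance formulation is not reproduced here, and the toy backbone and head are hypothetical stand-ins for a real detector.

```python
# Generic gradient-weighted activation map (Grad-CAM style), not ODAM itself.
import torch
import torch.nn.functional as F

# Hypothetical toy backbone and head standing in for a detector.
backbone = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU())
head = torch.nn.Sequential(torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
                           torch.nn.Linear(8, 5))  # five toy "detection" scores

image = torch.rand(1, 3, 64, 64)
features = backbone(image)
features.retain_grad()                  # keep gradients of the feature map
score = head(features)[0, 2]            # score of one predicted attribute
score.backward()

weights = features.grad.mean(dim=(2, 3), keepdim=True)   # channel importances
heat_map = F.relu((weights * features).sum(dim=1))        # (1, H, W)
heat_map = F.interpolate(heat_map.unsqueeze(1), size=image.shape[-2:],
                         mode="bilinear", align_corners=False)
print(heat_map.shape)  # torch.Size([1, 1, 64, 64]) heat map over the input
```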
arXiv Detail & Related papers (2023-04-13T09:20:26Z)
- Model-agnostic explainable artificial intelligence for object detection in image data [8.042562891309414]
We propose a black-box explanation method named Black-box Object Detection Explanation by Masking (BODEM).
We propose a hierarchical random masking framework in which coarse-grained masks are used in lower levels to find salient regions within an image.
Experiments on various object detection datasets and models showed that BODEM can effectively explain the behavior of object detectors.
arXiv Detail & Related papers (2023-03-30T09:29:03Z)
- ORF-Net: Deep Omni-supervised Rib Fracture Detection from Chest CT Scans [47.7670302148812]
Radiologists need to investigate and annotate rib fractures on a slice-by-slice basis.
We propose a novel omni-supervised object detection network, which can exploit multiple different forms of annotated data.
Our proposed method outperforms other state-of-the-art approaches consistently.
arXiv Detail & Related papers (2022-07-05T07:06:57Z)
- Black-box Explanation of Object Detectors via Saliency Maps [66.745167677293]
We propose D-RISE, a method for generating visual explanations for the predictions of object detectors.
We show that D-RISE can be easily applied to different object detectors including one-stage detectors such as YOLOv3 and two-stage detectors such as Faster-RCNN.
arXiv Detail & Related papers (2020-06-05T02:13:35Z)
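As a rough sketch of this masking-based, black-box family of explanations, the code below averages random binary masks weighted by the score a model assigns to the masked image (RISE-style). D-RISE additionally weights each mask by the similarity between the detector's outputs on the masked image and a target detection vector, which is omitted here; the scoring function is a hypothetical stand-in for a real detector.

```python
# Simplified RISE-style masking saliency; D-RISE's detection-similarity
# weighting is omitted. The scoring function is a hypothetical stand-in.
import numpy as np

def masking_saliency(score_fn, image, n_masks=500, grid=8, keep_prob=0.5, rng=None):
    """Average random binary masks weighted by the score of the masked image."""
    rng = np.random.default_rng(0) if rng is None else rng
    h, w = image.shape[:2]                # assumes h and w are divisible by grid
    saliency = np.zeros((h, w))
    for _ in range(n_masks):
        coarse = (rng.random((grid, grid)) < keep_prob).astype(float)
        mask = np.kron(coarse, np.ones((h // grid, w // grid)))  # upsample grid
        saliency += score_fn(image * mask[..., None]) * mask
    return saliency / n_masks

# Hypothetical scorer: mean intensity inside a fixed "object" box.
score_fn = lambda img: img[16:48, 16:48].mean()
image = np.random.rand(64, 64, 3)
heat = masking_saliency(score_fn, image)
print(heat.shape)  # (64, 64)
```

Regions whose masks consistently coincide with high scores accumulate high saliency, which is what makes this style of explanation applicable to detectors exposed only as black boxes.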
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.