XC: Exploring Quantitative Use Cases for Explanations in 3D Object
Detection
- URL: http://arxiv.org/abs/2210.11590v1
- Date: Thu, 20 Oct 2022 21:02:55 GMT
- Title: XC: Exploring Quantitative Use Cases for Explanations in 3D Object
Detection
- Authors: Sunsheng Gu, Vahdat Abdelzad, Krzysztof Czarnecki
- Abstract summary: We propose a set of measures, named Explanation Concentration (XC) scores, that can be used for downstream tasks.
XC scores quantify the concentration of attributions within the boundaries of detected objects.
We evaluate the effectiveness of XC scores via the task of distinguishing true positive (TP) and false positive (FP) detected objects in the KITTI and Waymo datasets.
- Score: 10.47625686392663
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI (XAI) methods are frequently applied to obtain qualitative
insights about deep models' predictions. However, such insights need to be
interpreted by a human observer to be useful. In this paper, we aim to use
explanations directly to make decisions without human observers. We adopt two
gradient-based explanation methods, Integrated Gradients (IG) and backprop, for
the task of 3D object detection. Then, we propose a set of quantitative
measures, named Explanation Concentration (XC) scores, that can be used for
downstream tasks. These scores quantify the concentration of attributions
within the boundaries of detected objects. We evaluate the effectiveness of XC
scores via the task of distinguishing true positive (TP) and false positive
(FP) detected objects in the KITTI and Waymo datasets. The results demonstrate
an improvement of more than 100% on both datasets compared to other heuristics
such as random guesses and the number of LiDAR points in the bounding box,
raising confidence in XC's potential for application in more use cases. Our
results also indicate that computationally expensive XAI methods like IG may
not be more valuable when used quantitatively compared to simpler methods.
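As a rough illustration of the XC idea (not the authors' released code), the score can be computed as the fraction of total attribution mass that falls inside a detected object's boundaries. The array layout and the axis-aligned 2D box below are simplifying assumptions for the sketch:

```python
import numpy as np

def xc_score(points, attributions, box_min, box_max):
    """Explanation Concentration: fraction of absolute attribution
    mass assigned to points that lie inside the detected box."""
    attributions = np.abs(attributions)
    total = attributions.sum()
    if total == 0:
        return 0.0
    # Boolean mask of points inside the axis-aligned box.
    inside = np.all((points >= box_min) & (points <= box_max), axis=1)
    return float(attributions[inside].sum() / total)

# A TP-like detection: most attribution is concentrated inside the box.
pts = np.array([[0.5, 0.5], [0.2, 0.8], [3.0, 3.0]])
attr = np.array([0.6, 0.3, 0.1])
score = xc_score(pts, attr,
                 box_min=np.array([0.0, 0.0]),
                 box_max=np.array([1.0, 1.0]))
```

A high score suggests the detector's evidence is concentrated on the object itself (TP-like), while a low score suggests the attribution mass is scattered outside the box (FP-like).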
Related papers
- ODExAI: A Comprehensive Object Detection Explainable AI Evaluation [1.338174941551702]
We introduce the Object Detection Explainable AI Evaluation (ODExAI) to assess XAI methods in object detection.
We benchmark a set of XAI methods across two widely used object detectors and standard datasets.
arXiv Detail & Related papers (2025-04-27T14:16:14Z)
- Choose Your Explanation: A Comparison of SHAP and GradCAM in Human Activity Recognition [0.13194391758295113]
This study compares Shapley Additive Explanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM).
We qualitatively and quantitatively compare these methods, focusing on feature importance ranking, interpretability, and model sensitivity through perturbation experiments.
Our research demonstrates how SHAP and Grad-CAM could complement each other to provide more interpretable and actionable model explanations.
arXiv Detail & Related papers (2024-12-20T15:53:25Z)
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates on identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
- Improving Online Lane Graph Extraction by Object-Lane Clustering [106.71926896061686]
We propose an architecture and loss formulation to improve the accuracy of local lane graph estimates.
The proposed method learns to assign the objects to centerlines by considering the centerlines as cluster centers.
We show that our method can achieve significant performance improvements by using the outputs of existing 3D object detection methods.
arXiv Detail & Related papers (2023-07-20T15:21:28Z)
- KECOR: Kernel Coding Rate Maximization for Active 3D Object Detection [48.66703222700795]
We resort to a novel kernel strategy to identify the most informative point clouds to acquire labels.
To accommodate both one-stage (i.e., SECOND) and two-stage detectors, we incorporate the classification entropy tangent to trade off between detection performance and the total number of bounding boxes selected for annotation.
Our results show that approximately 44% box-level annotation costs and 26% computational time are reduced compared to the state-of-the-art method.
arXiv Detail & Related papers (2023-07-16T04:27:03Z)
- OccAM's Laser: Occlusion-based Attribution Maps for 3D Object Detectors on LiDAR Data [8.486063950768694]
We propose a method to generate attribution maps for 3D object detection in LiDAR point clouds.
These maps indicate the importance of each 3D point in predicting the specific objects.
We show a detailed evaluation of the attribution maps and demonstrate that they are interpretable and highly informative.
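The occlusion idea behind such attribution maps can be sketched generically (this is an illustrative simplification, not the OccAM implementation, and the leave-one-out granularity and toy detector below are assumptions): remove a point from the cloud and record how much the detector's confidence drops.

```python
import numpy as np

def occlusion_attribution(points, score_fn):
    """Per-point attribution: drop in detection confidence when the
    point is removed from the cloud (leave-one-out occlusion)."""
    base = score_fn(points)
    attributions = np.empty(len(points))
    for i in range(len(points)):
        reduced = np.delete(points, i, axis=0)  # cloud without point i
        attributions[i] = base - score_fn(reduced)
    return attributions

# Toy stand-in "detector": confidence grows with the number of
# points within unit distance of the origin.
def toy_score(pts):
    return float(np.sum(np.linalg.norm(pts, axis=1) < 1.0)) / 10.0

pts = np.array([[0.1, 0.1, 0.0], [0.2, 0.0, 0.1], [5.0, 5.0, 5.0]])
attr = occlusion_attribution(pts, toy_score)
```

Points whose removal lowers the confidence get positive attribution; irrelevant points get zero. Real implementations occlude groups of points rather than single points to keep the cost tractable.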
arXiv Detail & Related papers (2022-04-13T18:00:30Z)
- Probabilistic and Geometric Depth: Detecting Objects in Perspective [78.00922683083776]
3D object detection is an important capability needed in various practical applications such as driver assistance systems.
Monocular 3D detection, as an economical solution compared to conventional settings relying on binocular vision or LiDAR, has drawn increasing attention recently but still yields unsatisfactory results.
This paper first presents a systematic study on this problem and observes that the current monocular 3D detection problem can be simplified as an instance depth estimation problem.
arXiv Detail & Related papers (2021-07-29T16:30:33Z)
- Evaluating Explainable Artificial Intelligence Methods for Multi-label Deep Learning Classification Tasks in Remote Sensing [0.0]
We develop deep learning models with state-of-the-art performance in benchmark datasets.
Ten XAI methods were employed towards understanding and interpreting models' predictions.
Occlusion, Grad-CAM and Lime were the most interpretable and reliable XAI methods.
arXiv Detail & Related papers (2021-04-03T11:13:14Z)
- Meta-Cognition-Based Simple And Effective Approach To Object Detection [4.68287703447406]
We explore a meta-cognitive learning strategy for object detection to improve generalization ability while at the same time maintaining detection speed.
The experimental results indicate an improvement in absolute precision of 2.6% (minimum), and 4.4% (maximum), with no overhead to inference time.
arXiv Detail & Related papers (2020-12-02T13:36:51Z)
- DecAug: Augmenting HOI Detection via Decomposition [54.65572599920679]
Current algorithms suffer from insufficient training samples and category imbalance within datasets.
We propose an efficient and effective data augmentation method called DecAug for HOI detection.
Experiments show that our method brings up to 3.3 mAP and 1.6 mAP improvements on the V-COCO and HICO-DET datasets.
arXiv Detail & Related papers (2020-10-02T13:59:05Z)
- A Self-Training Approach for Point-Supervised Object Detection and Counting in Crowds [54.73161039445703]
We propose a novel self-training approach that enables a typical object detector trained only with point-level annotations.
During training, we utilize the available point annotations to supervise the estimation of the center points of objects.
Experimental results show that our approach significantly outperforms state-of-the-art point-supervised methods under both detection and counting tasks.
arXiv Detail & Related papers (2020-07-25T02:14:42Z)
- Task-agnostic Out-of-Distribution Detection Using Kernel Density Estimation [10.238403787504756]
We propose a task-agnostic method to perform out-of-distribution (OOD) detection in deep neural networks (DNNs).
We estimate the probability density functions (pdfs) of intermediate features of a pre-trained DNN by performing kernel density estimation (KDE) on the training dataset.
At test time, we evaluate the pdfs on a test sample and produce a confidence score that indicates whether the sample is OOD.
arXiv Detail & Related papers (2020-06-18T17:46:06Z)
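The KDE-based confidence score in the last entry can be sketched with a minimal numpy-only Gaussian KDE (an illustrative simplification: the synthetic features, dimensionality, and fixed bandwidth are assumptions, not the paper's setup):

```python
import numpy as np

def kde_score(train_feats, x, bandwidth=0.5):
    """Gaussian KDE density of x under the training-feature
    distribution; higher density means more in-distribution."""
    diffs = train_feats - x                       # (N, d) differences
    sq = np.sum(diffs ** 2, axis=1)               # squared distances
    d = train_feats.shape[1]
    norm = (2.0 * np.pi * bandwidth ** 2) ** (d / 2)
    return float(np.mean(np.exp(-sq / (2.0 * bandwidth ** 2))) / norm)

# Stand-in for intermediate DNN features of the training set.
rng = np.random.default_rng(0)
train_feats = rng.normal(0.0, 1.0, size=(500, 2))

in_dist = kde_score(train_feats, np.zeros(2))      # near the training mass
far_ood = kde_score(train_feats, np.full(2, 10.0)) # far from it
```

An in-distribution sample lands in a high-density region and gets a high score, while an OOD sample far from the training features gets a score near zero; thresholding this score yields the OOD decision.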
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.