Uncertainty-Aware Knowledge Distillation for Compact and Efficient 6DoF Pose Estimation
- URL: http://arxiv.org/abs/2503.13053v1
- Date: Mon, 17 Mar 2025 10:56:30 GMT
- Title: Uncertainty-Aware Knowledge Distillation for Compact and Efficient 6DoF Pose Estimation
- Authors: Nassim Ali Ousalah, Anis Kacem, Enjie Ghorbel, Emmanuel Koumandakis, Djamila Aouada
- Abstract summary: This paper introduces a novel uncertainty-aware end-to-end Knowledge Distillation (KD) framework focused on keypoint-based 6DoF pose estimation. We propose a distillation strategy that aligns the student and teacher predictions by adjusting the knowledge transfer based on the uncertainty associated with each teacher keypoint prediction. Experiments on the widely-used LINEMOD benchmark demonstrate the effectiveness of our method, achieving superior 6DoF object pose estimation with lightweight models.
- Score: 9.742944501209656
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compact and efficient 6DoF object pose estimation is crucial in applications such as robotics, augmented reality, and space autonomous navigation systems, where lightweight models are critical for real-time accurate performance. This paper introduces a novel uncertainty-aware end-to-end Knowledge Distillation (KD) framework focused on keypoint-based 6DoF pose estimation. Keypoints predicted by a large teacher model exhibit varying levels of uncertainty that can be exploited within the distillation process to enhance the accuracy of the student model while ensuring its compactness. To this end, we propose a distillation strategy that aligns the student and teacher predictions by adjusting the knowledge transfer based on the uncertainty associated with each teacher keypoint prediction. Additionally, the proposed KD leverages this uncertainty-aware alignment of keypoints to transfer the knowledge at key locations of their respective feature maps. Experiments on the widely-used LINEMOD benchmark demonstrate the effectiveness of our method, achieving superior 6DoF object pose estimation with lightweight models compared to state-of-the-art approaches. Further validation on the SPEED+ dataset for spacecraft pose estimation highlights the robustness of our approach under diverse 6DoF pose estimation scenarios.
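The abstract describes weighting the student-teacher keypoint alignment by the teacher's per-keypoint uncertainty. Below is a minimal PyTorch sketch of such an uncertainty-weighted keypoint distillation loss, assuming (B, K, 2) keypoint tensors, a per-keypoint teacher uncertainty, and an inverse-uncertainty weighting; the function name, shapes, and weighting choice are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of uncertainty-weighted keypoint distillation (not the paper's code).
import torch
import torch.nn.functional as F

def uncertainty_weighted_kd_loss(student_kpts, teacher_kpts, teacher_sigma):
    """
    student_kpts, teacher_kpts: (B, K, 2) predicted 2D keypoint locations.
    teacher_sigma: (B, K) per-keypoint uncertainty estimated by the teacher
                   (larger sigma = less reliable prediction).
    Keypoints the teacher is confident about contribute more to the transfer.
    """
    # Per-keypoint regression error between student and teacher predictions.
    err = F.smooth_l1_loss(student_kpts, teacher_kpts, reduction="none").sum(dim=-1)  # (B, K)
    # Down-weight uncertain teacher keypoints (one plausible weighting choice).
    weights = 1.0 / (1.0 + teacher_sigma)
    weights = weights / weights.sum(dim=1, keepdim=True)  # normalize per sample
    return (weights * err).sum(dim=1).mean()

# Example usage with random tensors.
B, K = 4, 8
loss = uncertainty_weighted_kd_loss(torch.rand(B, K, 2), torch.rand(B, K, 2), torch.rand(B, K))
```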
Related papers
- Distilling 3D distinctive local descriptors for 6D pose estimation [5.754251195342313]
We introduce a knowledge distillation framework that trains an efficient student model to regress local descriptors from a GeDi teacher.
We validate our approach on five BOP Benchmark datasets and demonstrate a significant reduction in inference time.
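The entry above trains a student to regress local descriptors from a frozen GeDi teacher. Below is a generic, hedged sketch of descriptor distillation with a lightweight point-wise student and a cosine-distance loss; `StudentDescriptorNet`, the layer sizes, and the loss choice are assumptions for illustration, not the paper's actual architecture.

```python
# Generic descriptor-distillation sketch; not the cited paper's implementation.
# The teacher can be any frozen network producing per-point descriptors.
import torch
import torch.nn as nn

class StudentDescriptorNet(nn.Module):  # hypothetical lightweight student
    def __init__(self, in_dim=3, desc_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, desc_dim),
        )

    def forward(self, points):          # points: (B, N, 3)
        return self.mlp(points)         # (B, N, desc_dim)

def descriptor_kd_loss(student_desc, teacher_desc):
    # Encourage each student descriptor to point in the same direction as the teacher's.
    return (1.0 - torch.cosine_similarity(student_desc, teacher_desc, dim=-1)).mean()
```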
arXiv Detail & Related papers (2025-03-19T11:04:37Z)
- Knowledge Distillation with Adapted Weight [6.0635849782457925]
Large models are hard to deploy in real-time systems due to computational and energy constraints.
Knowledge distillation through a Teacher-Student architecture offers a sustainable pathway to compress the knowledge of large models.
We propose the Knowledge Distillation with Adaptive Influence Weight (KD-AIF) framework, which leverages influence functions to assign weights to training data.
arXiv Detail & Related papers (2025-01-06T01:16:07Z)
- Calib3D: Calibrating Model Preferences for Reliable 3D Scene Understanding [55.32861154245772]
Calib3D is a pioneering effort to benchmark and scrutinize the reliability of 3D scene understanding models.
We comprehensively evaluate 28 state-of-the-art models across 10 diverse 3D datasets.
We introduce DeptS, a novel depth-aware scaling approach aimed at enhancing 3D model calibration.
arXiv Detail & Related papers (2024-03-25T17:59:59Z)
- Learn from the Past: A Proxy Guided Adversarial Defense Framework with Self Distillation Regularization [53.04697800214848]
Adversarial Training (AT) is pivotal in fortifying the robustness of deep learning models.
AT methods, which rely on direct iterative updates for the target model's defense, frequently encounter obstacles such as unstable training and catastrophic overfitting.
We present a general proxy-guided defense framework, LAST (Learn from the Past).
arXiv Detail & Related papers (2023-10-19T13:13:41Z)
- Manifold-Aware Self-Training for Unsupervised Domain Adaptation on Regressing 6D Object Pose [69.14556386954325]
The domain gap between synthetic and real data in visual regression is bridged in this paper via global feature alignment and local refinement.
Our method incorporates an explicit self-supervised manifold regularization, revealing consistent cumulative target dependency across domains.
Unified implicit neural functions are learned to estimate the relative direction and distance of targets to their nearest class bins, refining the target classification predictions.
arXiv Detail & Related papers (2023-05-18T08:42:41Z)
- EvCenterNet: Uncertainty Estimation for Object Detection using Evidential Learning [26.535329379980094]
EvCenterNet is a novel uncertainty-aware 2D object detection framework.
We employ evidential learning to estimate both classification and regression uncertainties.
We train our model on the KITTI dataset and evaluate it on challenging out-of-distribution datasets.
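The EvCenterNet entry above relies on evidential learning to estimate regression uncertainty. As background, deep evidential regression commonly has the network predict Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta) and trains them with the corresponding negative log-likelihood; the sketch below shows that generic loss and is not EvCenterNet's actual code.

```python
# Generic deep evidential regression loss (Amini et al.-style); illustrative only.
# The regression head is assumed to output positive nu, beta and alpha > 1
# (e.g. via a softplus activation) for each regression target.
import math
import torch

def evidential_nll(y, gamma, nu, alpha, beta):
    """Negative log-likelihood of a Normal-Inverse-Gamma evidential output."""
    omega = 2.0 * beta * (1.0 + nu)
    nll = (0.5 * torch.log(math.pi / nu)
           - alpha * torch.log(omega)
           + (alpha + 0.5) * torch.log((y - gamma) ** 2 * nu + omega)
           + torch.lgamma(alpha) - torch.lgamma(alpha + 0.5))
    return nll.mean()

# From the same parameters: aleatoric variance = beta / (alpha - 1),
# epistemic variance = beta / (nu * (alpha - 1)).
```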
arXiv Detail & Related papers (2023-03-06T11:07:11Z)
- PCKRF: Point Cloud Completion and Keypoint Refinement With Fusion Data for 6D Pose Estimation [33.226033672697795]
We propose Point Cloud Completion and Keypoint Refinement with Fusion Data (PCKRF), a new pose refinement pipeline for 6D pose estimation.
The PCKRF pipeline can be integrated with existing popular 6D pose estimation methods, such as the full flow bidirectional fusion network.
Our method exhibits superior stability compared to existing approaches when optimizing initial poses with relatively high precision.
arXiv Detail & Related papers (2022-10-07T10:13:30Z)
- Knowledge Distillation for 6D Pose Estimation by Keypoint Distribution Alignment [77.70208382044355]
We introduce the first knowledge distillation method for 6D pose estimation.
We observe that the compact student network struggles to predict precise 2D keypoint locations.
Our experiments on several benchmarks show that our distillation method yields state-of-the-art results.
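The entry above distills knowledge by aligning keypoint distributions rather than individual point estimates. One generic way to express such an alignment for heatmap-based keypoint predictors is a temperature-scaled KL divergence between teacher and student spatial distributions, sketched below; this is an illustrative stand-in under assumed shapes, not necessarily the cited paper's exact formulation.

```python
# Generic keypoint-distribution distillation via KL divergence on spatial heatmaps;
# an illustrative stand-in, not the cited paper's exact method.
import torch
import torch.nn.functional as F

def heatmap_kd_loss(student_logits, teacher_logits, tau=1.0):
    """
    student_logits, teacher_logits: (B, K, H, W) keypoint heatmap logits.
    Each keypoint channel is treated as a spatial probability distribution.
    """
    B, K, H, W = student_logits.shape
    s = F.log_softmax(student_logits.view(B, K, H * W) / tau, dim=-1)
    t = F.softmax(teacher_logits.view(B, K, H * W) / tau, dim=-1)
    # KL(teacher || student), averaged over batch and keypoints.
    return F.kl_div(s, t, reduction="batchmean") * (tau ** 2) / K
```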
arXiv Detail & Related papers (2022-05-30T10:17:17Z)
- Spatial Attention Improves Iterative 6D Object Pose Estimation [52.365075652976735]
We propose a new method for 6D pose estimation refinement from RGB images.
Our main insight is that after the initial pose estimate, it is important to pay attention to distinct spatial features of the object.
We experimentally show that this approach learns to attend to salient spatial features and learns to ignore occluded parts of the object, leading to better pose estimation across datasets.
arXiv Detail & Related papers (2021-01-05T17:18:52Z)
- Self6D: Self-Supervised Monocular 6D Object Pose Estimation [114.18496727590481]
We propose the idea of monocular 6D pose estimation by means of self-supervised learning.
We leverage recent advances in neural rendering to further self-supervise the model on unannotated real RGB-D data.
arXiv Detail & Related papers (2020-04-14T13:16:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.