RobustCalib: Robust Lidar-Camera Extrinsic Calibration with Consistency Learning
- URL: http://arxiv.org/abs/2312.01085v1
- Date: Sat, 2 Dec 2023 09:29:50 GMT
- Title: RobustCalib: Robust Lidar-Camera Extrinsic Calibration with Consistency Learning
- Authors: Shuang Xu, Sifan Zhou, Zhi Tian, Jizhou Ma, Qiong Nie, Xiangxiang Chu
- Abstract summary: Current methods for LiDAR-camera extrinsics estimation depend on offline targets and human efforts.
We propose a novel approach to address the extrinsic calibration problem in a robust, automatic, and single-shot manner.
We conduct comprehensive experiments on different datasets, and the results demonstrate that our method achieves accurate and robust performance.
- Score: 42.90987864456673
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Current traditional methods for LiDAR-camera extrinsics estimation depend on
offline targets and human efforts, while learning-based approaches resort to
iterative refinement for calibration results, posing constraints on their
generalization and application in on-board systems. In this paper, we propose a
novel approach to address the extrinsic calibration problem in a robust,
automatic, and single-shot manner. Instead of directly optimizing extrinsics,
we leverage consistency learning between LiDAR and camera to implement
implicit re-calibration. Specifically, we introduce an appearance-consistency loss
and a geometric-consistency loss to minimize the inconsistency between the
attributes (e.g., intensity and depth) of projected LiDAR points and the
predicted ones. This design not only enhances adaptability to various scenarios
but also enables a simple and efficient formulation during inference. We
conduct comprehensive experiments on different datasets, and the results
demonstrate that our method achieves accurate and robust performance. To
promote further research and development in this area, we will release our
model and code.
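The projection-plus-consistency idea in the abstract can be made concrete with a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration, not the authors' released code: LiDAR points are projected into the image with candidate extrinsics, and an appearance term (intensity) and a geometric term (depth) penalize the mismatch between the projected point attributes and dense maps a network is assumed to predict. Names such as `predicted_intensity` and `predicted_depth` are placeholders, and the paper's exact formulation may differ.

```python
# Minimal sketch of LiDAR-camera consistency losses (assumed formulation,
# not the authors' implementation).
import torch
import torch.nn.functional as F


def project_lidar_to_image(points, intrinsics, extrinsics):
    """Project LiDAR points (N, 3) into the image plane.

    extrinsics: 4x4 LiDAR-to-camera transform; intrinsics: 3x3 camera matrix.
    Returns pixel coordinates (N, 2) and per-point depth (N,).
    """
    ones = torch.ones(points.shape[0], 1, dtype=points.dtype)
    pts_h = torch.cat([points, ones], dim=1)          # homogeneous coordinates
    cam = (extrinsics @ pts_h.T).T[:, :3]             # points in the camera frame
    depth = cam[:, 2].clamp(min=1e-6)                 # avoid division by zero
    uv = (intrinsics @ cam.T).T[:, :2] / depth.unsqueeze(1)
    return uv, depth


def consistency_losses(points, intensity, intrinsics, extrinsics,
                       predicted_intensity, predicted_depth):
    """Appearance (intensity) and geometric (depth) consistency terms.

    predicted_intensity / predicted_depth: (1, 1, H, W) dense network outputs.
    Points behind the camera are not masked here, for brevity.
    """
    uv, depth = project_lidar_to_image(points, intrinsics, extrinsics)
    _, _, h, w = predicted_intensity.shape

    # Normalize pixel coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack([uv[:, 0] / (w - 1) * 2 - 1,
                        uv[:, 1] / (h - 1) * 2 - 1], dim=1).view(1, -1, 1, 2)

    pred_i = F.grid_sample(predicted_intensity, grid, align_corners=True).view(-1)
    pred_d = F.grid_sample(predicted_depth, grid, align_corners=True).view(-1)

    appearance_loss = torch.abs(pred_i - intensity).mean()   # intensity mismatch
    geometric_loss = torch.abs(pred_d - depth).mean()        # depth mismatch
    return appearance_loss + geometric_loss
```

Because the extrinsics enter only through a differentiable projection, such losses can in principle be backpropagated to the networks (or to an extrinsic correction) that enforce LiDAR-camera consistency at inference time.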
Related papers
- Optimal Transport-Based Displacement Interpolation with Data Augmentation for Reduced Order Modeling of Nonlinear Dynamical Systems [0.0]
We present a novel reduced-order model (ROM) that exploits optimal transport theory and displacement interpolation to enhance the representation of nonlinear dynamics in complex systems.
We show improved accuracy and efficiency in predicting complex system behaviors, indicating the potential of this approach for a wide range of applications in computational physics and engineering.
arXiv Detail & Related papers (2024-11-13T16:29:33Z) - Improving Predictor Reliability with Selective Recalibration [15.319277333431318]
Recalibration is one of the most effective ways to produce reliable confidence estimates with a pre-trained model.
We propose selective recalibration, where a selection model learns to reject some user-chosen proportion of the data.
Our results show that selective recalibration consistently leads to significantly lower calibration error than a wide range of selection and recalibration baselines.
arXiv Detail & Related papers (2024-10-07T18:17:31Z) - Improving Data-aware and Parameter-aware Robustness for Continual Learning [3.480626767752489]
This paper analyzes that this insufficiency arises from the ineffective handling of outliers.
We propose a Robust Continual Learning (RCL) method to address this issue.
The proposed method effectively maintains robustness and achieves new state-of-the-art (SOTA) results.
arXiv Detail & Related papers (2024-05-27T11:21:26Z) - On Task Performance and Model Calibration with Supervised and Self-Ensembled In-Context Learning [71.44986275228747]
In-context learning (ICL) has become an efficient approach propelled by the recent advancements in large language models (LLMs).
However, both paradigms are prone to suffer from the critical problem of overconfidence (i.e., miscalibration).
arXiv Detail & Related papers (2023-12-21T11:55:10Z) - P2O-Calib: Camera-LiDAR Calibration Using Point-Pair Spatial Occlusion Relationship [1.6921147361216515]
We propose a novel target-less calibration approach based on the 2D-3D edge point extraction using the occlusion relationship in 3D space.
Our method achieves low error and high robustness that can contribute to the practical applications relying on high-quality Camera-LiDAR calibration.
arXiv Detail & Related papers (2023-11-04T14:32:55Z) - Actively Learning Reinforcement Learning: A Stochastic Optimal Control Approach [3.453622106101339]
We propose a framework towards achieving two intertwined objectives: (i) equipping reinforcement learning with active exploration and deliberate information gathering, and (ii) overcoming the computational intractability of the optimal control law.
We approach both objectives by using reinforcement learning to compute the optimal control law.
Unlike fixed exploration and exploitation balance, caution and probing are employed automatically by the controller in real-time, even after the learning process is terminated.
arXiv Detail & Related papers (2023-09-18T18:05:35Z) - Exploiting Diffusion Prior for Real-World Image Super-Resolution [75.5898357277047]
We present a novel approach to leverage prior knowledge encapsulated in pre-trained text-to-image diffusion models for blind super-resolution.
By employing our time-aware encoder, we can achieve promising restoration results without altering the pre-trained synthesis model.
arXiv Detail & Related papers (2023-05-11T17:55:25Z) - Controlled Sparsity via Constrained Optimization or: How I Learned to Stop Tuning Penalties and Love Constraints [81.46143788046892]
We focus on the task of controlling the level of sparsity when performing sparse learning.
Existing methods based on sparsity-inducing penalties involve expensive trial-and-error tuning of the penalty factor.
We propose a constrained formulation where sparsification is guided by the training objective and the desired sparsity target in an end-to-end fashion.
arXiv Detail & Related papers (2022-08-08T21:24:20Z) - Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework by a simple yet effective technique, FeatDistLoss.
Experimental results show that our model defines a new state of the art for various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)