Calib-Anything: Zero-training LiDAR-Camera Extrinsic Calibration Method
Using Segment Anything
- URL: http://arxiv.org/abs/2306.02656v1
- Date: Mon, 5 Jun 2023 07:42:53 GMT
- Title: Calib-Anything: Zero-training LiDAR-Camera Extrinsic Calibration Method
Using Segment Anything
- Authors: Zhaotong Luo, Guohang Yan and Yikang Li
- Abstract summary: We propose a novel LiDAR-camera calibration method, which requires zero extra training and adapts to common scenes.
The consistency includes three properties of the point cloud: the intensity, normal vector and categories.
Experiments on different datasets have demonstrated the generality and comparable accuracy of our method.
- Score: 2.8861987395149247
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research on extrinsic calibration between Light Detection and
Ranging (LiDAR) sensors and cameras is moving toward more accurate, automatic,
and generic methods. Since deep learning has been employed in calibration,
restrictions on the scene have been greatly reduced. However, data-driven
methods have the drawback of low transferability: they cannot adapt to dataset
variations without additional training. With the advent of foundation models,
this problem can be significantly mitigated. Using the Segment Anything Model
(SAM), we propose a novel LiDAR-camera calibration method that requires zero
extra training and adapts to common scenes. Starting from an initial guess, we
optimize the extrinsic parameters by maximizing the consistency of the points
projected inside each image mask. The consistency covers three properties of
the point cloud: intensity, normal vectors, and categories derived from
segmentation methods. Experiments on different datasets demonstrate the
generality and comparable accuracy of our method. The code is available at
https://github.com/OpenCalib/CalibAnything.
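The optimization objective described in the abstract can be illustrated with a minimal Python sketch. This is not the authors' implementation (which also scores normal-vector and category consistency and performs a more careful search around the initial guess); it only shows the intensity term. The inputs `points` (N x 3 LiDAR coordinates), `intensity` (length N), the camera intrinsic matrix `K`, a candidate extrinsic `T_cam_lidar` (4 x 4), and the list of boolean SAM `masks` are assumed to be given.

```python
import numpy as np


def project_points(points_lidar, T_cam_lidar, K, img_shape):
    """Project LiDAR points into the image plane; return pixel coords and an in-bounds mask."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]
    in_front = pts_cam[:, 2] > 0.1                    # keep points in front of the camera
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-6)     # perspective division
    h, w = img_shape
    valid = in_front & (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv, valid


def consistency_score(points, intensity, masks, T_cam_lidar, K, img_shape):
    """Score a candidate extrinsic: higher when points landing in each mask have similar intensity."""
    uv, valid = project_points(points, T_cam_lidar, K, img_shape)
    idx = np.where(valid)[0]
    px = uv[idx].astype(np.int32)                     # integer pixel coords of in-bounds points
    score = 0.0
    for mask in masks:                                # one HxW boolean array per SAM segment
        hits = idx[mask[px[:, 1], px[:, 0]]]          # points projected inside this mask
        if len(hits) < 10:                            # skip masks with too few projected points
            continue
        # low intra-mask intensity variance -> more consistent -> higher score
        score += len(hits) / (1.0 + intensity[hits].var())
    return score
```

In practice, such a score would be evaluated over small rotation and translation perturbations around the initial guess and the most consistent candidate kept; the released code at the repository above is the authoritative implementation.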
Related papers
- AnyCalib: On-Manifold Learning for Model-Agnostic Single-View Camera Calibration [9.192660643226372]
We present AnyCalib, a method for calibrating the intrinsic parameters of a camera from a single in-the-wild image.
We show, for the first time, that this intermediate representation allows for a closed-form recovery of the intrinsics for a wide range of camera models.
arXiv Detail & Related papers (2025-03-16T23:59:21Z) - Shelf-Supervised Cross-Modal Pre-Training for 3D Object Detection [52.66283064389691]
State-of-the-art 3D object detectors are often trained on massive labeled datasets.
Recent works demonstrate that self-supervised pre-training with unlabeled data can improve detection accuracy with limited labels.
We propose a shelf-supervised approach for generating zero-shot 3D bounding boxes from paired RGB and LiDAR data.
arXiv Detail & Related papers (2024-06-14T15:21:57Z) - Camera Calibration through Geometric Constraints from Rotation and
Projection Matrices [4.100632594106989]
We propose a novel constraints-based loss for measuring the intrinsic and extrinsic parameters of a camera.
Our methodology is a hybrid approach that employs the learning power of a neural network to estimate the desired parameters.
Our proposed approach demonstrates improvements across all parameters when compared to the state-of-the-art (SOTA) benchmarks.
arXiv Detail & Related papers (2024-02-13T13:07:34Z) - CalibFormer: A Transformer-based Automatic LiDAR-Camera Calibration Network [11.602943913324653]
CalibFormer is an end-to-end network for automatic LiDAR-camera calibration.
We aggregate multiple layers of camera and LiDAR image features to achieve high-resolution representations.
Our method achieved a mean translation error of $0.8751\,\mathrm{cm}$ and a mean rotation error of $0.0562^{\circ}$ on the KITTI dataset (a generic sketch of how such errors are computed follows the related-papers list).
arXiv Detail & Related papers (2023-11-26T08:59:30Z) - Bridging Precision and Confidence: A Train-Time Loss for Calibrating
Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z) - Class-wise and reduced calibration methods [0.0]
We show how a reduced calibration method transforms the original problem into a simpler one.
Second, we propose class-wise calibration methods, building on a phenomenon called neural collapse.
Applying the two methods together results in class-wise reduced calibration algorithms, which are powerful tools for reducing the prediction and per-class calibration errors.
arXiv Detail & Related papers (2022-10-07T17:13:17Z) - Self-Calibrating Neural Radiance Fields [68.64327335620708]
We jointly learn the geometry of the scene and the accurate camera parameters without any calibration objects.
Our camera model consists of a pinhole model, a fourth order radial distortion, and a generic noise model that can learn arbitrary non-linear camera distortions.
arXiv Detail & Related papers (2021-08-31T13:34:28Z) - Black-Box Diagnosis and Calibration on GAN Intra-Mode Collapse: A Pilot
Study [116.05514467222544]
Generative adversarial networks (GANs) nowadays are capable of producing images of incredible realism.
One concern raised is whether the state-of-the-art GAN's learned distribution still suffers from mode collapse.
This paper explores how to diagnose and calibrate GAN intra-mode collapse in a novel black-box setting.
arXiv Detail & Related papers (2021-07-23T06:03:55Z) - Joint Noise-Tolerant Learning and Meta Camera Shift Adaptation for
Unsupervised Person Re-Identification [60.36551512902312]
Unsupervised person re-identification (re-ID) aims to learn discriminative models with unlabeled data.
One popular method is to obtain pseudo-labels by clustering and use them to optimize the model.
In this paper, we propose a unified framework to solve both problems.
arXiv Detail & Related papers (2021-03-08T09:13:06Z) - Uncertainty Quantification and Deep Ensembles [79.4957965474334]
We show that deep ensembles do not necessarily lead to improved calibration properties.
We show that standard ensembling methods, when used in conjunction with modern techniques such as mixup regularization, can lead to less calibrated models.
This text examines the interplay between three of the simplest and most commonly used approaches to leverage deep learning when data is scarce.
arXiv Detail & Related papers (2020-07-17T07:32:24Z)
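For reference, the translation and rotation errors quoted in entries such as CalibFormer above are conventionally computed as sketched below. This is a generic sketch of the usual convention, not code from any of the papers listed: translation error as the Euclidean distance between the estimated and ground-truth translation vectors, rotation error as the geodesic angle of the relative rotation.

```python
import numpy as np


def extrinsic_errors(T_est, T_gt):
    """Compare two 4x4 extrinsic matrices: returns (translation error in the
    translation's units, rotation error in degrees)."""
    t_err = np.linalg.norm(T_est[:3, 3] - T_gt[:3, 3])
    R_rel = T_est[:3, :3].T @ T_gt[:3, :3]            # relative rotation
    # clamp the trace-derived cosine for numerical safety before arccos
    cos_angle = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    r_err = np.degrees(np.arccos(cos_angle))
    return t_err, r_err
```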