MOISST: Multimodal Optimization of Implicit Scene for SpatioTemporal
calibration
- URL: http://arxiv.org/abs/2303.03056v3
- Date: Fri, 21 Jul 2023 14:45:20 GMT
- Title: MOISST: Multimodal Optimization of Implicit Scene for SpatioTemporal
calibration
- Authors: Quentin Herau, Nathan Piasco, Moussab Bennehar, Luis Roldão, Dzmitry Tsishkou, Cyrille Migniot, Pascal Vasseur and Cédric Demonceaux
- Abstract summary: We take advantage of recent advances in computer graphics and implicit volumetric scene representation to tackle the problem of multi-sensor spatial and temporal calibration.
Our method enables accurate and robust calibration from data captured in uncontrolled and unstructured urban environments.
We demonstrate the accuracy and robustness of our method in urban scenes typically encountered in autonomous driving scenarios.
- Score: 4.405687114738899
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the recent advances in autonomous driving and the decreasing cost of
LiDARs, the use of multimodal sensor systems is on the rise. However, in order
to make use of the information provided by a variety of complementary sensors,
it is necessary to accurately calibrate them. We take advantage of recent
advances in computer graphics and implicit volumetric scene representation to
tackle the problem of multi-sensor spatial and temporal calibration. Thanks to
a new formulation of the Neural Radiance Field (NeRF) optimization, we are able
to jointly optimize calibration parameters along with scene representation
based on radiometric and geometric measurements. Our method enables accurate
and robust calibration from data captured in uncontrolled and unstructured
urban environments, making our solution more scalable than existing calibration
solutions. We demonstrate the accuracy and robustness of our method in urban
scenes typically encountered in autonomous driving scenarios.
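The core idea above, jointly optimizing calibration parameters together with a scene representation against raw measurements, can be illustrated with a toy numerical sketch. This is not the paper's implementation: the "scene" here is a single-parameter sinusoid rather than a NeRF, the time offset `dt` and all values are invented for illustration, and plain gradient descent stands in for the paper's optimization.

```python
import math

# Toy sketch: two sensors observe the same signal s(t) = A*sin(t),
# but sensor 2 samples with an unknown time offset dt.
# We jointly optimize the scene parameter A and the offset dt
# by gradient descent on the summed squared residuals.

A_true, dt_true = 2.0, 0.3
ts = [0.1 * i for i in range(100)]
m1 = [A_true * math.sin(t) for t in ts]            # sensor 1 measurements
m2 = [A_true * math.sin(t + dt_true) for t in ts]  # sensor 2 (offset) measurements

A, dt = 1.0, 0.0   # initial guesses for scene parameter and time offset
lr = 0.01
for _ in range(5000):
    gA = gdt = 0.0
    for t, y1, y2 in zip(ts, m1, m2):
        r1 = A * math.sin(t) - y1          # residual of sensor 1
        r2 = A * math.sin(t + dt) - y2     # residual of sensor 2
        gA += 2 * (r1 * math.sin(t) + r2 * math.sin(t + dt))
        gdt += 2 * r2 * A * math.cos(t + dt)
    A -= lr * gA / len(ts)
    dt -= lr * gdt / len(ts)

print(round(A, 3), round(dt, 3))  # both parameters recovered jointly
```

The point of the sketch is that neither parameter needs to be known in advance: because both enter the same differentiable reconstruction loss, a single optimizer recovers the scene and the spatio-temporal calibration together, which is the mechanism MOISST scales up with a NeRF scene model and radiometric/geometric losses.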
Related papers
- UniCal: Unified Neural Sensor Calibration [32.7372115947273]
Self-driving vehicles (SDVs) require accurate calibration of LiDARs and cameras to reliably fuse sensor data for autonomy.
Traditional calibration methods leverage fiducials captured in a controlled and structured scene and compute correspondences to optimize over.
We propose UniCal, a unified framework for effortlessly calibrating SDVs equipped with multiple LiDARs and cameras.
arXiv Detail & Related papers (2024-09-27T17:56:04Z)
- MM3DGS SLAM: Multi-modal 3D Gaussian Splatting for SLAM Using Vision, Depth, and Inertial Measurements [59.70107451308687]
We show for the first time that using 3D Gaussians for map representation with unposed camera images and inertial measurements can enable accurate SLAM.
Our method, MM3DGS, addresses the limitations of prior rendering by enabling faster scale awareness, and improved trajectory tracking.
We also release a multi-modal dataset, UT-MM, collected from a mobile robot equipped with a camera and an inertial measurement unit.
arXiv Detail & Related papers (2024-04-01T04:57:41Z)
- Joint Spatial-Temporal Calibration for Camera and Global Pose Sensor [0.4143603294943439]
In robotics, motion capture systems have been widely used to measure the accuracy of localization algorithms.
Such evaluation requires accurate and reliable spatial-temporal calibration parameters between the camera and the global pose sensor.
In this study, we provide two novel solutions to estimate these calibration parameters.
arXiv Detail & Related papers (2024-03-01T20:56:14Z)
- LiveHPS: LiDAR-based Scene-level Human Pose and Shape Estimation in Free Environment [59.320414108383055]
We present LiveHPS, a novel single-LiDAR-based approach for scene-level human pose and shape estimation.
We also present FreeMotion, a large human motion dataset collected in various scenarios with diverse human poses.
arXiv Detail & Related papers (2024-02-27T03:08:44Z)
- SOAC: Spatio-Temporal Overlap-Aware Multi-Sensor Calibration using Neural Radiance Fields [10.958143040692141]
In rapidly-evolving domains such as autonomous driving, the use of multiple sensors with different modalities is crucial to ensure operational precision and stability.
To correctly exploit the information provided by each sensor in a single common frame, these sensors must be accurately calibrated.
We leverage the ability of Neural Radiance Fields to represent different modalities in a common representation.
arXiv Detail & Related papers (2023-11-27T13:25:47Z)
- Automated Automotive Radar Calibration With Intelligent Vehicles [73.15674960230625]
We present an approach for automated and geo-referenced calibration of automotive radar sensors.
Our method does not require external modifications of a vehicle and instead uses the location data obtained from automated vehicles.
Our evaluation on data from a real testing site shows that our method can correctly calibrate infrastructure sensors in an automated manner.
arXiv Detail & Related papers (2023-06-23T07:01:10Z)
- Online Camera-to-ground Calibration for Autonomous Driving [26.357898919134833]
We propose an online monocular camera-to-ground calibration solution that does not utilize any specific targets while driving.
We provide metrics to quantify calibration performance and stopping criteria to report/broadcast our satisfying calibration results.
arXiv Detail & Related papers (2023-03-30T04:01:48Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- CRLF: Automatic Calibration and Refinement based on Line Feature for LiDAR and Camera in Road Scenes [16.201111055979453]
We propose a novel method to calibrate the extrinsic parameter for LiDAR and camera in road scenes.
Our method introduces line features from static straight-line-shaped objects such as road lanes and poles in both image and point cloud.
We conduct extensive experiments on KITTI and our in-house dataset; quantitative and qualitative results demonstrate the robustness and accuracy of our method.
arXiv Detail & Related papers (2021-03-08T06:02:44Z)
- Parameterized Temperature Scaling for Boosting the Expressive Power in Post-Hoc Uncertainty Calibration [57.568461777747515]
We introduce a novel calibration method, Parameterized Temperature Scaling (PTS).
We demonstrate that the performance of accuracy-preserving state-of-the-art post-hoc calibrators is limited by their intrinsic expressive power.
We show with extensive experiments that our novel accuracy-preserving approach consistently outperforms existing algorithms across a large number of model architectures, datasets and metrics.
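The baseline that PTS generalizes, plain temperature scaling, can be sketched in a few lines. This is an illustrative toy, not the paper's method: the logits and labels below are invented, a grid search stands in for proper optimization of the single temperature T, and PTS itself goes further by predicting T per input with a small network instead of fitting one global scalar.

```python
import math

def softmax(z, T):
    """Temperature-scaled softmax: scaling logits by 1/T preserves the
    argmax (accuracy-preserving) while softening or sharpening confidence."""
    m = max(x / T for x in z)
    e = [math.exp(x / T - m) for x in z]
    s = sum(e)
    return [v / s for v in e]

# Invented toy validation set: (logits, true class). The model is
# overconfident: large logit gaps even on its mistakes.
data = [([4.0, 0.0, 0.0], 0), ([3.5, 0.5, 0.0], 1), ([4.2, 0.1, 0.0], 0),
        ([3.8, 0.2, 0.0], 2), ([4.1, 0.0, 0.3], 0)]

def nll(T):
    """Mean negative log-likelihood of the true labels at temperature T."""
    return -sum(math.log(softmax(z, T)[y]) for z, y in data) / len(data)

# Fit the single scalar T on held-out data (grid search for clarity).
T_best = min((0.1 * k for k in range(1, 101)), key=nll)
print(T_best, nll(T_best), nll(1.0))
```

For an overconfident model like this toy one, the fitted temperature comes out above 1, flattening the predicted distributions without changing any predicted class. The expressive-power limitation the abstract refers to is exactly that a single scalar T applies the same correction everywhere, which PTS lifts by making T input-dependent.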
arXiv Detail & Related papers (2021-02-24T10:18:30Z)
- Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.