Extrinsic Camera Calibration with Semantic Segmentation
- URL: http://arxiv.org/abs/2208.03949v1
- Date: Mon, 8 Aug 2022 07:25:03 GMT
- Title: Extrinsic Camera Calibration with Semantic Segmentation
- Authors: Alexander Tsaregorodtsev, Johannes Müller, Jan Strohbeck, Martin Herrmann, Michael Buchholz, Vasileios Belagiannis
- Abstract summary: We present an extrinsic camera calibration approach that automates parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
- Score: 60.330549990863624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monocular camera sensors are vital to intelligent vehicle operation and
automated driving assistance and are also heavily employed in traffic control
infrastructure. Calibrating the monocular camera, though, is time-consuming and
often requires significant manual intervention. In this work, we present an
extrinsic camera calibration approach that automates the parameter estimation
by utilizing semantic segmentation information from images and point clouds.
Our approach relies on a coarse initial measurement of the camera pose and
builds on lidar sensors mounted on a vehicle with high-precision localization
to capture a point cloud of the camera environment. Afterward, a mapping
between the camera and world coordinate spaces is obtained by performing a
lidar-to-camera registration of the semantically segmented sensor data. We
evaluate our method on simulated and real-world data to demonstrate low error
measurements in the calibration results. Our approach is suitable for
infrastructure sensors as well as vehicle sensors, while it does not require
motion of the camera platform.
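As a rough sketch of how such a pipeline can work (not the authors' implementation; the cost function, optimizer choice, and all names below are assumptions), one can project a semantically labeled lidar point cloud into the segmented camera image and refine the coarse initial pose by minimizing label disagreement:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def semantic_disagreement(pose, pts_xyz, pts_label, seg_mask, K):
    """Fraction of projected lidar points whose semantic class disagrees
    with the image segmentation at the pixel they land on."""
    R = Rotation.from_rotvec(pose[:3]).as_matrix()
    cam = pts_xyz @ R.T + pose[3:]              # world -> camera frame
    front = cam[:, 2] > 0.1                     # keep points in front of the camera
    uv = cam[front] @ K.T
    uv = uv[:, :2] / uv[:, 2:3]                 # perspective projection
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = seg_mask.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    if not ok.any():
        return 1.0
    return 1.0 - np.mean(seg_mask[v[ok], u[ok]] == pts_label[front][ok])

def refine_extrinsics(pose_init, pts_xyz, pts_label, seg_mask, K):
    """Refine a coarse 6-DoF pose given as [rotation vector, translation]."""
    res = minimize(semantic_disagreement, pose_init,
                   args=(pts_xyz, pts_label, seg_mask, K),
                   method="Nelder-Mead")        # gradient-free: the cost is non-smooth
    return res.x
```

A practical system would likely use a smoother cost (e.g. distances to class boundaries) and a coarse-to-fine search, since hard label counting has many shallow local minima.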
Related papers
- Homography Guided Temporal Fusion for Road Line and Marking Segmentation [73.47092021519245]
Road lines and markings are frequently occluded by moving vehicles, shadows, and glare.
We propose a Homography Guided Fusion (HomoFusion) module to exploit temporally-adjacent video frames for complementary cues.
We show that exploiting available camera intrinsic data and a ground-plane assumption for cross-frame correspondence leads to a lightweight network with significantly improved speed and accuracy.
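For context, the cross-frame correspondence on the road surface is a planar homography determined by the intrinsics and the relative camera pose; a minimal sketch of that relation (the plane normal and camera height below are assumed example values, and this is the geometric prior, not the HomoFusion network itself):

```python
import numpy as np

def ground_plane_homography(K, R, t, n=np.array([0.0, -1.0, 0.0]), d=1.5):
    """Homography mapping ground-plane pixels from frame 1 into frame 2.

    (R, t) is the relative pose with X2 = R @ X1 + t; the plane satisfies
    n @ X1 = d in frame 1's camera coordinates (here: flat road with the
    camera 1.5 m above it; example values only)."""
    H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)
    return H / H[2, 2]                          # normalize so H[2, 2] == 1
```

Warping a neighboring frame with `cv2.warpPerspective(prev, H, (w, h))` then aligns road pixels across time so occluded line segments can be filled in.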
arXiv Detail & Related papers (2024-04-11T10:26:40Z)
- Toward Planet-Wide Traffic Camera Calibration [20.69039245275091]
We present a framework that utilizes street-level imagery to reconstruct a metric 3D model, facilitating precise calibration of in-the-wild traffic cameras.
Notably, our framework achieves 3D scene reconstruction and accurate localization of over 100 global traffic cameras.
For evaluation, we introduce a dataset of 20 fully calibrated traffic cameras, demonstrating our method's significant enhancements over existing automatic calibration techniques.
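The localization step in such a framework typically reduces to a perspective-n-point problem; a hedged sketch, assuming 2D-3D matches between a traffic-camera image and the reconstructed metric model are already available:

```python
import cv2
import numpy as np

def localize_camera(pts_3d, pts_2d, K):
    """Camera pose from 2D-3D matches against a metric 3D model
    (generic PnP + RANSAC, not the paper's full pipeline)."""
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        pts_3d.astype(np.float32), pts_2d.astype(np.float32),
        K.astype(np.float64), None, reprojectionError=3.0)
    if not ok:
        raise RuntimeError("PnP failed: too few consistent matches")
    R, _ = cv2.Rodrigues(rvec)                  # rotation vector -> matrix
    center = (-R.T @ tvec).ravel()              # camera position in model frame
    return R, tvec, center
```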
arXiv Detail & Related papers (2023-11-06T23:01:02Z)
- Homography Estimation in Complex Topological Scenes [6.023710971800605]
Surveillance videos and images are used for a broad set of applications, ranging from traffic analysis to crime detection.
Extrinsic camera calibration data is important for most analysis applications.
We present an automated camera-calibration process leveraging a dictionary-based approach that requires no prior knowledge of any camera settings.
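Whatever produces the matches, the final estimate is a standard robust homography fit; a minimal sketch, assuming point correspondences between the camera view and a schematic scene template are given (this generic DLT + RANSAC step stands in for, and is not, the dictionary-based matcher):

```python
import cv2
import numpy as np

def fit_homography(src_pts, dst_pts):
    """Robust homography from (N, 2) point matches."""
    H, inlier_mask = cv2.findHomography(
        src_pts.astype(np.float32), dst_pts.astype(np.float32),
        cv2.RANSAC, ransacReprojThreshold=3.0)
    return H, inlier_mask
```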
arXiv Detail & Related papers (2023-08-02T11:31:43Z)
- End-to-End Lidar-Camera Self-Calibration for Autonomous Vehicles [0.0]
CaLiCa is an end-to-end self-calibration network for lidar and pinhole cameras.
We achieve 0.154 deg rotation and 0.059 m translation accuracy, with a reprojection error of 0.028 pixels, in a single inference pass.
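For reference, a reprojection error of this kind can be read as the mean pixel distance between points projected with the estimated versus the ground-truth extrinsics; a small sketch of such a metric (the 4x4 extrinsic matrices and inputs are assumed, and this may differ from the paper's exact definition):

```python
import numpy as np

def mean_reprojection_error(pts_xyz, K, T_gt, T_est):
    """Mean pixel distance between projections of the same lidar points
    under ground-truth vs. estimated 4x4 extrinsics."""
    def project(T):
        cam = pts_xyz @ T[:3, :3].T + T[:3, 3]  # apply extrinsic transform
        uv = cam @ K.T
        return uv[:, :2] / uv[:, 2:3]           # perspective division
    return float(np.linalg.norm(project(T_gt) - project(T_est), axis=1).mean())
```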
arXiv Detail & Related papers (2023-04-24T19:44:23Z)
- SceneCalib: Automatic Targetless Calibration of Cameras and Lidars in Autonomous Driving [10.517099201352414]
SceneCalib is a novel method for simultaneous self-calibration of extrinsic and intrinsic parameters in a system containing multiple cameras and a lidar sensor.
We resolve these issues with a fully automatic method that requires no explicit correspondences between camera images and lidar point clouds.
arXiv Detail & Related papers (2023-04-11T23:02:16Z)
- Lasers to Events: Automatic Extrinsic Calibration of Lidars and Event Cameras [67.84498757689776]
This paper presents the first direct calibration method between event cameras and lidars.
It removes dependencies on frame-based camera intermediaries and on highly accurate hand measurements.
arXiv Detail & Related papers (2022-07-03T11:05:45Z)
- A Quality Index Metric and Method for Online Self-Assessment of Autonomous Vehicles Sensory Perception [164.93739293097605]
We propose a novel evaluation metric, named the detection quality index (DQI), which assesses the performance of camera-based object detection algorithms.
We have developed a superpixel-based attention network (SPA-NET) that utilizes raw image pixels and superpixels as input to predict the proposed DQI evaluation metric.
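The superpixel input can be produced with any standard over-segmentation; a minimal sketch using SLIC (the segment count and compactness are illustrative values, and the SPA-NET architecture itself is not reproduced here):

```python
import numpy as np
from skimage.segmentation import slic

def spa_net_inputs(img):
    """Raw pixels plus a superpixel index map, the two inputs the
    summary describes (SLIC parameters are example values)."""
    segments = slic(img, n_segments=200, compactness=10, start_label=0)
    return img, segments.astype(np.int32)
```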
arXiv Detail & Related papers (2022-03-04T22:16:50Z)
- Infrastructure-based Multi-Camera Calibration using Radial Projections [117.22654577367246]
Pattern-based calibration techniques can be used to calibrate the intrinsics of the cameras individually.
Infrastructure-based calibration techniques can estimate the extrinsics using 3D maps pre-built via SLAM or Structure-from-Motion.
We propose to fully calibrate a multi-camera system from scratch using an infrastructure-based approach.
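The per-camera intrinsic step is the familiar pattern-based routine; a minimal sketch with OpenCV, where the board size and square edge length are example values:

```python
import cv2
import numpy as np

def calibrate_intrinsics(images, board=(9, 6), square_m=0.025):
    """Intrinsics and distortion from checkerboard views (standard
    pattern-based calibration; board geometry is an example)."""
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_m
    obj_pts, img_pts, size = [], [], None
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        size = gray.shape[::-1]                 # (width, height)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist, rms
```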
arXiv Detail & Related papers (2020-07-30T09:21:04Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline at identifying whether a recalibration of the camera's intrinsic parameters is required.
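A semi-synthetic miscalibrated sample can be produced by rewarping an image as if its pinhole intrinsics had drifted; a hedged sketch (lens distortion is ignored, the perturbation values are examples, and this is not necessarily the authors' exact pipeline):

```python
import cv2
import numpy as np

def simulate_miscalibration(img, K, focal_scale=1.05, pp_shift=(4.0, -3.0)):
    """Warp an image as if focal length and principal point had drifted.
    For a fixed camera center the pixel map is H = K_off @ inv(K)."""
    K_off = K.copy()
    K_off[0, 0] *= focal_scale                  # focal length drift
    K_off[1, 1] *= focal_scale
    K_off[0, 2] += pp_shift[0]                  # principal point shift
    K_off[1, 2] += pp_shift[1]
    H = K_off @ np.linalg.inv(K)
    h, w = img.shape[:2]
    return cv2.warpPerspective(img, H, (w, h)), K_off
```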
arXiv Detail & Related papers (2020-05-24T10:32:49Z)