Toward Planet-Wide Traffic Camera Calibration
- URL: http://arxiv.org/abs/2311.04243v1
- Date: Mon, 6 Nov 2023 23:01:02 GMT
- Title: Toward Planet-Wide Traffic Camera Calibration
- Authors: Khiem Vuong, Robert Tamburo, Srinivasa G. Narasimhan
- Abstract summary: We present a framework that utilizes street-level imagery to reconstruct a metric 3D model, facilitating precise calibration of in-the-wild traffic cameras.
Notably, our framework achieves 3D scene reconstruction and accurate localization of over 100 global traffic cameras.
For evaluation, we introduce a dataset of 20 fully calibrated traffic cameras, demonstrating our method's significant enhancements over existing automatic calibration techniques.
- Score: 20.69039245275091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the widespread deployment of outdoor cameras, their potential for
automated analysis remains largely untapped due, in part, to calibration
challenges. The absence of precise camera calibration data, including intrinsic
and extrinsic parameters, hinders accurate real-world distance measurements
from captured videos. To address this, we present a scalable framework that
utilizes street-level imagery to reconstruct a metric 3D model, facilitating
precise calibration of in-the-wild traffic cameras. Notably, our framework
achieves 3D scene reconstruction and accurate localization of over 100 global
traffic cameras and is scalable to any camera with sufficient street-level
imagery. For evaluation, we introduce a dataset of 20 fully calibrated traffic
cameras, demonstrating our method's significant enhancements over existing
automatic calibration techniques. Furthermore, we highlight our approach's
utility in traffic analysis by extracting insights via 3D vehicle
reconstruction and speed measurement, thereby opening up the potential of using
outdoor cameras for automated analysis.
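No reference implementation accompanies this listing, but the core calibration step the abstract describes, recovering a traffic camera's pose from correspondences between image pixels and points in a metric 3D reconstruction, can be sketched with a standard perspective-n-point (PnP) solve. The snippet below is a minimal, hypothetical illustration using OpenCV and NumPy: the correspondences, intrinsics, and the closing speed computation are fabricated for demonstration and are not the authors' implementation.

```python
"""Minimal sketch (not the paper's code): localize a camera against a metric
3D model via PnP, then use the metric scale for a speed estimate."""
import numpy as np
import cv2

# Hypothetical 3D points (meters) from a street-level metric reconstruction,
# assumed to be matched to pixels in the traffic-camera image.
pts_3d = np.array([[ 2.0, 0.0, 20.0],
                   [-3.0, 0.5, 25.0],
                   [ 4.0, 1.0, 30.0],
                   [-1.0, 2.0, 18.0],
                   [ 0.5, 1.5, 22.0],
                   [ 3.0, 0.2, 27.0]])
K = np.array([[1200.0, 0.0, 960.0],       # assumed/estimated intrinsics
              [0.0, 1200.0, 540.0],
              [0.0,    0.0,   1.0]])
dist = np.zeros(5)                         # assume negligible lens distortion

# Fabricate pixel observations by projecting with a made-up "true" pose,
# so the sketch is self-contained and runnable.
rvec_gt, tvec_gt = np.array([0.1, -0.05, 0.02]), np.array([0.5, -2.0, 1.0])
pts_2d, _ = cv2.projectPoints(pts_3d, rvec_gt, tvec_gt, K, dist)
pts_2d = pts_2d.reshape(-1, 2)

# Extrinsic calibration: robust PnP from 2D-3D correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts_3d, pts_2d, K, dist)
R, _ = cv2.Rodrigues(rvec)
cam_center = (-R.T @ tvec).ravel()         # camera position in world frame (m)
print("recovered camera center (m):", np.round(cam_center, 3))

# Because the model is metric, tracked vehicle positions on the road can be
# turned into speeds; the two positions and timestamps below are fabricated.
p_t0, p_t1 = np.array([5.0, 0.0, 40.0]), np.array([5.2, 0.0, 52.0])
dt = 0.5                                   # seconds between observations
speed_kmh = np.linalg.norm(p_t1 - p_t0) / dt * 3.6
print(f"vehicle speed: {speed_kmh:.1f} km/h")
```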
Related papers
- Multi-Modal Dataset Acquisition for Photometrically Challenging Object [56.30027922063559]
This paper addresses the limitations of current datasets for 3D vision tasks in terms of accuracy, size, realism, and suitable imaging modalities for photometrically challenging objects.
We propose a novel annotation and acquisition pipeline that enhances existing 3D perception and 6D object pose datasets.
arXiv Detail & Related papers (2023-08-21T10:38:32Z)
- Homography Estimation in Complex Topological Scenes [6.023710971800605]
Surveillance videos and images are used for a broad set of applications, ranging from traffic analysis to crime detection.
Extrinsic camera calibration data is important for most analysis applications.
We present an automated camera-calibration process leveraging a dictionary-based approach that does not require prior knowledge of any camera settings.
arXiv Detail & Related papers (2023-08-02T11:31:43Z)
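The dictionary-based matching is not detailed in this summary, but the underlying operation, estimating a homography between the surveillance image and a reference ground plane, is standard. Below is a minimal OpenCV sketch; the pixel and ground-plane coordinates are invented for illustration and do not come from the paper.

```python
"""Minimal sketch of image-to-ground-plane homography estimation (OpenCV).
Correspondences are fabricated; the paper's dictionary-based approach is
not reproduced here."""
import numpy as np
import cv2

# Hypothetical correspondences: pixels of landmarks in the surveillance
# image and their metric positions on a reference ground plane.
img_pts = np.array([[320.0, 410.0], [610.0, 395.0], [640.0, 700.0], [280.0, 720.0]])
plane_pts = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 15.0], [0.0, 15.0]])

H, _ = cv2.findHomography(img_pts, plane_pts)

# Map an arbitrary pixel onto the ground plane (meters).
pixel = np.array([[[450.0, 560.0]]])          # shape (1, 1, 2) as required
ground_xy = cv2.perspectiveTransform(pixel, H)
print("ground-plane position (m):", ground_xy.ravel())
```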
- Automated Automotive Radar Calibration With Intelligent Vehicles [73.15674960230625]
We present an approach for automated and geo-referenced calibration of automotive radar sensors.
Our method does not require external modifications of a vehicle and instead uses the location data obtained from automated vehicles.
Our evaluation on data from a real testing site shows that our method can correctly calibrate infrastructure sensors in an automated manner.
arXiv Detail & Related papers (2023-06-23T07:01:10Z)
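One way to picture this kind of geo-referenced calibration (a simplifying assumption, not the paper's pipeline) is to rigidly align the vehicle track observed by the infrastructure sensor with the same track reported in world coordinates by the vehicle's own localization. The sketch below uses a plain Kabsch fit on fabricated data; time synchronization, filtering, and radar-specific measurement models are omitted.

```python
"""Minimal sketch: align sensor-frame detections of a vehicle with the
vehicle's geo-referenced track via a rigid (Kabsch) fit. Fabricated data;
not the paper's method."""
import numpy as np

def rigid_fit(src, dst):
    """Return R, t minimizing sum ||R @ src_i + t - dst_i||^2."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dst.mean(0) - R @ src.mean(0)

# Fabricated vehicle track in world coordinates (roughly planar road).
rng = np.random.default_rng(0)
xy = rng.uniform(-40.0, 40.0, size=(60, 2))
track_geo = np.column_stack([xy, rng.normal(0.0, 0.1, size=60)])

# Simulate what an uncalibrated sensor at pose (R_true, t_true) would see.
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([100.0, -50.0, 2.0])
track_sensor = (track_geo - t_true) @ R_true   # world frame -> sensor frame

R_est, t_est = rigid_fit(track_sensor, track_geo)
print("estimated sensor position (world frame):", np.round(t_est, 2))
```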
- Continuous Online Extrinsic Calibration of Fisheye Camera and LiDAR [7.906477322731106]
An accurate extrinsic calibration is required to fuse the camera and LiDAR data into a common spatial reference frame required by high-level perception functions.
There is a need for continuous online extrinsic calibration algorithms which can automatically update the value of the camera-LiDAR calibration during the life of the vehicle using only sensor data.
We propose using mutual information between the camera image's depth estimate, provided by commonly available monocular depth estimation networks, and the LiDAR point cloud's geometric distance as an optimization metric for extrinsic calibration.
arXiv Detail & Related papers (2023-06-22T23:16:31Z)
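A minimal sketch of that objective is given below: project LiDAR points into the image with a candidate extrinsic, sample the monocular depth estimate at those pixels, and score the agreement with mutual information. The projection model, histogram binning, and variable names are simplifying assumptions rather than the paper's implementation.

```python
"""Minimal sketch of a mutual-information objective between a monocular
depth map and projected LiDAR ranges (not the paper's implementation)."""
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information of two equally long 1-D samples via a joint histogram."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def calibration_score(T, lidar_pts, depth_map, K):
    """Project LiDAR points with candidate extrinsic T (4x4) and score how
    well their ranges agree with the monocular depth map via MI."""
    pts_cam = (T[:3, :3] @ lidar_pts.T + T[:3, 3:4]).T
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]              # keep points in front
    uv = (K @ pts_cam.T).T
    uv = (uv[:, :2] / uv[:, 2:3]).astype(int)
    h, w = depth_map.shape
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    mono_depth = depth_map[uv[ok, 1], uv[ok, 0]]
    lidar_range = np.linalg.norm(pts_cam[ok], axis=1)
    return mutual_information(mono_depth, lidar_range)
```

A calibration loop would then search over candidate extrinsics, for example with a derivative-free optimizer, and keep the transform that maximizes this score.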
- Automated Static Camera Calibration with Intelligent Vehicles [58.908194559319405]
We present a robust method for automated, geo-referenced camera calibration.
Our method requires a calibration vehicle equipped with a combined filtering/RTK receiver and an inertial measurement unit (IMU) for self-localization.
Our method does not require any human interaction; it relies only on the information recorded by both the infrastructure and the vehicle.
arXiv Detail & Related papers (2023-04-21T08:50:52Z)
- Online Camera-to-ground Calibration for Autonomous Driving [26.357898919134833]
We propose an online monocular camera-to-ground calibration solution that does not utilize any specific targets while driving.
We provide metrics to quantify calibration performance and stopping criteria to report/broadcast satisfactory calibration results.
arXiv Detail & Related papers (2023-03-30T04:01:48Z)
- 3D Data Augmentation for Driving Scenes on Camera [50.41413053812315]
We propose a 3D data augmentation approach termed Drive-3DAug, aiming at augmenting the driving scenes on camera in the 3D space.
We first utilize Neural Radiance Field (NeRF) to reconstruct the 3D models of background and foreground objects.
Then, augmented driving scenes can be obtained by placing the 3D objects with adapted location and orientation at the pre-defined valid region of backgrounds.
arXiv Detail & Related papers (2023-03-18T05:51:05Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automates the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- Practical Auto-Calibration for Spatial Scene-Understanding from Crowdsourced Dashcamera Videos [1.0499611180329804]
We propose a system for practical monocular onboard camera auto-calibration from crowdsourced videos.
We show the effectiveness of our proposed system on the KITTI raw, Oxford RobotCar, and the crowdsourced D²-City datasets.
arXiv Detail & Related papers (2020-12-15T15:38:17Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera [74.45649274085447]
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)