Automatic Estimation of Sphere Centers from Images of Calibrated Cameras
- URL: http://arxiv.org/abs/2002.10217v1
- Date: Mon, 24 Feb 2020 13:12:08 GMT
- Title: Automatic Estimation of Sphere Centers from Images of Calibrated Cameras
- Authors: Levente Hajder, Tekla Tóth, and Zoltán Pusztai
- Abstract summary: This paper deals with the automatic detection of ellipses in camera images, as well as with the estimation of the 3D positions of the spheres corresponding to the detected 2D ellipses.
We propose two novel methods to (i) detect an ellipse in camera images and (ii) estimate the spatial location of the corresponding sphere if its size is known.
They are applied for calibrating the sensor system of autonomous cars equipped with digital cameras, depth sensors and LiDAR devices.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Calibration of devices with different modalities is a key problem in robotic
vision. Regular spatial objects, such as planes, are frequently used for this
task. This paper deals with the automatic detection of ellipses in camera
images, as well as with the estimation of the 3D positions of the spheres corresponding to
the detected 2D ellipses. We propose two novel methods to (i) detect an ellipse
in camera images and (ii) estimate the spatial location of the corresponding
sphere if its size is known. The algorithms are tested both quantitatively and
qualitatively. They are applied for calibrating the sensor system of autonomous
cars equipped with digital cameras, depth sensors and LiDAR devices.
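The two components of the abstract rest on a classical piece of projective geometry: the silhouette of a sphere back-projects through a calibrated camera to a right circular cone tangent to the sphere, whose axis points at the sphere center and whose half-angle fixes the distance. The sketch below is a minimal illustration of that geometry, not the authors' algorithm: it assumes silhouette edge points are already extracted, fits a conic algebraically, and recovers the center from the cone's eigendecomposition; all function names are illustrative.

```python
import numpy as np

def fit_conic(pts):
    """Algebraic least-squares conic fit to Nx2 edge points (N >= 6).
    Returns the symmetric 3x3 conic matrix C with p^T C p = 0 for
    homogeneous pixel coordinates p = (u, v, 1)."""
    u, v = pts[:, 0], pts[:, 1]
    D = np.column_stack([u*u, u*v, v*v, u, v, np.ones_like(u)])
    # Smallest right singular vector minimizes ||D w|| with ||w|| = 1.
    a, b, c, d, e, f = np.linalg.svd(D)[2][-1]
    return np.array([[a,   b/2, d/2],
                     [b/2, c,   e/2],
                     [d/2, e/2, f  ]])

def sphere_center(C, K, radius):
    """Recover the sphere center in the camera frame from the image
    conic C, the intrinsic matrix K, and the known sphere radius."""
    Q = K.T @ C @ K                       # viewing cone: x^T Q x = 0
    lam, V = np.linalg.eigh(Q)            # eigenvalues in ascending order
    # A sphere's viewing cone is circular: two eigenvalues share a sign,
    # the third (belonging to the cone axis) has the opposite sign.
    odd = 0 if np.sign(lam[0]) != np.sign(lam[1]) else 2
    axis = V[:, odd]
    pair = np.mean(np.delete(lam, odd))   # the (nearly) equal pair
    tan_half = np.sqrt(-lam[odd] / pair)  # tan of the cone half-angle
    sin_half = tan_half / np.sqrt(1.0 + tan_half**2)
    center = (radius / sin_half) * axis   # d = r / sin(theta)
    return center if center[2] > 0 else -center  # sphere is in front
```

With noisy detections the two "equal" eigenvalues differ slightly; averaging them, as above, is a common stabilization, and a constrained ellipse fit (e.g., Fitzgibbon's direct method) would be preferable to the plain algebraic fit.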
Related papers
- R-C-P Method: An Autonomous Volume Calculation Method Using Image Processing and Machine Vision
Two cameras were used to measure the dimensions of a rectangular object in real-time.
The R-C-P method is developed using image processing and edge detection.
In addition to the surface areas, the R-C-P method also detects discontinuous edges or volumes.
arXiv Detail & Related papers (2023-08-19T15:39:27Z)
- Extrinsic Camera Calibration with Semantic Segmentation
We present an extrinsic camera calibration approach that automatizes the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- Monocular 3D Object Detection with Depth from Motion
We take advantage of camera ego-motion for accurate object depth estimation and detection.
Our framework, named Depth from Motion (DfM), then uses the established geometry to lift 2D image features to the 3D space and detects 3D objects thereon.
Our framework outperforms state-of-the-art methods by a large margin on the KITTI benchmark.
arXiv Detail & Related papers (2022-07-26T15:48:46Z)
- Robot Self-Calibration Using Actuated 3D Sensors
This paper treats robot calibration as an offline SLAM problem, where scanning poses are linked to a fixed point in space by a moving kinematic chain.
As such, the presented framework allows robot calibration using nothing but an arbitrary eye-in-hand depth sensor.
A detailed evaluation of the system is shown on a real robot with various attached 3D sensors.
arXiv Detail & Related papers (2022-06-07T16:35:08Z)
- 2D LiDAR and Camera Fusion Using Motion Cues for Indoor Layout Estimation
A ground robot explores an indoor space with a single floor and vertical walls, and collects a sequence of intensity images and 2D LiDAR datasets.
The alignment of sensor outputs and image segmentation are computed jointly by aligning LiDAR points.
The ambiguity in images for ground-wall boundary extraction is removed with the assistance of LiDAR observations.
arXiv Detail & Related papers (2022-04-24T06:26:02Z)
- Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data
We propose a self-supervised pre-training method for 3D perception models tailored to autonomous driving data.
We leverage the availability of synchronized and calibrated image and Lidar sensors in autonomous driving setups.
Our method does not require any point cloud or image annotations.
arXiv Detail & Related papers (2022-03-30T12:40:30Z)
- High-level camera-LiDAR fusion for 3D object detection with machine learning
This paper tackles the 3D object detection problem, which is of vital importance for applications such as autonomous driving.
It uses a Machine Learning pipeline on a combination of monocular camera and LiDAR data to detect vehicles in the surrounding 3D space of a moving platform.
Our results demonstrate an efficient and accurate inference on a validation set, achieving an overall accuracy of 87.1%.
arXiv Detail & Related papers (2021-05-24T01:57:34Z)
- Calibrated and Partially Calibrated Semi-Generalized Homographies
We propose the first minimal solutions for estimating the semi-generalized homography given a perspective and a generalized camera.
The proposed solvers are stable and efficient as demonstrated by a number of synthetic and real-world experiments.
arXiv Detail & Related papers (2021-03-11T08:56:24Z)
- PLUME: Efficient 3D Object Detection from Stereo Images
Existing methods tackle the problem in two steps: first, depth estimation is performed and a pseudo-LiDAR point cloud representation is computed from the depth estimates; then object detection is performed in 3D space.
We propose a model that unifies these two tasks in the same metric space.
Our approach achieves state-of-the-art performance on the challenging KITTI benchmark, with significantly reduced inference time compared with existing methods.
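For context on the two-step baselines: the pseudo-LiDAR intermediate is simply the depth map back-projected into a 3D point cloud through the pinhole intrinsics. A hedged sketch of that conversion step follows; names and signatures are illustrative, not PLUME's code.

```python
import numpy as np

def depth_to_pseudo_lidar(depth, fx, fy, cx, cy):
    """Back-project an HxW metric depth map into an (H*W)x3 point
    cloud in the camera frame, using pinhole intrinsics."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinate grids
    x = (u - cx) * depth / fx   # X = (u - cx) * Z / fx
    y = (v - cy) * depth / fy   # Y = (v - cy) * Z / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)
```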
arXiv Detail & Related papers (2021-01-17T05:11:38Z)
- 3D Object Localization Using 2D Estimates for Computer Vision Applications
A technique for object localization based on pose estimation and camera calibration is presented.
The 3-dimensional (3D) coordinates are estimated from multiple 2-dimensional (2D) images of the object, which are also utilized for the calibration of the camera.
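The abstract does not state the estimation procedure; a standard way to recover 3D coordinates from multiple calibrated 2D views is linear (DLT) triangulation, sketched below as an assumption rather than as the paper's method.

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two calibrated views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    Minimizes the algebraic error of the DLT system A X = 0."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    X = np.linalg.svd(A)[2][-1]   # smallest right singular vector
    return X[:3] / X[3]           # dehomogenize to Euclidean coordinates
```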
arXiv Detail & Related papers (2020-09-24T01:50:24Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach is able to estimate the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.