Spherical formulation of moving object geometric constraints for monocular fisheye cameras
- URL: http://arxiv.org/abs/2003.03262v1
- Date: Fri, 6 Mar 2020 14:59:38 GMT
- Title: Spherical formulation of moving object geometric constraints for monocular fisheye cameras
- Authors: Letizia Mariotti and Ciaran Hughes
- Abstract summary: We introduce a moving object detection algorithm for fisheye cameras used in autonomous driving.
We reformulate the three constraints commonly used in rectilinear images in spherical coordinates.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a moving object detection algorithm for fisheye cameras used in autonomous driving. We reformulate the three constraints commonly used in rectilinear images (the epipolar, positive depth and positive height constraints) in spherical coordinates, which makes them invariant to the specific camera configuration once the calibration is known. One of the main challenging use cases in autonomous driving is detecting parallel moving objects, which suffer from motion-parallax ambiguity. To alleviate this, we formulate an additional fourth constraint, called the anti-parallel constraint, which makes it possible to detect objects whose motion mirrors that of the ego-vehicle. We analyze the proposed algorithm in different scenarios and demonstrate that it works effectively when operating directly on fisheye images.
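The three reformulated constraints act on calibrated viewing rays on the unit sphere rather than on image-plane coordinates. As a rough illustration of the idea (a sketch, not the authors' implementation; all names are hypothetical and the fisheye-to-ray lifting is assumed to come from the calibration), the epipolar and positive depth checks for a point tracked across two frames might look like:

```python
import numpy as np

def spherical_epipolar_residual(ray1, ray2, R, t):
    """Distance of ray2 from the epipolar plane spanned by t and R @ ray1.

    ray1, ray2: unit viewing rays of the same scene point in frames 1 and 2.
    R, t: relative camera rotation and translation (ego-motion).
    A static point satisfies ray2 . (t x (R @ ray1)) = 0, so a large
    residual flags a candidate moving point.
    """
    normal = np.cross(t, R @ ray1)            # normal of the epipolar plane
    n = np.linalg.norm(normal)
    if n < 1e-12:                             # degenerate: ray along translation
        return 0.0
    return abs(float(np.dot(ray2, normal / n)))

def positive_depth_ok(ray1, ray2, R, t):
    """Positive depth (cheirality) check via least-squares triangulation."""
    # A static point at depth d1 along ray1 reprojects to depth d2 along ray2:
    #   d1 * (R @ ray1) - d2 * ray2 = -t
    A = np.stack([R @ ray1, -ray2], axis=1)   # 3x2 system in (d1, d2)
    d, *_ = np.linalg.lstsq(A, -t, rcond=None)
    return bool(d[0] > 0 and d[1] > 0)
```

A point whose epipolar residual exceeds a noise threshold, or which triangulates to a negative depth, would be labelled as moving; the positive height and anti-parallel constraints would be layered on in the same ray-based form.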
Related papers
- Multi-Object Tracking with Camera-LiDAR Fusion for Autonomous Driving
The proposed MOT algorithm comprises a three-step association process, an Extended Kalman filter for estimating the motion of each detected dynamic obstacle, and a track management phase.
Unlike most state-of-the-art multi-modal MOT approaches, the proposed algorithm does not rely on maps or knowledge of the ego global pose.
The algorithm is validated both in simulation and with real-world data, with satisfactory results.
arXiv Detail & Related papers (2024-03-06T23:49:16Z)
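As a generic illustration of the filtering step mentioned above, the predict/update cycle of a Kalman filter for a constant-velocity obstacle model is sketched below (a textbook linear filter, not the paper's EKF; the state layout and noise magnitudes are assumptions):

```python
import numpy as np

def kf_predict(x, P, dt, q=1.0):
    """Constant-velocity prediction; state x = [px, py, vx, vy]."""
    F = np.array([[1.0, 0.0, dt,  0.0],
                  [0.0, 1.0, 0.0, dt ],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    Q = q * np.eye(4)                        # simplistic process noise
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, r=0.5):
    """Correct the state with a position measurement z = [px, py]."""
    H = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0]])
    R = r * np.eye(2)
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    return x + K @ y, (np.eye(4) - K @ H) @ P
```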
- Visually Guided Object Grasping
We show how to represent a grasp, or more generally an alignment between two solids, in 3-D projective space using an uncalibrated stereo rig.
We analyze the performance of the visual servoing algorithm and the grasping precision that can be expected from this type of approach.
arXiv Detail & Related papers (2023-11-21T15:08:17Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
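The dense point trajectories that ParticleSfM consumes can be illustrated by naively chaining pairwise optical flow fields, as sketched below; the actual pipeline additionally cleans the trajectories with the proposed trajectory network, which this toy version omits:

```python
import numpy as np

def chain_flow(flows, start_points):
    """Chain pairwise optical flow into dense point trajectories.

    flows: list of (H, W, 2) arrays; flows[t][y, x] displaces frame t -> t+1.
    start_points: (N, 2) array of (x, y) positions in frame 0.
    Returns a (T+1, N, 2) trajectory array. Nearest-neighbour lookup for
    brevity; a real system would sample the flow bilinearly and drop
    occluded tracks via forward-backward consistency.
    """
    pts = start_points.astype(float)
    traj = [pts.copy()]
    for flow in flows:
        h, w, _ = flow.shape
        xi = np.clip(np.round(pts[:, 0]).astype(int), 0, w - 1)
        yi = np.clip(np.round(pts[:, 1]).astype(int), 0, h - 1)
        pts = pts + flow[yi, xi]             # advect each point by its flow
        traj.append(pts.copy())
    return np.stack(traj)
```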
- PolarFormer: Multi-camera 3D Object Detection with Polar Transformers
3D object detection in autonomous driving aims to reason about "what" and "where" the objects of interest are in a 3D world.
Existing methods often adopt the canonical Cartesian coordinate system with perpendicular axes.
We propose a new Polar Transformer (PolarFormer) for more accurate 3D object detection in the bird's-eye-view (BEV), taking only multi-camera 2D images as input.
arXiv Detail & Related papers (2022-06-30T16:32:48Z)
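To illustrate the polar rasterization PolarFormer argues for (only the coordinate convention, not the transformer model itself; the grid sizes below are arbitrary assumptions), BEV points can be binned by azimuth and range rather than by x and y:

```python
import numpy as np

def polar_bev_grid(points, n_rays=64, n_bins=128, max_range=50.0):
    """Rasterize BEV points (x, y) into a polar occupancy grid.

    Each cell spans one azimuth ray and one radial bin, matching the
    angular way surround-view cameras sweep the scene.
    """
    r = np.hypot(points[:, 0], points[:, 1])
    theta = np.arctan2(points[:, 1], points[:, 0])              # [-pi, pi]
    ray = ((theta + np.pi) / (2 * np.pi) * n_rays).astype(int) % n_rays
    rbin = np.clip((r / max_range * n_bins).astype(int), 0, n_bins - 1)
    grid = np.zeros((n_rays, n_bins))
    grid[ray, rbin] = 1.0
    return grid
```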
- Image-to-Lidar Self-Supervised Distillation for Autonomous Driving Data
We propose a self-supervised pre-training method for 3D perception models tailored to autonomous driving data.
We leverage the availability of synchronized and calibrated image and Lidar sensors in autonomous driving setups.
Our method requires neither point cloud nor image annotations.
arXiv Detail & Related papers (2022-03-30T12:40:30Z)
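The synchronized and calibrated setup this pre-training relies on amounts to being able to pair each Lidar point with an image pixel. A generic projection sketch under an assumed pinhole intrinsic matrix K and extrinsic transform T_cam_lidar (not the paper's code):

```python
import numpy as np

def project_lidar_to_image(points, T_cam_lidar, K, width, height):
    """Project (N, 3) Lidar points into a camera image.

    Returns (N, 2) pixel coordinates and a boolean mask of points that
    land in front of the camera and inside the image; these point/pixel
    pairs are what a 2D-to-3D distillation loss is built on.
    """
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ pts_h.T).T[:, :3]         # Lidar frame -> camera frame
    in_front = cam[:, 2] > 1e-6
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-6)  # perspective division
    valid = (in_front
             & (uv[:, 0] >= 0) & (uv[:, 0] < width)
             & (uv[:, 1] >= 0) & (uv[:, 1] < height))
    return uv, valid
```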
- Attentive and Contrastive Learning for Joint Depth and Motion Field Estimation
Estimating the motion of the camera together with the 3D structure of the scene from a monocular vision system is a complex task.
We present a self-supervised learning framework for 3D object motion field estimation from monocular videos.
arXiv Detail & Related papers (2021-10-13T16:45:01Z)
- Spherical formulation of geometric motion segmentation constraints in fisheye cameras
We introduce a visual motion segmentation method employing spherical geometry for fisheye cameras and automated driving.
Results are presented and analyzed, demonstrating that the proposal is an effective motion segmentation approach for direct use on fisheye imagery.
arXiv Detail & Related papers (2021-04-26T08:48:12Z)
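Both this paper and the one at the top of the page start by lifting fisheye pixels onto the unit sphere, where the spherical constraints are evaluated. A minimal sketch under an assumed equidistant lens model, r = f * theta (real calibrations substitute a fitted distortion polynomial):

```python
import numpy as np

def fisheye_pixel_to_ray(u, v, cx, cy, f):
    """Lift pixel (u, v) to a unit viewing ray on the sphere.

    cx, cy: principal point; f: focal length of the equidistant model,
    where the radial image distance r maps to the angle theta = r / f
    from the optical axis.
    """
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    if r < 1e-12:                        # pixel on the optical axis
        return np.array([0.0, 0.0, 1.0])
    theta = r / f
    s = np.sin(theta) / r                # scale radial direction onto the sphere
    return np.array([dx * s, dy * s, np.cos(theta)])
```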
- Calibrated and Partially Calibrated Semi-Generalized Homographies
We propose the first minimal solutions for estimating the semi-generalized homography given a perspective and a generalized camera.
The proposed solvers are stable and efficient as demonstrated by a number of synthetic and real-world experiments.
arXiv Detail & Related papers (2021-03-11T08:56:24Z)
- Generalized Object Detection on Fisheye Cameras for Autonomous Driving: Dataset, Representations and Baseline
We explore better representations like oriented bounding box, ellipse, and generic polygon for object detection in fisheye images.
We design a novel curved bounding box model that has optimal properties for fisheye distortion models.
It is the first detailed study of object detection on fisheye cameras for autonomous driving scenarios.
arXiv Detail & Related papers (2020-12-03T18:00:16Z)
- Road Curb Detection and Localization with Monocular Forward-view Vehicle Camera
We propose a robust method for estimating road curb 3D parameters using a calibrated monocular camera equipped with a fisheye lens.
Our approach estimates the vehicle-to-curb distance in real time with a mean accuracy of more than 90%.
arXiv Detail & Related papers (2020-02-28T00:24:18Z)
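The metric vehicle-to-curb distance reported above is recoverable from a single calibrated camera because curb points lie on the road surface. A generic ray/ground-plane intersection sketch under a flat-road assumption (names and frames are illustrative, not the paper's formulation):

```python
import numpy as np

def ground_distance(ray_cam, R_wc, cam_height):
    """Horizontal distance to where a viewing ray meets the ground plane.

    ray_cam: unit ray in camera coordinates (e.g. from a fisheye lifting
    like the sketch above); R_wc: rotation from camera to a world frame
    whose z axis points up; cam_height: camera height above the road (m).
    """
    ray_w = R_wc @ ray_cam
    if ray_w[2] >= -1e-6:                # ray points at or above the horizon
        return np.inf
    lam = cam_height / -ray_w[2]         # scale so the ray drops by cam_height
    hit = lam * ray_w                    # ground point relative to the camera
    return float(np.hypot(hit[0], hit[1]))
```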
This list is automatically generated from the titles and abstracts of the papers on this site.