Surround-view Fisheye Camera Perception for Automated Driving: Overview,
Survey and Challenges
- URL: http://arxiv.org/abs/2205.13281v1
- Date: Thu, 26 May 2022 11:38:04 GMT
- Title: Surround-view Fisheye Camera Perception for Automated Driving: Overview,
Survey and Challenges
- Authors: Varun Ravi Kumar, Ciaran Eising, Christian Witt, and Senthil Yogamani
- Abstract summary: Four fisheye cameras on four sides of the vehicle are sufficient to cover 360° around the vehicle, capturing the entire near-field region.
Some primary use cases are automated parking, traffic jam assist, and urban driving.
Due to the large radial distortion of fisheye cameras, standard algorithms cannot easily be extended to the surround-view use case.
- Score: 1.4452405977630436
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Surround-view fisheye cameras are commonly used for near-field sensing in
automated driving. Four fisheye cameras on four sides of the vehicle are
sufficient to cover 360° around the vehicle, capturing the entire
near-field region. Some primary use cases are automated parking, traffic jam
assist, and urban driving. There are limited datasets and very little work on
near-field perception tasks, as the main focus in automotive perception is on
far-field perception. In contrast to far-field, surround-view perception poses
additional challenges due to high-precision object detection requirements of
10 cm and the partial visibility of objects. Due to the large radial distortion
of fisheye cameras, standard algorithms cannot easily be extended to the
surround-view use case. Thus, we are motivated to provide a self-contained
reference for automotive fisheye camera perception for researchers and
practitioners. Firstly, we provide a unified and taxonomic treatment of
commonly used fisheye camera models. Secondly, we discuss various perception
tasks and existing literature. Finally, we discuss the challenges and future
directions.
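To make the camera-model discussion concrete, here is a minimal Python/NumPy sketch of the equidistant fisheye projection and an odd-power polynomial generalization of the kind such taxonomies cover. The focal length, principal point, and coefficients below are illustrative values, not calibration data from the paper.

```python
import numpy as np

def project_equidistant(point_3d, f, cx, cy):
    """Project a 3D camera-frame point with the equidistant fisheye model.

    A pinhole camera maps the incident angle theta as r = f * tan(theta),
    which diverges toward 90 degrees; the equidistant model uses r = f * theta,
    which keeps r finite and lets fisheye lenses cover ~180-degree fields of view.
    """
    X, Y, Z = point_3d
    theta = np.arctan2(np.hypot(X, Y), Z)   # angle from the optical axis
    phi = np.arctan2(Y, X)                  # azimuth around the axis
    r = f * theta                           # equidistant radial mapping
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

def project_polynomial(point_3d, coeffs, cx, cy):
    """Odd-power polynomial radial model r(theta) = k1*theta + k2*theta^3 + ...,
    a common generalization whose coefficients are fitted per lens during
    calibration (in the spirit of Kannala-Brandt-style models)."""
    X, Y, Z = point_3d
    theta = np.arctan2(np.hypot(X, Y), Z)
    phi = np.arctan2(Y, X)
    r = sum(k * theta ** (2 * i + 1) for i, k in enumerate(coeffs))
    return cx + r * np.cos(phi), cy + r * np.sin(phi)

# Illustrative values only: focal length in pixels and a principal point
# for a hypothetical 1280x960 fisheye image.
print(project_equidistant((1.0, 0.5, 2.0), f=320.0, cx=640.0, cy=480.0))
print(project_polynomial((1.0, 0.5, 2.0), coeffs=[320.0, -12.0, 0.5], cx=640.0, cy=480.0))
```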
Related papers
- FisheyeDetNet: 360° Surround view Fisheye Camera based Object Detection System for Autonomous Driving [4.972459365804512]
Object detection is a mature problem in autonomous driving, with pedestrian detection being one of the first deployed algorithms.
Standard bounding box representation fails in fisheye cameras due to heavy radial distortion, particularly in the periphery.
We design rotated bounding box, ellipse, and generic polygon (polar arc/angle) representations, and define an instance segmentation mIoU metric to analyze them.
The proposed FisheyeDetNet model with the polygon representation outperforms the others, achieving a mAP score of 49.5% on the Valeo fisheye surround-view dataset for automated driving applications.
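As a rough illustration of what a polar (arc/angle) polygon representation can look like (a sketch under our own assumptions, not FisheyeDetNet's actual parameterization), the following converts a binary instance mask into per-angle radii around its centroid:

```python
import numpy as np

def mask_to_polar_polygon(mask, num_rays=36):
    """Convert a binary instance mask into a polar polygon: for each of
    num_rays uniformly spaced angular bins around the mask centroid, keep
    the distance to the farthest mask pixel falling in that bin."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    angles = np.arctan2(ys - cy, xs - cx)        # angle of each mask pixel
    radii = np.hypot(ys - cy, xs - cx)           # distance of each pixel
    bins = ((angles + np.pi) / (2 * np.pi) * num_rays).astype(int) % num_rays
    polygon = np.zeros(num_rays)
    for b, r in zip(bins, radii):
        polygon[b] = max(polygon[b], r)          # outermost pixel per ray
    return (cx, cy), polygon

def polar_polygon_to_vertices(center, polygon):
    """Recover approximate (x, y) vertices from the polar representation."""
    cx, cy = center
    thetas = np.linspace(-np.pi, np.pi, len(polygon), endpoint=False)
    return np.stack([cx + polygon * np.cos(thetas),
                     cy + polygon * np.sin(thetas)], axis=1)

# Toy check: a filled disc should give nearly constant radii.
yy, xx = np.mgrid[0:64, 0:64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 < 20 ** 2
center, poly = mask_to_polar_polygon(disc)
print(poly.round(1))
```

Unlike an axis-aligned box, such a representation can follow the curved outlines that radial distortion produces, which is what an instance segmentation mIoU metric is suited to measure.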
arXiv Detail & Related papers (2024-04-20T18:50:57Z)
- The 8th AI City Challenge [57.25825945041515]
The 2024 edition featured five tracks, attracting unprecedented interest from 726 teams in 47 countries and regions.
The challenge utilized two leaderboards to showcase methods, with participants setting new benchmarks.
arXiv Detail & Related papers (2024-04-15T03:12:17Z)
- SDGE: Stereo Guided Depth Estimation for 360° Camera Sets [65.64958606221069]
Multi-camera systems are often used in autonomous driving to achieve a 360° perception.
These 360° camera sets often have limited or low-quality overlap regions, making multi-view stereo methods infeasible for the entire image.
We propose the Stereo Guided Depth Estimation (SGDE) method, which enhances depth estimation of the full image by explicitly utilizing multi-view stereo results on the overlap.
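A minimal sketch of the guiding idea, under our own simplified assumptions rather than SGDE's actual pipeline: stereo depth computed on the overlap region both rescales a scale-ambiguous monocular prediction and overrides it where stereo measurements exist.

```python
import numpy as np

def fuse_overlap_depth(mono_depth, stereo_depth, overlap_mask):
    """Hypothetical stereo-guided fusion: use reliable stereo depth in the
    overlap region to (1) rescale the scale-ambiguous monocular prediction
    via a median ratio and (2) replace predictions where stereo is valid."""
    valid = overlap_mask & (stereo_depth > 0) & (mono_depth > 0)
    scale = np.median(stereo_depth[valid] / mono_depth[valid])
    fused = mono_depth * scale
    fused[valid] = stereo_depth[valid]   # trust stereo where it exists
    return fused

# Toy data: stereo covers the left quarter of a 4x8 depth map at twice the
# monocular scale, so the whole fused map is rescaled accordingly.
mono = np.full((4, 8), 5.0)
stereo = np.zeros((4, 8))
stereo[:, :2] = 10.0
print(fuse_overlap_depth(mono, stereo, stereo > 0))
```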
arXiv Detail & Related papers (2024-02-19T02:41:37Z)
- Parking Spot Classification based on surround view camera system [1.1984905847118061]
We tackle parking spot classification based on the surround view camera system.
We adapt the object detection neural network YOLOv4 with a novel polygon bounding box model.
Results show that our proposed classification approach is effective in distinguishing between regular, electric vehicle, and handicap parking spots.
arXiv Detail & Related papers (2023-10-05T07:15:04Z)
- FisheyePP4AV: A privacy-preserving method for autonomous vehicles on fisheye camera images [1.534667887016089]
In many parts of the world, the use of vast amounts of data collected on public roadways for autonomous driving has increased.
Effective solutions are urgently needed to detect and anonymize pedestrian faces and nearby car license plates in real road-driving scenarios.
In this work, we focus on protecting privacy while still complying with several regulations for fisheye camera images captured by autonomous vehicles.
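The anonymization step itself can be as simple as blurring detected regions. The sketch below assumes bounding boxes are already available from some fisheye face/plate detector; the detector, which is the hard part such work addresses, is out of scope here.

```python
import cv2
import numpy as np

def anonymize_regions(image, boxes, blur_kernel=31):
    """Blur sensitive regions (faces, license plates) given detections.

    boxes are (x1, y1, x2, y2) pixel rectangles from a hypothetical
    upstream detector tuned for fisheye imagery.
    """
    out = image.copy()
    k = blur_kernel | 1                      # Gaussian kernels must be odd
    for x1, y1, x2, y2 in boxes:
        roi = out[y1:y2, x1:x2]
        if roi.size:
            out[y1:y2, x1:x2] = cv2.GaussianBlur(roi, (k, k), 0)
    return out

# Toy usage on a synthetic frame with one hypothetical detection.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
anonymized = anonymize_regions(frame, [(100, 120, 180, 170)])
```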
arXiv Detail & Related papers (2023-09-07T15:51:31Z)
- Streaming Object Detection on Fisheye Cameras for Automatic Parking [0.0]
We propose a real-time detection framework equipped with a dual-flow perception module that can predict the future and alleviate the time-lag problem.
The standard bounding box is unsuitable for objects in fisheye camera images due to the strong radial distortion of the fisheye camera.
We propose a new periodic angle loss function to regress the angle of the box, which is a simple and accurate way to represent such objects.
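The paper's exact formulation is not reproduced here, but a standard way to build a periodic angle loss (a plausible reading of the general construction) is to pass the angular error through sin(), so that predictions differing from the target by a whole period, such as a box rotated by 180 degrees, incur no penalty:

```python
import torch

def periodic_angle_loss(pred, target, period=torch.pi):
    """Penalize angular error through sin() so the loss is period-periodic:
    errors of 0, +-period, +-2*period, ... all cost zero."""
    return torch.sin((pred - target) * torch.pi / period).pow(2).mean()

pred = torch.tensor([0.1, 3.15])
target = torch.tensor([0.1 + torch.pi, 0.0])   # same boxes modulo pi
print(periodic_angle_loss(pred, target))       # close to zero
```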
arXiv Detail & Related papers (2023-05-24T04:30:25Z)
- Learning Active Camera for Multi-Object Navigation [94.89618442412247]
Getting robots to navigate to multiple objects autonomously is essential yet difficult in robotics applications.
Existing navigation methods mainly focus on fixed cameras and few attempts have been made to navigate with active cameras.
In this paper, we consider navigating to multiple objects more efficiently with active cameras.
arXiv Detail & Related papers (2022-10-14T04:17:30Z)
- SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation [101.55622133406446]
We propose the SurroundDepth method to incorporate the information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves state-of-the-art performance on challenging multi-camera depth estimation datasets.
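As a minimal sketch of the cross-view idea, under our own simplification rather than SurroundDepth's actual cross-view transformer, tokens from all cameras can be pooled into one sequence so that self-attention mixes information across views as well as across positions:

```python
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    """Each camera's feature tokens attend to the tokens of all surrounding
    cameras, so cues such as objects spanning adjacent views are shared
    before depth decoding."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, feats):
        # feats: (batch, cameras, tokens, dim); flatten cameras and tokens
        # into one sequence so attention runs across all views at once.
        b, n, t, d = feats.shape
        x = feats.reshape(b, n * t, d)
        fused, _ = self.attn(x, x, x)
        return self.norm(x + fused).reshape(b, n, t, d)

# Toy usage: 6 surround cameras, 8x8 feature maps flattened to 64 tokens.
feats = torch.randn(2, 6, 64, 64)
print(CrossViewFusion()(feats).shape)   # torch.Size([2, 6, 64, 64])
```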
arXiv Detail & Related papers (2022-04-07T17:58:47Z)
- Rope3D: The Roadside Perception Dataset for Autonomous Driving and Monocular 3D Object Detection Task [48.555440807415664]
We present Rope3D, the first high-diversity, challenging roadside perception 3D dataset, captured from a novel view.
The dataset consists of 50k images and over 1.5M 3D objects in various scenes.
We propose to leverage the geometry constraint to solve the inherent ambiguities caused by various sensors and viewpoints.
arXiv Detail & Related papers (2022-03-25T12:13:23Z)
- SVDistNet: Self-Supervised Near-Field Distance Estimation on Surround View Fisheye Cameras [30.480562747903186]
A 360° perception of scene geometry is essential for automated driving, notably for parking and urban driving scenarios.
We present novel camera-geometry adaptive multi-scale convolutions which utilize the camera parameters as a conditional input.
We evaluate our approach on the Fisheye WoodScape surround-view dataset, significantly improving over previous approaches.
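One way to condition convolutions on camera parameters, in the spirit of the description above but not the paper's exact design, is to concatenate per-pixel normalized ray coordinates derived from the intrinsics as extra input channels:

```python
import torch
import torch.nn as nn

class CameraAdaptiveConv(nn.Module):
    """Convolution whose input is augmented with camera-geometry channels:
    normalized ray coordinates computed from the intrinsics, so a single
    network can adapt to differently calibrated cameras."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + 2, out_ch, 3, padding=1)

    def forward(self, x, fx, fy, cx, cy):
        b, _, h, w = x.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        # Normalized ray coordinates encode where each pixel looks, which
        # varies strongly across a fisheye image.
        u = ((xs + 0.5 - cx) / fx).expand(b, 1, h, w).to(x)
        v = ((ys + 0.5 - cy) / fy).expand(b, 1, h, w).to(x)
        return self.conv(torch.cat([x, u, v], dim=1))

# Toy usage with hypothetical intrinsics for a 32x64 feature map.
feat = torch.randn(1, 16, 32, 64)
layer = CameraAdaptiveConv(16, 32)
print(layer(feat, fx=300.0, fy=300.0, cx=32.0, cy=16.0).shape)
```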
arXiv Detail & Related papers (2021-04-09T15:20:20Z)
- Depth Sensing Beyond LiDAR Range [84.19507822574568]
We propose a novel three-camera system that utilizes cameras with a small field of view.
Our system, along with our novel algorithm for computing metric depth, does not require full pre-calibration.
It can output dense depth maps with practically acceptable accuracy for scenes and objects at long distances.
arXiv Detail & Related papers (2020-04-07T00:09:51Z)