Surround-View Fisheye Optics in Computer Vision and Simulation: Survey and Challenges
- URL: http://arxiv.org/abs/2402.12041v2
- Date: Wed, 21 Feb 2024 14:48:28 GMT
- Title: Surround-View Fisheye Optics in Computer Vision and Simulation: Survey and Challenges
- Authors: Daniel Jakab, Brian Michael Deegan, Sushil Sharma, Eoin Martino Grua, Jonathan Horgan, Enda Ward, Pepijn Van De Ven, Anthony Scanlan, Ciarán Eising
- Abstract summary: The automotive industry has advanced in applying state-of-the-art computer vision to enhance road safety and provide automated driving functionality.
One crucial challenge in surround-view cameras is the strong optical aberration of the fisheye camera.
A comprehensive dataset is needed for testing safety-critical scenarios in vehicle automation.
- Score: 1.2673797373220104
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we provide a survey on automotive surround-view fisheye
optics, with an emphasis on the impact of optical artifacts on computer vision
tasks in autonomous driving and ADAS. The automotive industry has advanced in
applying state-of-the-art computer vision to enhance road safety and provide
automated driving functionality. When using camera systems on vehicles, there
is a particular need for a wide field of view to capture the entire vehicle's
surroundings, in areas such as low-speed maneuvering, automated parking, and
cocoon sensing. However, one crucial challenge in surround-view cameras is the
strong optical aberration of the fisheye camera, an area that has received
little attention in the literature. Additionally, a comprehensive
dataset is needed for testing safety-critical scenarios in vehicle automation.
The industry has turned to simulation as a cost-effective strategy for creating
synthetic datasets with surround-view camera imagery. We examine different
simulation methods (such as model-driven and data-driven simulations) and
discuss the simulators' ability (or lack thereof) to model real-world optical
performance. Overall, this paper highlights the optical aberrations in
automotive fisheye datasets, and the limitations of optical reality in
simulated fisheye datasets, with a focus on computer vision in surround-view
optical systems.
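As a concrete illustration of the fisheye geometry discussed above, the sketch below implements a generic polynomial fisheye projection of the kind commonly fitted to automotive lenses; the ideal equidistant lens is the special case k = (1, 0, 0, 0). The focal length, principal point, and coefficients are illustrative placeholders, not values from the paper.

```python
import numpy as np

def project_fisheye(points_cam, f=320.0, cx=640.0, cy=400.0,
                    k=(1.0, -0.05, 0.01, -0.002)):
    """Project 3D camera-frame points with a polynomial fisheye model.

    Uses r(theta) = f * (k1*theta + k2*theta**3 + k3*theta**5 + k4*theta**7);
    the ideal equidistant lens is the special case k = (1, 0, 0, 0).
    All parameter values here are illustrative placeholders.
    """
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    # Angle between each ray and the optical axis. Unlike the pinhole
    # model r = f*tan(theta), this stays finite for theta >= 90 degrees.
    theta = np.arctan2(np.hypot(X, Y), Z)
    r = f * (k[0]*theta + k[1]*theta**3 + k[2]*theta**5 + k[3]*theta**7)
    phi = np.arctan2(Y, X)  # azimuth of the ray around the optical axis
    return np.stack([cx + r*np.cos(phi), cy + r*np.sin(phi)], axis=1)

# A point 80 degrees off-axis still lands on the sensor, which a pinhole
# camera of comparable focal length could not capture.
pt = np.array([[np.tan(np.radians(80.0)), 0.0, 1.0]])
print(project_fisheye(pt))
```

Note that such a projection captures only the geometric mapping; the optical aberrations the survey emphasizes, such as blur that varies across the field, are not modeled by it, which is exactly the gap the paper identifies in simulated fisheye imagery.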
Related papers
- DrivingSphere: Building a High-fidelity 4D World for Closed-loop Simulation [54.02069690134526]
We propose DrivingSphere, a realistic and closed-loop simulation framework.
Its core idea is to build a 4D world representation and generate realistic, controllable driving scenarios.
By providing a dynamic and realistic simulation environment, DrivingSphere enables comprehensive testing and validation of autonomous driving algorithms.
arXiv Detail & Related papers (2024-11-18T03:00:33Z)
- Learning autonomous driving from aerial imagery [67.06858775696453]
Photogrammetric simulators allow the synthesis of novel views by transforming pre-generated assets.
We use a Neural Radiance Field (NeRF) as an intermediate representation to synthesize novel views from the point of view of a ground vehicle.
arXiv Detail & Related papers (2024-10-18T05:09:07Z)
- Probing Multimodal LLMs as World Models for Driving [72.18727651074563]
We look at the application of Multimodal Large Language Models (MLLMs) in autonomous driving.
Despite advances in models like GPT-4o, their performance in complex driving environments remains largely unexplored.
arXiv Detail & Related papers (2024-05-09T17:52:42Z)
- Measuring Natural Scenes SFR of Automotive Fisheye Cameras [0.30786914102688595]
The Modulation Transfer Function (MTF) is an important image quality metric typically used in the automotive domain.
Wide field-of-view (FOV) cameras have become increasingly popular, particularly for low-speed vehicle automation applications.
This paper proposes an adaptation of the Natural Scenes Spatial Frequency Response (NS-SFR) algorithm to suit cameras with a wide field-of-view.
arXiv Detail & Related papers (2024-01-10T15:59:59Z)
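The MTF mentioned in the entry above can be illustrated with a simplified slanted-edge-style computation: estimate the edge spread function (ESF) across a dark-to-bright edge, differentiate it to get the line spread function (LSF), and take the normalized FFT magnitude. The sketch below is a didactic single-row stand-in for that pipeline, not the NS-SFR algorithm proposed in the paper; all data is synthetic.

```python
import numpy as np

def mtf_from_edge(row):
    """Estimate an MTF curve from one row of pixels crossing an edge.

    Simplified ISO 12233-style chain: edge spread function (ESF) ->
    line spread function (LSF, the derivative) -> |FFT(LSF)| normalized
    to 1 at DC. Real slanted-edge or NS-SFR implementations oversample
    across many rows; this single-row version is purely didactic.
    """
    esf = np.asarray(row, dtype=float)
    lsf = np.gradient(esf)             # differentiate the ESF
    lsf *= np.hanning(lsf.size)        # window to reduce spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                      # normalize so MTF(0) = 1
    freqs = np.fft.rfftfreq(lsf.size)  # spatial frequency in cycles/pixel
    return freqs, mtf

# Synthetic dark-to-bright step blurred by a 5-pixel box filter.
edge = np.convolve(np.repeat([0.1, 0.9], 32), np.ones(5) / 5, mode="same")
freqs, mtf = mtf_from_edge(edge)
# MTF50, the frequency where contrast falls to 50%, is a common summary.
print("MTF50 ~ %.3f cycles/pixel" % freqs[np.argmax(mtf < 0.5)])
```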
- Street-View Image Generation from a Bird's-Eye View Layout [95.36869800896335]
Bird's-Eye View (BEV) Perception has received increasing attention in recent years.
Data-driven simulation for autonomous driving has been a focal point of recent research.
We propose BEVGen, a conditional generative model that synthesizes realistic and spatially consistent surrounding images.
arXiv Detail & Related papers (2023-01-11T18:39:34Z)
- Optical Flow for Autonomous Driving: Applications, Challenges and Improvements [0.9023847175654602]
We propose and evaluate training strategies to improve a learning-based optical flow algorithm.
While trained with synthetic data, the model demonstrates strong capabilities to generalize to real world fisheye data.
We also propose a novel, generic semi-supervised framework that significantly boosts the performance of existing methods in low light.
arXiv Detail & Related papers (2023-01-11T12:01:42Z)
- Surround-view Fisheye Camera Perception for Automated Driving: Overview, Survey and Challenges [1.4452405977630436]
Four fisheye cameras on the four sides of the vehicle are sufficient to cover 360° around the vehicle, capturing the entire near-field region.
Some primary use cases are automated parking, traffic jam assist, and urban driving.
Due to the large radial distortion of fisheye cameras, standard algorithms cannot be extended easily to the surround-view use case.
arXiv Detail & Related papers (2022-05-26T11:38:04Z)
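A common way to cope with the radial distortion noted in the entry above is to remap fisheye frames to a rectilinear view before applying standard algorithms. Below is a minimal sketch using OpenCV's fisheye module, which implements a Kannala-Brandt-style model; the intrinsics and distortion coefficients are placeholders rather than a real calibration.

```python
import cv2
import numpy as np

# Placeholder intrinsics for a wide-FOV automotive fisheye camera using
# OpenCV's Kannala-Brandt-style model; real values come from calibration.
K = np.array([[320.0,   0.0, 640.0],
              [  0.0, 320.0, 400.0],
              [  0.0,   0.0,   1.0]])
D = np.array([[0.05], [-0.01], [0.002], [-0.0005]])  # k1..k4

def rectify(fisheye_img, balance=0.0):
    """Remap a fisheye frame to a rectilinear (pinhole) view."""
    size = fisheye_img.shape[1::-1]  # (width, height)
    new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
        K, D, size, np.eye(3), balance=balance)
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), new_K, size, cv2.CV_16SC2)
    return cv2.remap(fisheye_img, map1, map2, interpolation=cv2.INTER_LINEAR)

frame = np.zeros((800, 1280, 3), np.uint8)  # stand-in for a captured frame
rectified = rectify(frame)
```

The catch is that a rectilinear view cannot represent rays at or beyond 90° off-axis, so rectification necessarily discards part of the wide field of view; this is one reason the entry notes that standard algorithms do not extend easily to the surround-view case.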
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full-scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- SurfelGAN: Synthesizing Realistic Sensor Data for Autonomous Driving [27.948417322786575]
We present a simple yet effective approach to generate realistic scenario sensor data.
Our approach uses texture-mapped surfels to efficiently reconstruct the scene from an initial vehicle pass or set of passes.
We then leverage a SurfelGAN network to reconstruct realistic camera images for novel positions and orientations of the self-driving vehicle.
arXiv Detail & Related papers (2020-05-08T04:01:14Z)