Measuring Natural Scenes SFR of Automotive Fisheye Cameras
- URL: http://arxiv.org/abs/2401.05232v1
- Date: Wed, 10 Jan 2024 15:59:59 GMT
- Title: Measuring Natural Scenes SFR of Automotive Fisheye Cameras
- Authors: Daniel Jakab, Eoin Martino Grua, Brian Micheal Deegan, Anthony
Scanlan, Pepijn Van De Ven, and Ciarán Eising
- Abstract summary: The Modulation Transfer Function (MTF) is an important image quality metric typically used in the automotive domain.
Wide field-of-view (FOV) cameras have become increasingly popular, particularly for low-speed vehicle automation applications.
This paper proposes an adaptation of the Natural Scenes Spatial Frequency Response (NS-SFR) algorithm to suit cameras with a wide field-of-view.
- Score: 0.30786914102688595
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Modulation Transfer Function (MTF) is an important image quality metric
typically used in the automotive domain. However, although optical quality
affects the performance of computer vision in vehicle automation, this metric
is unknown for many public datasets. Additionally,
wide field-of-view (FOV) cameras have become increasingly popular, particularly
for low-speed vehicle automation applications. To investigate image quality in
datasets, this paper proposes an adaptation of the Natural Scenes Spatial
Frequency Response (NS-SFR) algorithm to suit cameras with a wide
field-of-view.
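At its core, an SFR/MTF measurement — whether from a test chart or, as NS-SFR does, from edges found in natural scenes — reduces to differentiating an edge spread function (ESF) into a line spread function (LSF) and normalizing the magnitude of its Fourier transform. The sketch below is a minimal illustration of that generic pipeline on a synthetic 1-D edge, assuming NumPy; the function and variable names are illustrative and not taken from the paper, which additionally handles edge isolation in natural scenes and wide-FOV geometry.

```python
import numpy as np

def sfr_from_edge(esf, oversample=1):
    """Estimate a Spatial Frequency Response from a 1-D edge
    spread function, as in generic slanted-edge measurement."""
    esf = np.asarray(esf, dtype=float)
    # Differentiate the ESF to obtain the line spread function.
    lsf = np.diff(esf)
    # Apply a Hamming window to suppress noise at the profile ends.
    lsf = lsf * np.hamming(lsf.size)
    # The SFR is the DFT magnitude of the LSF, normalized to 1 at DC.
    spectrum = np.abs(np.fft.rfft(lsf))
    sfr = spectrum / spectrum[0]
    freqs = np.fft.rfftfreq(lsf.size, d=1.0 / oversample)  # cycles/sample
    return freqs, sfr

# Example: a smooth synthetic (logistic) edge profile.
x = np.linspace(-8.0, 8.0, 129)
esf = 1.0 / (1.0 + np.exp(-2.0 * x))
freqs, sfr = sfr_from_edge(esf)
```

For a well-sampled smooth edge like this one, the returned `sfr` starts at 1.0 at zero frequency and falls toward zero near Nyquist; real measurements additionally require sub-pixel edge registration and oversampling of the ESF.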
Related papers
- Surround-View Fisheye Optics in Computer Vision and Simulation: Survey
and Challenges [1.2673797373220104]
The automotive industry has advanced in applying state-of-the-art computer vision to enhance road safety and provide automated driving functionality.
One crucial challenge in surround-view cameras is the strong optical aberrations of the fisheye camera.
A comprehensive dataset is needed for testing safety-critical scenarios in vehicle automation.
arXiv Detail & Related papers (2024-02-19T10:56:28Z)
- E2HQV: High-Quality Video Generation from Event Camera via Theory-Inspired Model-Aided Deep Learning [53.63364311738552]
Bio-inspired event cameras or dynamic vision sensors are capable of capturing per-pixel brightness changes (called event-streams) in high temporal resolution and high dynamic range.
It calls for events-to-video (E2V) solutions which take event-streams as input and generate high quality video frames for intuitive visualization.
We propose E2HQV, a novel E2V paradigm designed to produce high-quality video frames from events.
arXiv Detail & Related papers (2024-01-16T05:10:50Z)
- Homography Estimation in Complex Topological Scenes [6.023710971800605]
Surveillance videos and images are used for a broad set of applications, ranging from traffic analysis to crime detection.
Extrinsic camera calibration data is important for most analysis applications.
We present an automated camera-calibration process leveraging a dictionary-based approach that does not require prior knowledge of any camera settings.
arXiv Detail & Related papers (2023-08-02T11:31:43Z)
- Windscreen Optical Quality for AI Algorithms: Refractive Power and MTF not Sufficient [74.2843502619298]
Automotive mass production processes require measurement systems that characterize the optical quality of the windscreens in a meaningful way.
In this article we demonstrate that the main metric established in the industry - refractive power - is fundamentally not capable of capturing relevant optical properties of windscreens.
We propose a novel concept to determine the optical quality of windscreens and to use simulation to link this optical quality to the performance of AI algorithms.
arXiv Detail & Related papers (2023-05-23T20:41:04Z)
- A Feature-based Approach for the Recognition of Image Quality Degradation in Automotive Applications [0.0]
This paper presents a feature-based algorithm to detect certain effects that can degrade image quality in automotive applications.
Experiments with different data sets show that the algorithm can detect soiling adhering to camera lenses and classify different types of image degradation.
arXiv Detail & Related papers (2023-03-13T13:40:09Z)
- Extrinsic Camera Calibration with Semantic Segmentation [60.330549990863624]
We present an extrinsic camera calibration approach that automates the parameter estimation by utilizing semantic segmentation information.
Our approach relies on a coarse initial measurement of the camera pose and builds on lidar sensors mounted on a vehicle.
We evaluate our method on simulated and real-world data to demonstrate low error measurements in the calibration results.
arXiv Detail & Related papers (2022-08-08T07:25:03Z)
- A Quality Index Metric and Method for Online Self-Assessment of Autonomous Vehicles Sensory Perception [164.93739293097605]
We propose a novel evaluation metric, named as the detection quality index (DQI), which assesses the performance of camera-based object detection algorithms.
We have developed a superpixel-based attention network (SPA-NET) that utilizes raw image pixels and superpixels as input to predict the proposed DQI evaluation metric.
arXiv Detail & Related papers (2022-03-04T22:16:50Z)
- TransCamP: Graph Transformer for 6-DoF Camera Pose Estimation [77.09542018140823]
We propose a neural network approach with a graph transformer backbone, namely TransCamP, to address the camera relocalization problem.
TransCamP effectively fuses the image features, camera pose information and inter-frame relative camera motions into encoded graph attributes.
arXiv Detail & Related papers (2021-05-28T19:08:43Z)
- DSEC: A Stereo Event Camera Dataset for Driving Scenarios [55.79329250951028]
This work presents the first high-resolution, large-scale stereo dataset with event cameras.
The dataset contains 53 sequences collected by driving in a variety of illumination conditions.
It provides ground truth disparity for the development and evaluation of event-based stereo algorithms.
arXiv Detail & Related papers (2021-03-10T12:10:33Z)
- Characterisation of CMOS Image Sensor Performance in Low Light Automotive Applications [1.933681537640272]
This paper outlines a systematic method for characterising the performance of state-of-the-art image sensors in response to noise.
The experiment outlined in this paper demonstrates how this method can be used to characterise the performance of CMOS image sensors in response to electrical noise on the power supply lines.
arXiv Detail & Related papers (2020-11-24T22:49:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.