Characterisation of CMOS Image Sensor Performance in Low Light
Automotive Applications
- URL: http://arxiv.org/abs/2011.12436v1
- Date: Tue, 24 Nov 2020 22:49:54 GMT
- Title: Characterisation of CMOS Image Sensor Performance in Low Light
Automotive Applications
- Authors: Shane Gilroy, John O'Dwyer and Lucas Bortoleto
- Abstract summary: This paper outlines a systematic method for characterisation of state of the art image sensor performance in response to noise.
The experiment outlined in this paper demonstrates how this method can be used to characterise the performance of CMOS image sensors in response to electrical noise on the power supply lines.
- Score: 1.933681537640272
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The applications of automotive cameras in Advanced Driver-Assistance Systems
(ADAS) are growing rapidly as automotive manufacturers strive to provide 360
degree protection for their customers. Vision systems must capture high quality
images in both daytime and night-time scenarios in order to produce the large
informational content required for software analysis in applications such as
lane departure, pedestrian detection and collision detection. The challenge in
producing high quality images in low light scenarios is that the signal to
noise ratio is greatly reduced. This can result in noise becoming the dominant
factor in a captured image thereby making these safety systems less effective
at night. This paper outlines a systematic method for characterisation of state
of the art image sensor performance in response to noise, so as to improve the
design and performance of automotive cameras in low light scenarios. The
experiment outlined in this paper demonstrates how this method can be used to
characterise the performance of CMOS image sensors in response to electrical
noise on the power supply lines.
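To make the low-light SNR concern concrete, the sketch below shows one simple way such a sensor characterisation could be scripted: compute the temporal signal-to-noise ratio from a stack of flat-field frames and compare a baseline capture against one taken while noise is injected on the supply rail. This is an illustrative assumption rather than the paper's measurement procedure; the frame stacks, their sizes and the noise levels are synthetic stand-ins.

```python
# Illustrative sketch only: temporal SNR of a CMOS image sensor from flat-field
# frames, comparing a baseline capture with one affected by power-supply noise.
# The frame stacks below are synthetic stand-ins, not measured data.
import numpy as np

def temporal_snr(frames: np.ndarray) -> float:
    """Mean signal over temporal noise for a frame stack of shape (N, H, W)."""
    mean_per_pixel = frames.mean(axis=0)        # average signal per pixel
    std_per_pixel = frames.std(axis=0, ddof=1)  # temporal noise per pixel
    valid = std_per_pixel > 0                   # ignore pixels with zero variance
    return float((mean_per_pixel[valid] / std_per_pixel[valid]).mean())

def snr_db(snr: float) -> float:
    """Convert a linear SNR ratio to decibels."""
    return 20.0 * np.log10(snr)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical stacks of 64 flat-field frames, 120 x 160 pixels each.
    baseline = rng.poisson(lam=50, size=(64, 120, 160)).astype(float)
    noisy_supply = baseline + rng.normal(0.0, 4.0, size=baseline.shape)
    print(f"baseline SNR:      {snr_db(temporal_snr(baseline)):5.1f} dB")
    print(f"with supply noise: {snr_db(temporal_snr(noisy_supply)):5.1f} dB")
```

In a real characterisation the two stacks would come from identical scene and exposure settings, differing only in the disturbance applied to the supply line, so that any drop in SNR can be attributed to that noise source.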
Related papers
- NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- Radar Enlighten the Dark: Enhancing Low-Visibility Perception for Automated Vehicles with Camera-Radar Fusion [8.946655323517094]
We propose a novel transformer-based 3D object detection model "REDFormer" to tackle low visibility conditions.
Our model outperforms state-of-the-art (SOTA) models on classification and detection accuracy.
arXiv Detail & Related papers (2023-05-27T00:47:39Z)
- Windscreen Optical Quality for AI Algorithms: Refractive Power and MTF not Sufficient [74.2843502619298]
Automotive mass production processes require measurement systems that characterize the optical quality of the windscreens in a meaningful way.
In this article we demonstrate that the main metric established in the industry - refractive power - is fundamentally not capable of capturing relevant optical properties of windscreens.
We propose a novel concept to determine the optical quality of windscreens and to use simulation to link this optical quality to the performance of AI algorithms.
arXiv Detail & Related papers (2023-05-23T20:41:04Z)
- A Feature-based Approach for the Recognition of Image Quality Degradation in Automotive Applications [0.0]
This paper presents a feature-based algorithm to detect certain effects that can degrade image quality in automotive applications.
Experiments with different data sets show that the algorithm can detect soiling adhering to camera lenses and classify different types of image degradation.
arXiv Detail & Related papers (2023-03-13T13:40:09Z)
- Camera-Radar Perception for Autonomous Vehicles and ADAS: Concepts, Datasets and Metrics [77.34726150561087]
This work aims to carry out a study on the current scenario of camera and radar-based perception for ADAS and autonomous vehicles.
Concepts and characteristics related to both sensors, as well as to their fusion, are presented.
We give an overview of the Deep Learning-based detection and segmentation tasks, and the main datasets, metrics, challenges, and open questions in vehicle perception.
arXiv Detail & Related papers (2023-03-08T00:48:32Z)
- Using simulation to quantify the performance of automotive perception systems [2.2320512724449233]
We describe the image system simulation software tools that we use to evaluate the performance of image systems for object (automobile) detection.
We quantified system performance by measuring average precision, and we report a trend relating system resolution to object detection performance.
arXiv Detail & Related papers (2023-03-02T05:28:35Z)
- Are High-Resolution Event Cameras Really Needed? [62.70541164894224]
In low-illumination conditions and at high speeds, low-resolution cameras can outperform high-resolution ones, while requiring a significantly lower bandwidth.
We provide both empirical and theoretical evidence for this claim, which indicates that high-resolution event cameras exhibit higher per-pixel event rates.
In most cases, high-resolution event cameras show lower task performance than lower-resolution sensors under these conditions.
arXiv Detail & Related papers (2022-03-28T12:06:20Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- Burst Imaging for Light-Constrained Structure-From-Motion [4.125187280299246]
We develop an image processing technique for aiding 3D reconstruction from images acquired in low light conditions.
Our technique, based on burst photography, uses direct methods for image registration within bursts of short exposure time images.
Our method is a significant step towards allowing robots to operate in low light conditions, with potential applications to robots operating in environments such as underground mines and night time operation.
arXiv Detail & Related papers (2021-08-23T02:12:40Z)
- Impact of Power Supply Noise on Image Sensor Performance in Automotive Applications [2.28438857884398]
Vision Systems are quickly becoming a large component of Active Automotive Safety Systems.
The challenge in capturing high quality images in low light scenarios is that the signal to noise ratio is greatly reduced.
Research has been undertaken to develop a systematic method of characterising image sensor performance in response to electrical noise.
arXiv Detail & Related papers (2020-11-24T22:25:30Z)