Control and Evaluation of Event Cameras Output Sharpness via Bias
- URL: http://arxiv.org/abs/2210.13929v1
- Date: Tue, 25 Oct 2022 11:31:37 GMT
- Title: Control and Evaluation of Event Cameras Output Sharpness via Bias
- Authors: Mehdi Sefidgar Dilmaghani, Waseem Shariff, Cian Ryan, Joe Lemley,
Peter Corcoran
- Abstract summary: Event cameras, also known as neuromorphic sensors, are a relatively new technology with several advantages over RGB cameras.
Five different bias settings are explained, and the effect of changing each on the event output is surveyed and analyzed.
- Score: 1.854931308524932
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Event cameras, also known as neuromorphic sensors, are a relatively new
technology with several advantages over RGB cameras. The most important is the
way they capture light changes in the environment: each pixel responds
independently of the others when it detects a change in the light reaching it.
To increase the user's degree of freedom in controlling the output of these
cameras, such as changing the sensor's sensitivity to light changes or
controlling the number of generated events, camera manufacturers usually
provide tools for making sensor-level changes to the camera settings. The
contribution of this research is to examine and document the effects of
changing the sensor settings on sharpness as an indicator of the quality of the
generated stream of event data. To obtain a qualitative understanding, this
stream of events is converted to frames, and the average image gradient
magnitude, an index of the number of edges and hence of sharpness, is
calculated for these frames. Five different bias settings are explained, and
the effect of changing each on the event output is surveyed and analyzed. In
addition, the operation of the event camera sensing array is explained with an
analogue circuit model, and the functions of the bias settings are linked to
this model.
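The sharpness measurement described in the abstract is straightforward to prototype. Below is a minimal sketch, not the authors' released code: it assumes events arrive as (x, y, t, polarity) tuples, accumulates them into count frames, and scores each frame by its average gradient magnitude, i.e. the mean of sqrt(Gx^2 + Gy^2) over all pixels.

```python
# Minimal sketch of the sharpness metric (assumed event layout; not the
# authors' released implementation).
import numpy as np

def events_to_frame(events, height, width):
    """Accumulate a batch of events into a 2D count image (one frame)."""
    frame = np.zeros((height, width), dtype=np.float32)
    for x, y, _t, _polarity in events:
        frame[y, x] += 1.0  # count events per pixel, ignoring polarity
    return frame

def average_gradient_magnitude(frame):
    """Mean gradient magnitude over the frame; more and stronger edges
    yield a higher value, used here as a sharpness index."""
    gy, gx = np.gradient(frame.astype(np.float64))  # row- and column-wise gradients
    return float(np.mean(np.hypot(gx, gy)))
```

To compare bias settings as the paper does, one would bin the event stream into fixed time windows, build one frame per window, and average this metric over all frames recorded under each setting.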
Related papers
- MSSIDD: A Benchmark for Multi-Sensor Denoising [55.41612200877861]
We introduce a new benchmark, the Multi-Sensor SIDD dataset, which is the first raw-domain dataset designed to evaluate the sensor transferability of denoising models.
We propose a sensor consistency training framework that enables denoising models to learn sensor-invariant features.
arXiv Detail & Related papers (2024-11-18T13:32:59Z) - Gradient events: improved acquisition of visual information in event cameras [0.0]
We propose a new type of event, the gradient event, which benefits from the same properties as a conventional brightness event.
We show that gradient event-based video reconstruction outperforms existing state-of-the-art brightness event-based methods by a significant margin.
arXiv Detail & Related papers (2024-09-03T10:18:35Z) - Unexplored Faces of Robustness and Out-of-Distribution: Covariate Shifts in Environment and Sensor Domains [2.4572304328659595]
We introduce a new distribution shift dataset, ImageNet-ES.
We evaluate out-of-distribution (OOD) detection and model robustness.
Our results suggest that effective shift mitigation via camera sensor control can significantly improve performance without increasing model size.
arXiv Detail & Related papers (2024-04-24T13:59:19Z) - Make Explicit Calibration Implicit: Calibrate Denoiser Instead of the
Noise Model [83.9497193551511]
We introduce Lighting Every Darkness (LED), which is effective regardless of the digital gain or the camera sensor.
LED eliminates the need for explicit noise model calibration, instead utilizing an implicit fine-tuning process that allows quick deployment and requires minimal data.
LED also allows researchers to focus more on deep learning advancements while still utilizing sensor engineering benefits.
arXiv Detail & Related papers (2023-08-07T10:09:11Z) - E-Calib: A Fast, Robust and Accurate Calibration Toolbox for Event Cameras [18.54225086007182]
We present E-Calib, a novel, fast, robust, and accurate calibration toolbox for event cameras.
The proposed method is tested in a variety of rigorous experiments for different event camera models.
arXiv Detail & Related papers (2023-06-15T12:16:38Z) - Learning Transformations To Reduce the Geometric Shift in Object
Detection [60.20931827772482]
We tackle geometric shifts emerging from variations in the image capture process.
We introduce a self-training approach that learns a set of geometric transformations to minimize these shifts.
We evaluate our method on two different shifts, i.e., a camera's field of view (FoV) change and a viewpoint change.
arXiv Detail & Related papers (2023-01-13T11:55:30Z) - Lasers to Events: Automatic Extrinsic Calibration of Lidars and Event
Cameras [67.84498757689776]
This paper presents the first direct calibration method between event cameras and lidars.
It removes dependencies on frame-based camera intermediaries and/or highly-accurate hand measurements.
arXiv Detail & Related papers (2022-07-03T11:05:45Z) - Learning Enriched Illuminants for Cross and Single Sensor Color
Constancy [182.4997117953705]
We propose cross-sensor self-supervised training to train the network.
We train the network by randomly sampling the artificial illuminants in a sensor-independent manner.
Experiments show that our cross-sensor model and single-sensor model outperform other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-03-21T15:45:35Z) - Moving Object Detection for Event-based vision using Graph Spectral
Clustering [6.354824287948164]
Moving object detection has been a central topic of discussion in computer vision for its wide range of applications.
We present an unsupervised graph spectral clustering technique for moving object detection in event-based data.
We additionally show how the optimum number of moving objects can be automatically determined.
arXiv Detail & Related papers (2021-09-30T10:19:22Z)
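As a rough illustration of the idea in the entry above (an assumed rendering of the general approach, not the paper's implementation), events can be treated as points in (x, y, t) space and grouped with off-the-shelf spectral clustering, one cluster per moving object:

```python
# Hedged sketch: spectral clustering of events in (x, y, t) space.
# The time_scale factor is an assumption that makes temporal distances
# comparable to spatial ones.
import numpy as np
from sklearn.cluster import SpectralClustering

def cluster_events_spectral(events_xyt, n_objects, time_scale=1e-3):
    pts = np.asarray(events_xyt, dtype=np.float64).copy()
    pts[:, 2] *= time_scale  # rescale timestamps
    model = SpectralClustering(n_clusters=n_objects,
                               affinity="nearest_neighbors",
                               n_neighbors=10)
    return model.fit_predict(pts)  # one object label per event
```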
arXiv Detail & Related papers (2021-09-30T10:19:22Z) - Moving Object Detection for Event-based Vision using k-means Clustering [0.0]
Moving object detection is a crucial task in computer vision.
Event-based cameras are bio-inspired cameras that work by mimicking the working of the human eye.
In this paper, we investigate the application of the k-means clustering technique to detecting moving objects in event-based data.
arXiv Detail & Related papers (2021-09-04T14:43:14Z)
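The entry above admits a similarly compact sketch (again an assumed illustration, not the authors' code), swapping the spectral step for k-means on the same (x, y, t) representation:

```python
# Hedged sketch: k-means on event coordinates to separate moving objects.
import numpy as np
from sklearn.cluster import KMeans

def cluster_events_kmeans(events_xyt, k, time_scale=1e-3):
    pts = np.asarray(events_xyt, dtype=np.float64).copy()
    pts[:, 2] *= time_scale  # rescale timestamps as in the spectral sketch
    return KMeans(n_clusters=k, n_init=10).fit_predict(pts)
```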
arXiv Detail & Related papers (2021-09-04T14:43:14Z) - Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline in identifying whether a recalibration of the camera's intrinsic parameters is required.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.