BiasBench: A reproducible benchmark for tuning the biases of event cameras
- URL: http://arxiv.org/abs/2504.18235v1
- Date: Fri, 25 Apr 2025 10:33:24 GMT
- Title: BiasBench: A reproducible benchmark for tuning the biases of event cameras
- Authors: Andreas Ziegler, David Joseph, Thomas Gossard, Emil Moldovan, Andreas Zell
- Abstract summary: Event-based cameras are bio-inspired sensors that detect light changes asynchronously for each pixel. They are increasingly used in fields like computer vision and robotics because of several advantages over traditional frame-based cameras. As with any camera, the output's quality depends on how well the camera's settings, called biases for event-based cameras, are configured.
- Score: 10.401271236186794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event-based cameras are bio-inspired sensors that detect light changes asynchronously for each pixel. They are increasingly used in fields like computer vision and robotics because of several advantages over traditional frame-based cameras, such as high temporal resolution, low latency, and high dynamic range. As with any camera, the output's quality depends on how well the camera's settings, called biases for event-based cameras, are configured. While frame-based cameras have advanced automatic configuration algorithms, very few such tools exist for tuning these biases. A systematic testing framework would require observing the same scene with different biases, which is difficult because event cameras only generate events when there is motion. Event simulators exist, but since biases heavily depend on the electrical circuit and the pixel design, available simulators are not well suited for bias tuning. To allow reproducibility, we present BiasBench, a novel event dataset containing multiple scenes with settings sampled in a grid-like pattern. We present three different scenes, each with a quality metric for its downstream application. Additionally, we present a novel, RL-based method to facilitate online bias adjustments.
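To make the grid-sampling design concrete, here is a minimal sketch of how bias settings sampled on a grid could be scored against a recorded scene. The `Scene` class, the specific bias names and offset values, and the quality function are illustrative assumptions (the bias names follow Prophesee's naming convention), not BiasBench's actual interface.

```python
"""Minimal sketch: scoring grid-sampled event-camera biases.
Assumption: each benchmark scene maps a bias setting to a quality
score for its downstream task; Scene is a synthetic stand-in, not
BiasBench's actual data format."""
import itertools
import random

# Prophesee-style bias names; the offset values are illustrative.
BIAS_GRID = {
    "bias_diff_on":  [-20, 0, 20],   # ON-event contrast threshold
    "bias_diff_off": [-20, 0, 20],   # OFF-event contrast threshold
    "bias_refr":     [-20, 0, 20],   # refractory period
}

class Scene:
    """Stand-in for one pre-recorded scene: returns the downstream
    quality metric for the events captured with the given biases."""
    def quality(self, biases: dict) -> float:
        random.seed(str(sorted(biases.items())))  # deterministic placeholder
        return random.random()

def grid_search(scene: Scene, grid: dict) -> tuple:
    """Exhaustively score every bias combination in the grid."""
    best, best_score = None, float("-inf")
    for values in itertools.product(*grid.values()):
        biases = dict(zip(grid.keys(), values))
        score = scene.quality(biases)
        if score > best_score:
            best, best_score = biases, score
    return best, best_score

if __name__ == "__main__":
    setting, score = grid_search(Scene(), BIAS_GRID)
    print(f"best setting: {setting} (score {score:.3f})")
```

Because the dataset pre-records the scene at each grid point, such a sweep is a lookup over stored event streams rather than a re-capture; the paper's RL-based method then goes beyond exhaustive search to adjust biases online.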
Related papers
- Deep Event Visual Odometry [40.57142632274148]
Event cameras offer the exciting possibility of tracking the camera's pose during high-speed motion.
Existing event-based monocular visual odometry approaches demonstrate limited performance on recent benchmarks.
We present Deep Event VO (DEVO), the first monocular event-only system with strong performance on a large number of real-world benchmarks.
arXiv Detail & Related papers (2023-12-15T14:00:00Z)
- E-Calib: A Fast, Robust and Accurate Calibration Toolbox for Event Cameras [18.54225086007182]
We present E-Calib, a novel, fast, robust, and accurate calibration toolbox for event cameras.
The proposed method is tested in a variety of rigorous experiments for different event camera models.
arXiv Detail & Related papers (2023-06-15T12:16:38Z)
- Temporal and Contextual Transformer for Multi-Camera Editing of TV Shows [83.54243912535667]
We first collect a novel benchmark for this setting with four diverse scenarios: concerts, sports games, gala shows, and contests.
It contains 88 hours of raw video that contribute to 14 hours of edited video.
We propose a new approach, a temporal and contextual transformer, that uses cues from historical shots and other views to make shot-transition decisions.
arXiv Detail & Related papers (2022-10-17T04:11:23Z)
- PL-EVIO: Robust Monocular Event-based Visual Inertial Odometry with Point and Line Features [3.6355269783970394]
Event cameras are motion-activated sensors that capture pixel-level illumination changes instead of the intensity image with a fixed frame rate.
We propose a robust, highly accurate, and real-time optimization-based monocular event-based visual-inertial odometry (VIO) method.
arXiv Detail & Related papers (2022-09-25T06:14:12Z)
- Automatic Camera Control and Directing with an Ultra-High-Definition Collaborative Recording System [0.5735035463793007]
Capturing an event from multiple camera angles can give a viewer the most complete and interesting picture of that event.
The introduction of omnidirectional or wide-angle cameras has allowed for events to be captured more completely.
A system is presented that, given multiple ultra-high resolution video streams of an event, can generate a visually pleasing sequence of shots.
arXiv Detail & Related papers (2022-08-10T08:28:08Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- How to Calibrate Your Event Camera [58.80418612800161]
We propose a generic event camera calibration framework using image reconstruction.
We show that neural-network-based image reconstruction is well suited for the task of intrinsic and extrinsic calibration of event cameras.
arXiv Detail & Related papers (2021-05-26T07:06:58Z)
- Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction [51.072733683919246]
We introduce Recurrent Asynchronous Multimodal (RAM) networks to handle asynchronous and irregular data from multiple sensors.
Inspired by traditional RNNs, RAM networks maintain a hidden state that is updated asynchronously and can be queried at any time to generate a prediction.
We show an improvement over state-of-the-art methods of up to 30% in terms of mean absolute depth error.
arXiv Detail & Related papers (2021-02-18T13:24:35Z)
- EventHands: Real-Time Neural 3D Hand Reconstruction from an Event Stream [80.15360180192175]
3D hand pose estimation from monocular videos is a long-standing and challenging problem.
We address it for the first time using a single event camera, i.e., an asynchronous vision sensor reacting to brightness changes.
Our approach has characteristics not previously demonstrated with a single RGB or depth camera.
arXiv Detail & Related papers (2020-12-11T16:45:34Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras produce brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Learning-based approaches have recently been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Event-based Stereo Visual Odometry [42.77238738150496]
We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
We seek to maximize the temporal consistency of stereo event-based data while using a simple and efficient representation.
arXiv Detail & Related papers (2020-07-30T15:53:28Z)