VECtor: A Versatile Event-Centric Benchmark for Multi-Sensor SLAM
- URL: http://arxiv.org/abs/2207.01404v1
- Date: Mon, 4 Jul 2022 13:37:26 GMT
- Title: VECtor: A Versatile Event-Centric Benchmark for Multi-Sensor SLAM
- Authors: Ling Gao and Yuxuan Liang and Jiaqi Yang and Shaoxun Wu and Chenyu
Wang and Jiaben Chen and Laurent Kneip
- Abstract summary: Event cameras hold strong potential to complement regular cameras in situations of high dynamics or challenging illumination.
Our contribution is the first complete set of benchmark datasets captured with a multi-sensor setup.
Individual sequences include both small and large-scale environments, and cover the specific challenges targeted by dynamic vision sensors.
- Score: 31.779462222706346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras have recently gained in popularity as they hold strong
potential to complement regular cameras in situations of high dynamics or
challenging illumination. An important problem that may benefit from the
addition of an event camera is given by Simultaneous Localization And Mapping
(SLAM). However, in order to ensure progress on event-inclusive multi-sensor
SLAM, novel benchmark sequences are needed. Our contribution is the first
complete set of benchmark datasets captured with a multi-sensor setup
containing an event-based stereo camera, a regular stereo camera, multiple
depth sensors, and an inertial measurement unit. The setup is fully
hardware-synchronized and underwent accurate extrinsic calibration. All
sequences come with ground truth data captured by highly accurate external
reference devices such as a motion capture system. Individual sequences include
both small and large-scale environments, and cover the specific challenges
targeted by dynamic vision sensors.
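The dynamic vision sensors in the setup emit asynchronous events rather than frames. As a minimal sketch of how such data is commonly represented, the following uses the standard (timestamp, x, y, polarity) event tuple; field names are illustrative and not the dataset's actual schema.

```python
from dataclasses import dataclass

# One event from a dynamic vision sensor, in the common
# (timestamp, x, y, polarity) form. Names are illustrative,
# not this benchmark's on-disk format.
@dataclass(frozen=True)
class Event:
    t: float       # timestamp in seconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 brightness increase, -1 decrease

def count_by_polarity(events):
    """Count positive and negative events in a stream."""
    pos = sum(1 for e in events if e.polarity > 0)
    return pos, len(events) - pos

stream = [Event(0.001, 10, 20, 1),
          Event(0.002, 11, 20, -1),
          Event(0.003, 10, 21, 1)]
print(count_by_polarity(stream))  # (2, 1)
```

Because every sensor in the rig is hardware-synchronized, timestamps from the event stream can be compared directly against frame and IMU timestamps.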
Related papers
- MTMMC: A Large-Scale Real-World Multi-Modal Camera Tracking Benchmark [63.878793340338035]
Multi-target multi-camera tracking is a crucial task that involves identifying and tracking individuals over time using video streams from multiple cameras.
Existing datasets for this task are either synthetically generated or artificially constructed within a controlled camera network setting.
We present MTMMC, a real-world, large-scale dataset that includes long video sequences captured by 16 multi-modal cameras in two different environments.
arXiv Detail & Related papers (2024-03-29T15:08:37Z)
- Temporal-Mapping Photography for Event Cameras [5.838762448259289]
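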
Event cameras capture brightness changes as a continuous stream of events rather than traditional intensity frames.
We realize events to dense intensity image conversion using a stationary event camera in static scenes.
arXiv Detail & Related papers (2024-03-11T05:29:46Z)
- SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatiotemporal features from event streams associated with objects.
arXiv Detail & Related papers (2023-09-29T05:13:43Z)
- Video Frame Interpolation with Stereo Event and Intensity Camera [40.07341828127157]
We propose a novel Stereo Event-based VFI network (SE-VFI-Net) to generate high-quality intermediate frames.
We exploit the fused features accomplishing accurate optical flow and disparity estimation.
Our proposed SE-VFI-Net outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2023-07-17T04:02:00Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors which measure per pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- Asynchronous Multi-View SLAM [78.49842639404413]
Existing multi-camera SLAM systems assume synchronized shutters for all cameras, which is often not the case in practice.
Our framework integrates a continuous-time motion model to relate information across asynchronous multi-frames during tracking, local mapping, and loop closing.
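A continuous-time motion model lets the tracker query a pose at any sensor timestamp, not just at frame times. As a hedged illustration of the idea (not the paper's actual spline-on-SE(3) formulation), the sketch below linearly interpolates camera position between two timestamped keyframe poses:

```python
import numpy as np

# Minimal sketch of a continuous-time motion model: given positions
# at known timestamps, query the position at an arbitrary event or
# frame time by per-axis linear interpolation. Real systems use
# splines on SE(3); this simplification is illustrative only.
def interpolate_position(times, positions, t_query):
    times = np.asarray(times, dtype=float)
    positions = np.asarray(positions, dtype=float)
    # np.interp handles one axis at a time on 1-D data
    return np.array([np.interp(t_query, times, positions[:, k])
                     for k in range(positions.shape[1])])

times = [0.0, 1.0]                          # keyframe timestamps (s)
positions = [[0.0, 0.0, 0.0],               # pose at t = 0.0
             [2.0, 0.0, 4.0]]               # pose at t = 1.0
print(interpolate_position(times, positions, 0.5))  # [1. 0. 2.]
```

Asynchronous shutters then pose no problem: each camera's measurement is related to the trajectory at its own timestamp rather than forced onto a shared frame clock.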
arXiv Detail & Related papers (2021-01-17T00:50:01Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes as a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Event-based Stereo Visual Odometry [42.77238738150496]
We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
We seek to maximize the spatiotemporal consistency of stereo event-based data while using a simple and efficient representation.
arXiv Detail & Related papers (2020-07-30T15:53:28Z)
- A Multi-spectral Dataset for Evaluating Motion Estimation Systems [7.953825491774407]
This paper presents a novel dataset for evaluating the performance of multi-spectral motion estimation systems.
All the sequences are recorded from a handheld multi-spectral device.
The depth images are captured by a Microsoft Kinect2 and can benefit learning-based cross-modality stereo matching.
arXiv Detail & Related papers (2020-07-01T17:11:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.