VECtor: A Versatile Event-Centric Benchmark for Multi-Sensor SLAM
- URL: http://arxiv.org/abs/2207.01404v1
- Date: Mon, 4 Jul 2022 13:37:26 GMT
- Title: VECtor: A Versatile Event-Centric Benchmark for Multi-Sensor SLAM
- Authors: Ling Gao and Yuxuan Liang and Jiaqi Yang and Shaoxun Wu and Chenyu
Wang and Jiaben Chen and Laurent Kneip
- Abstract summary: Event cameras hold strong potential to complement regular cameras in situations of high dynamics or challenging illumination.
Our contribution is the first complete set of benchmark datasets captured with a multi-sensor setup.
Individual sequences include both small- and large-scale environments, and cover the specific challenges targeted by dynamic vision sensors.
- Score: 31.779462222706346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras have recently gained in popularity as they hold strong
potential to complement regular cameras in situations of high dynamics or
challenging illumination. An important problem that may benefit from the
addition of an event camera is Simultaneous Localization and Mapping (SLAM).
However, to ensure progress on event-inclusive multi-sensor SLAM, novel
benchmark sequences are needed. Our contribution is the first
complete set of benchmark datasets captured with a multi-sensor setup
containing an event-based stereo camera, a regular stereo camera, multiple
depth sensors, and an inertial measurement unit. The setup is fully
hardware-synchronized and underwent accurate extrinsic calibration. All
sequences come with ground truth data captured by highly accurate external
reference devices such as a motion capture system. Individual sequences include
both small- and large-scale environments, and cover the specific challenges
targeted by dynamic vision sensors.
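Since every sequence comes with externally measured ground truth, the typical use of such a benchmark is to compare an estimated trajectory against it. Below is a minimal sketch, not the benchmark's own evaluation tooling, of computing absolute trajectory error (ATE RMSE) after a rigid SE(3) alignment; the function name and the assumption of time-associated (N, 3) position arrays are purely illustrative.

    import numpy as np

    def ate_rmse(est_xyz, gt_xyz):
        """Absolute trajectory error (RMSE) after rigid SE(3) alignment.
        est_xyz, gt_xyz: (N, 3) arrays of time-associated positions.
        Hypothetical helper for illustration, not part of the VECtor toolkit."""
        mu_e, mu_g = est_xyz.mean(axis=0), gt_xyz.mean(axis=0)
        # Umeyama-style alignment: rotation + translation, no scale.
        H = (est_xyz - mu_e).T @ (gt_xyz - mu_g)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        t = mu_g - R @ mu_e
        err = est_xyz @ R.T + t - gt_xyz
        return float(np.sqrt((err ** 2).sum(axis=1).mean()))

The rigid alignment step is what makes numbers comparable across methods, since each estimator reports poses in its own arbitrary reference frame.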
Related papers
- DATAP-SfM: Dynamic-Aware Tracking Any Point for Robust Structure from Motion in the Wild [85.03973683867797]
This paper proposes a concise, elegant, and robust pipeline to estimate smooth camera trajectories and obtain dense point clouds for casual videos in the wild.
We show that the proposed method achieves state-of-the-art performance in terms of camera pose estimation even in complex dynamic challenge scenes.
arXiv Detail & Related papers (2024-11-20T13:01:16Z)
- EVIT: Event-based Visual-Inertial Tracking in Semi-Dense Maps Using Windowed Nonlinear Optimization [19.915476815328294]
Event cameras are interesting visual exteroceptive sensors that react to brightness changes rather than integrating absolute image intensities.
This paper proposes the addition of inertial signals in order to robustify the estimation.
Our evaluation focuses on a diverse set of real world sequences and comprises a comparison of our proposed method against a purely event-based alternative running at different rates.
arXiv Detail & Related papers (2024-08-02T16:24:55Z)
- MTMMC: A Large-Scale Real-World Multi-Modal Camera Tracking Benchmark [63.878793340338035]
Multi-target multi-camera tracking is a crucial task that involves identifying and tracking individuals over time using video streams from multiple cameras.
Existing datasets for this task are either synthetically generated or artificially constructed within a controlled camera network setting.
We present MTMMC, a real-world, large-scale dataset that includes long video sequences captured by 16 multi-modal cameras in two different environments.
arXiv Detail & Related papers (2024-03-29T15:08:37Z)
- Temporal-Mapping Photography for Event Cameras [5.344756442054121]
Event cameras, or Dynamic Vision Sensors (DVS), capture brightness changes as a continuous stream of "events".
Converting sparse events to dense intensity frames faithfully has long been an ill-posed problem.
In this paper, for the first time, we realize event-to-intensity-frame conversion using a stationary event camera in static scenes (a naive accumulation baseline is sketched after this list).
arXiv Detail & Related papers (2024-03-11T05:29:46Z)
- SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker that uses spiking neural networks to extract sparse spatio-temporal features from the event streams associated with objects.
arXiv Detail & Related papers (2023-09-29T05:13:43Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors which measure per pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- Asynchronous Multi-View SLAM [78.49842639404413]
Existing multi-camera SLAM systems assume synchronized shutters for all cameras, which is often not the case in practice.
Our framework integrates a continuous-time motion model to relate information across asynchronous multi-frames during tracking, local mapping, and loop closing.
arXiv Detail & Related papers (2021-01-17T00:50:01Z)
- Event-based Stereo Visual Odometry [42.77238738150496]
We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
We seek to maximize the spatio-temporal consistency of stereo event-based data while using a simple and efficient representation.
arXiv Detail & Related papers (2020-07-30T15:53:28Z)
- A Multi-spectral Dataset for Evaluating Motion Estimation Systems [7.953825491774407]
This paper presents a novel dataset for evaluating the performance of multi-spectral motion estimation systems.
All the sequences are recorded from a handheld multi-spectral device.
The depth images are captured by a Microsoft Kinect2 and can benefit learning-based cross-modality stereo matching.
arXiv Detail & Related papers (2020-07-01T17:11:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
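As a footnote to the Temporal-Mapping entry above: the simplest event-to-intensity baseline integrates signed polarity changes per pixel and exponentiates the accumulated log-intensity. The sketch below shows only this naive baseline, under an assumed (t, x, y, p) event layout and a guessed contrast threshold; it is not the method of any paper listed here.

    import numpy as np

    def accumulate_events(events, height, width, contrast=0.2):
        """Naively integrate events into a relative intensity image.
        events: iterable of (t, x, y, p) with polarity p in {-1, +1}.
        contrast: assumed per-event log-intensity step (sensor dependent).
        Illustrative baseline only, not a published reconstruction method."""
        log_img = np.zeros((height, width), dtype=np.float64)
        for _, x, y, p in events:
            log_img[int(y), int(x)] += contrast * p
        img = np.exp(log_img)   # intensity up to an unknown per-pixel offset
        return img / img.max()  # normalize to [0, 1] for display

Because the starting intensity of each pixel is unknown, such accumulation recovers only relative brightness, which is one reason the conversion is ill-posed, as that entry notes.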