TUMTraf Event: Calibration and Fusion Resulting in a Dataset for
Roadside Event-Based and RGB Cameras
- URL: http://arxiv.org/abs/2401.08474v2
- Date: Sat, 9 Mar 2024 19:29:12 GMT
- Title: TUMTraf Event: Calibration and Fusion Resulting in a Dataset for
Roadside Event-Based and RGB Cameras
- Authors: Christian Creß, Walter Zimmer, Nils Purschke, Bach Ngoc Doan, Sven
Kirchner, Venkatnarayanan Lakshminarasimhan, Leah Strand, Alois C. Knoll
- Abstract summary: Event-based cameras are well suited to Intelligent Transportation Systems (ITS).
They provide very high temporal resolution and dynamic range, which can eliminate motion blur and improve detection performance at night.
However, event-based images lack color and texture compared to images from a conventional RGB camera.
- Score: 14.57694345706197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event-based cameras are well suited to Intelligent Transportation
Systems (ITS). They provide very high temporal resolution and dynamic range, which can
eliminate motion blur and improve detection performance at night. However,
event-based images lack color and texture compared to images from a
conventional RGB camera. Considering that, data fusion between event-based and
conventional cameras can combine the strengths of both modalities. For this
purpose, extrinsic calibration is necessary. To the best of our knowledge, no
existing targetless calibration approach between event-based and RGB cameras
can handle multiple moving objects, and no data fusion optimized for the
roadside ITS domain exists. Furthermore, no synchronized event-based and RGB
camera dataset recorded from a roadside perspective has been published yet. To fill these research
gaps, based on our previous work, we extended our targetless calibration
approach with clustering methods to handle multiple moving objects.
Furthermore, we developed an early fusion, a simple late fusion, and a novel
spatiotemporal late fusion method. Lastly, we published the TUMTraf Event
Dataset, which contains more than 4,111 synchronized event-based and RGB images
with 50,496 labeled 2D boxes. During our extensive experiments, we verified the
effectiveness of our calibration method with multiple moving objects.
Furthermore, compared to a single RGB camera, our presented event-based sensor
fusion methods increased detection performance by up to +9% mAP during the day
and by up to +13% mAP during the challenging night. The
TUMTraf Event Dataset is available at
https://innovation-mobility.com/tumtraf-dataset.
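The abstract names three fusion strategies (early fusion, simple late fusion, and spatiotemporal late fusion) without spelling them out. As a rough, non-authoritative illustration of what a simple late fusion of 2D detections can look like, the Python sketch below merges per-class boxes from an RGB detector and an event-based detector; the Box layout, IoU threshold, and averaging merge rule are assumptions made for this sketch, not the paper's actual method.

```python
# Minimal late-fusion sketch for 2D detections from two modalities.
# Illustrative only: the Box layout, IoU threshold, and merge rule are
# assumptions, not the TUMTraf Event paper's actual fusion method.
from dataclasses import dataclass


@dataclass
class Box:
    x1: float
    y1: float
    x2: float
    y2: float
    score: float
    label: str


def iou(a: Box, b: Box) -> float:
    """Intersection over union of two axis-aligned 2D boxes."""
    ix1, iy1 = max(a.x1, b.x1), max(a.y1, b.y1)
    ix2, iy2 = min(a.x2, b.x2), min(a.y2, b.y2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a.x2 - a.x1) * (a.y2 - a.y1)
             + (b.x2 - b.x1) * (b.y2 - b.y1) - inter)
    return inter / union if union > 0 else 0.0


def late_fusion(rgb: list[Box], ev: list[Box], thr: float = 0.5) -> list[Box]:
    """Fuse same-class boxes that overlap above `thr` (coordinates
    averaged, max score kept); pass all unmatched boxes through, so
    objects seen by only one modality are not lost."""
    fused, used = [], set()
    for r in rgb:
        best, best_iou = None, thr
        for j, e in enumerate(ev):
            if j not in used and e.label == r.label and iou(r, e) > best_iou:
                best, best_iou = j, iou(r, e)
        if best is not None:
            e = ev[best]
            used.add(best)
            fused.append(Box((r.x1 + e.x1) / 2, (r.y1 + e.y1) / 2,
                             (r.x2 + e.x2) / 2, (r.y2 + e.y2) / 2,
                             max(r.score, e.score), r.label))
        else:
            fused.append(r)
    fused.extend(e for j, e in enumerate(ev) if j not in used)
    return fused
```

One property worth noting: unmatched boxes from either modality are kept, so an object that only the event camera sees (e.g., at night, where the paper reports gains of up to +13% mAP) still survives the fusion.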
Related papers
- TENet: Targetness Entanglement Incorporating with Multi-Scale Pooling and Mutually-Guided Fusion for RGB-E Object Tracking [30.89375068036783]
Existing approaches perform event feature extraction for RGB-E tracking using traditional appearance models.
We propose an Event backbone (Pooler) to obtain a high-quality feature representation that is cognisant of the intrinsic characteristics of the event data.
Our method significantly outperforms state-of-the-art trackers on two widely used RGB-E tracking datasets.
arXiv Detail & Related papers (2024-05-08T12:19:08Z)
- Complementing Event Streams and RGB Frames for Hand Mesh Reconstruction [51.87279764576998]
We propose EvRGBHand -- the first approach for 3D hand mesh reconstruction with an event camera and an RGB camera that compensate for each other.
EvRGBHand can tackle overexposure and motion blur issues in RGB-based HMR and foreground scarcity and background overflow issues in event-based HMR.
arXiv Detail & Related papers (2024-03-12T06:04:50Z)
- CRSOT: Cross-Resolution Object Tracking using Unaligned Frame and Event Cameras [43.699819213559515]
Existing datasets for RGB-DVS tracking are collected with the DVS346 camera, whose resolution ($346 \times 260$) is too low for practical applications.
We build CRSOT, the first unaligned frame-event dataset, collected with a specially built data acquisition system.
We propose a novel unaligned object tracking framework that achieves robust tracking even with loosely aligned RGB-Event data.
arXiv Detail & Related papers (2024-01-05T14:20:22Z)
- EvPlug: Learn a Plug-and-Play Module for Event and Image Fusion [55.367269556557645]
EvPlug learns a plug-and-play event and image fusion module under the supervision of an existing RGB-based model.
We demonstrate the superiority of EvPlug in several vision tasks such as object detection, semantic segmentation, and 3D hand pose estimation.
arXiv Detail & Related papers (2023-12-28T10:05:13Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- Chasing Day and Night: Towards Robust and Efficient All-Day Object Detection Guided by an Event Camera [8.673063170884591]
EOLO is a novel object detection framework that achieves robust and efficient all-day detection by fusing both RGB and event modalities.
Our EOLO framework is built on a lightweight spiking neural network (SNN) to efficiently leverage the asynchronous property of events.
arXiv Detail & Related papers (2023-09-17T15:14:01Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- RGB-Event Fusion for Moving Object Detection in Autonomous Driving [3.5397758597664306]
Moving Object Detection (MOD) is a critical vision task for achieving safe autonomous driving.
Recent advances in sensor technologies, especially the Event camera, can naturally complement the conventional camera approach to better model moving objects.
We propose RENet, a novel RGB-Event fusion Network, that jointly exploits the two complementary modalities to achieve more robust MOD.
arXiv Detail & Related papers (2022-09-17T12:59:08Z)
- E$^2$(GO)MOTION: Motion Augmented Event Stream for Egocentric Action Recognition [21.199869051111367]
Event cameras capture pixel-level intensity changes in the form of "events".
N-EPIC-Kitchens is the first event-based camera extension of the large-scale EPIC-Kitchens dataset.
We show that event data provides a comparable performance to RGB and optical flow, yet without any additional flow computation at deploy time.
arXiv Detail & Related papers (2021-12-07T09:43:08Z)
- MEFNet: Multi-scale Event Fusion Network for Motion Deblurring [62.60878284671317]
Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a kind of bio-inspired camera, the event camera records the intensity changes in an asynchronous way with high temporal resolution.
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
arXiv Detail & Related papers (2021-11-30T23:18:35Z)
- VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows [93.54888104118822]
We propose a large-scale Visible-Event benchmark (termed VisEvent) due to the lack of a realistic and scaled dataset for this task.
Our dataset consists of 820 video pairs captured under low illumination, high speed, and background clutter scenarios.
Based on VisEvent, we transform the event flows into event images (see the sketch after this list) and construct more than 30 baseline methods.
arXiv Detail & Related papers (2021-08-11T03:55:12Z)
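Several entries above hinge on converting an asynchronous event stream into a dense image that frame-based models can consume (VisEvent explicitly transforms the event flows into event images). A common minimal recipe is to accumulate signed event polarities per pixel over a time window; the sketch below is a generic illustration under an assumed (t, x, y, polarity) array layout, not the exact procedure of any listed paper.

```python
# Generic sketch: accumulate an event stream into a single event image.
# The (t, x, y, polarity) column layout and the windowing are
# assumptions for illustration, not any listed paper's exact recipe.
import numpy as np


def events_to_image(events: np.ndarray, height: int, width: int,
                    t_start: float, t_end: float) -> np.ndarray:
    """events: (N, 4) array with columns (t, x, y, polarity in {-1, +1}).
    Returns an (height, width) image of signed event counts in
    [t_start, t_end)."""
    img = np.zeros((height, width), dtype=np.float32)
    mask = (events[:, 0] >= t_start) & (events[:, 0] < t_end)
    _, x, y, p = events[mask].T
    # np.add.at handles repeated pixel indices correctly, unlike plain
    # fancy-index assignment, which would drop duplicate events.
    np.add.at(img, (y.astype(int), x.astype(int)), p)
    return img
```

At DVS346 resolution, for example, events_to_image(ev, 260, 346, t0, t0 + 0.01) yields one frame per 10 ms window, matching the $346 \times 260$ sensor mentioned in the CRSOT entry.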
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.