Event Quality Score (EQS): Assessing the Realism of Simulated Event Camera Streams via Distances in Latent Space
- URL: http://arxiv.org/abs/2504.12515v2
- Date: Mon, 21 Apr 2025 01:04:58 GMT
- Title: Event Quality Score (EQS): Assessing the Realism of Simulated Event Camera Streams via Distances in Latent Space
- Authors: Kaustav Chanda, Aayush Atul Verma, Arpitsinh Vaghela, Yezhou Yang, Bharatesh Chakravarthi
- Abstract summary: Event cameras promise a paradigm shift in vision sensing with their low latency, high dynamic range, and asynchronous event output. We introduce the event quality score (EQS), a quality metric that utilizes activations of the RVT architecture.
- Score: 20.537672896807063
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event cameras promise a paradigm shift in vision sensing with their low latency, high dynamic range, and asynchronous event output. Unfortunately, the scarcity of high-quality labeled datasets hinders their widespread adoption in deep learning-driven computer vision. To mitigate this, several simulators have been proposed to generate synthetic event data for training models for detection and estimation tasks. However, the fundamentally different sensor design of event cameras compared to traditional frame-based cameras poses a challenge for accurate simulation. As a result, most simulated data fail to mimic data captured by real event cameras. Inspired by existing work on using deep features for image comparison, we introduce the event quality score (EQS), a quality metric that utilizes activations of the RVT architecture. Through sim-to-real experiments on the DSEC driving dataset, we show that a higher EQS implies improved generalization to real-world data after training on simulated events. Thus, optimizing for EQS can lead to developing more realistic event camera simulators, effectively reducing the simulation gap. EQS is available at https://github.com/eventbasedvision/EQS.
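The released code at https://github.com/eventbasedvision/EQS is the authoritative implementation; purely as an illustration of the underlying idea (a distance between deep-feature activations of real and simulated event streams), a minimal sketch follows. The `backbone` here is a stand-in for a pretrained RVT-style encoder, and `feature_distance` is a hypothetical helper, not the paper's API.

```python
# Minimal sketch of a deep-feature realism score in the spirit of EQS.
# Assumptions (not from the paper): `backbone` stands in for a pretrained
# RVT-style encoder; the real implementation lives at
# https://github.com/eventbasedvision/EQS.
import torch
import torch.nn.functional as F

def feature_distance(backbone: torch.nn.Module,
                     real_events: torch.Tensor,
                     sim_events: torch.Tensor) -> torch.Tensor:
    """Latent-space distance between real and simulated event batches.

    Both inputs are dense event representations (e.g., voxel grids)
    shaped so that `backbone` accepts them, e.g., (B, C, H, W).
    """
    with torch.no_grad():
        f_real = backbone(real_events).flatten(1)  # (B, D) activations
        f_sim = backbone(sim_events).flatten(1)
    f_real = F.normalize(f_real, dim=1)            # compare directions,
    f_sim = F.normalize(f_sim, dim=1)              # not magnitudes
    return (f_real - f_sim).pow(2).sum(dim=1).mean()

# One way to turn the distance into a score where higher means more
# realistic (purely illustrative, not the paper's definition):
# eqs = 1.0 / (1.0 + feature_distance(backbone, real, sim).item())
```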
Related papers
- A PyTorch-Enabled Tool for Synthetic Event Camera Data Generation and Algorithm Development [0.3875852578999189]
We introduce Synthetic Events for Neural Processing and Integration (SENPI) in Python, a PyTorch-based library for simulating and processing event camera data.
SENPI includes a differentiable digital twin that converts intensity-based data into event representations, allowing for evaluation of event camera performance.
We demonstrate SENPI's ability to produce realistic event-based data by comparing synthetic outputs to real event camera data, and use these results to draw conclusions on the properties and utility of event-based perception.
arXiv Detail & Related papers (2025-03-12T18:55:52Z)
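SENPI's actual API is not reproduced in this summary; as a hedged sketch of the idealized contrast-threshold model that frame-to-event converters such as this digital twin build on, an event of polarity ±1 fires at a pixel once its log intensity drifts by at least a threshold C from the value at its last event:

```python
# Idealized contrast-threshold model used by frame-to-event converters.
# An event fires at a pixel when its log intensity changes by at least C
# since the pixel's last event. Textbook model, not SENPI's actual API.
import numpy as np

def frames_to_events(frames: np.ndarray, timestamps: np.ndarray,
                     C: float = 0.2, eps: float = 1e-6) -> list:
    """frames: (N, H, W) intensity frames; timestamps: (N,) seconds.
    Returns (t, y, x, polarity) tuples with polarity in {-1, +1}.
    """
    log_ref = np.log(frames[0].astype(np.float64) + eps)  # per-pixel reference
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(np.float64) + eps)
        diff = log_i - log_ref
        fired = np.abs(diff) >= C                  # threshold crossings
        ys, xs = np.nonzero(fired)
        pols = np.where(diff[ys, xs] > 0, 1, -1)
        events.extend(zip([t] * len(ys), ys.tolist(), xs.tolist(),
                          pols.tolist()))
        log_ref[fired] = log_i[fired]              # reset only fired pixels
    return events
```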
- ADV2E: Bridging the Gap Between Analogue Circuit and Discrete Frames in the Video-to-Events Simulator [6.783044920569469]
Event cameras operate fundamentally differently from traditional Active Pixel Sensor (APS) cameras, offering significant advantages.
Recent research has developed simulators to convert video frames into events, addressing the shortage of real event datasets.
We propose a novel method of generating reliable event data based on a detailed analysis of the pixel circuitry in event cameras.
arXiv Detail & Related papers (2024-11-19T05:52:51Z)
- Evaluating Image-Based Face and Eye Tracking with Event Cameras [9.677797822200965]
Event cameras, also known as neuromorphic sensors, capture changes in local light intensity at the pixel level, producing asynchronously generated data termed "events".
This data format mitigates common issues observed in conventional cameras, like under-sampling when capturing fast-moving objects.
We evaluate the viability of integrating conventional algorithms with event-based data, transformed into a frame format.
arXiv Detail & Related papers (2024-08-19T20:27:08Z)
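The summary above does not say which frame format the evaluation uses; one common choice for feeding event data to conventional algorithms is a two-channel polarity histogram over a time window, sketched here (function names are illustrative, not from the paper):

```python
# One common events-to-frame transform: accumulate event polarities over a
# time window into a 2-channel histogram (positive / negative counts).
# The representation used in the paper may differ; this is illustrative.
import numpy as np

def events_to_histogram(events: np.ndarray, height: int, width: int):
    """events: (N, 4) array of (t, y, x, polarity in {-1, +1})."""
    hist = np.zeros((2, height, width), dtype=np.float32)
    ys = events[:, 1].astype(int)
    xs = events[:, 2].astype(int)
    pos = events[:, 3] > 0
    np.add.at(hist[0], (ys[pos], xs[pos]), 1.0)    # positive events
    np.add.at(hist[1], (ys[~pos], xs[~pos]), 1.0)  # negative events
    return hist  # can be fed to conventional frame-based algorithms
```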
- A Novel Spike Transformer Network for Depth Estimation from Event Cameras via Cross-modality Knowledge Distillation [3.355813093377501]
Event cameras encode temporal changes in light intensity as asynchronous binary spikes.
Their unconventional spiking output and the scarcity of labelled datasets pose significant challenges to traditional image-based depth estimation methods.
We propose a novel energy-efficient Spike-Driven Transformer Network (SDT) for depth estimation, leveraging the unique properties of spiking data.
arXiv Detail & Related papers (2024-04-26T11:32:53Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields [80.94515892378053]
EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometric-based learnable rendering, but prior work with events has only considered reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
arXiv Detail & Related papers (2023-10-03T21:08:41Z)
- EventTransAct: A video transformer-based framework for Event-camera based action recognition [52.537021302246664]
Event cameras offer new opportunities for action recognition compared to standard RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
To better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
arXiv Detail & Related papers (2023-08-25T23:51:07Z)
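The form of $\mathcal{L}_{EC}$ is not given in this summary; contrastive losses of this kind commonly follow an InfoNCE-style formulation over embeddings of two augmented views, sketched here as a generic example rather than the paper's exact loss:

```python
# Generic InfoNCE-style contrastive loss between embeddings of two
# augmented views. The paper's L_EC may differ; this shows the family.
import torch
import torch.nn.functional as F

def contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                     temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (B, D) embeddings of two views of the same clips."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature     # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    # Matching rows/columns are positives; all others are negatives.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```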
- BlinkFlow: A Dataset to Push the Limits of Event-based Optical Flow Estimation [76.66876888943385]
Event cameras provide high temporal precision, low data rates, and high dynamic range visual perception.
We present a novel simulator, BlinkSim, for the fast generation of large-scale data for event-based optical flow.
arXiv Detail & Related papers (2023-03-14T09:03:54Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Event Camera Simulator Design for Modeling Attention-based Inference Architectures [4.409836695738517]
This paper presents an event camera simulator that can be a potent tool for hardware design prototyping.
The proposed simulator implements a distributed computation model to identify relevant regions in an image frame.
Our experimental results show that the simulator can effectively emulate event vision with low overheads.
arXiv Detail & Related papers (2021-05-03T22:41:45Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
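The summary only states that a recurrent architecture outperforms feed-forward methods; as an illustration of why recurrence suits sparse event input (hidden state accumulates evidence across event frames), a toy sketch follows. This network is an assumption for exposition, not the paper's architecture:

```python
# Toy recurrent depth estimator over a sequence of event frames.
# Illustrative only: it shows hidden state carrying evidence across
# sparse event inputs, which a feed-forward network cannot do.
import torch
import torch.nn as nn

class RecurrentDepth(nn.Module):
    def __init__(self, in_ch: int = 2, hidden: int = 32):
        super().__init__()
        self.hidden = hidden
        self.encode = nn.Conv2d(in_ch, hidden, 3, padding=1)
        self.update = nn.Conv2d(2 * hidden, hidden, 3, padding=1)
        self.head = nn.Conv2d(hidden, 1, 1)    # per-pixel (log-)depth

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        """frames: (T, B, C, H, W) sequence of event histograms."""
        t, b, _, hgt, wid = frames.shape
        h = frames.new_zeros(b, self.hidden, hgt, wid)
        for x in frames:                        # recurrent state update
            h = torch.tanh(self.update(torch.cat([self.encode(x), h], dim=1)))
        return self.head(h)                     # depth map from final state
```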
- Learning to Detect Objects with a 1 Megapixel Event Camera [14.949946376335305]
Event cameras encode visual information with high temporal precision, low data rate, and high dynamic range.
Due to the novelty of the field, the performance of event-based systems on many vision tasks is still lower compared to conventional frame-based solutions.
arXiv Detail & Related papers (2020-09-28T16:03:59Z)
- SimAug: Learning Robust Representations from Simulation for Trajectory Prediction [78.91518036949918]
We propose a novel approach to learn robust representations by augmenting the simulation training data.
We show that SimAug achieves promising results on three real-world benchmarks using zero real training data.
arXiv Detail & Related papers (2020-04-04T21:22:01Z)