Event Camera Simulator Design for Modeling Attention-based Inference
Architectures
- URL: http://arxiv.org/abs/2105.01203v1
- Date: Mon, 3 May 2021 22:41:45 GMT
- Title: Event Camera Simulator Design for Modeling Attention-based Inference
Architectures
- Authors: Md Jubaer Hossain Pantho, Joel Mandebi Mbongue, Pankaj Bhowmik,
Christophe Bobda
- Abstract summary: This paper presents an event camera simulator that can be a potent tool for hardware design prototyping.
The proposed simulator implements a distributed computation model to identify relevant regions in an image frame.
Our experimental results show that the simulator can effectively emulate event vision with low overheads.
- Score: 4.409836695738517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, there has been a growing interest in realizing methodologies
to integrate more and more computation at the level of the image sensor. The
rising trend has seen an increased research interest in developing novel event
cameras that can facilitate CNN computation directly in the sensor. However,
event-based cameras are not generally available in the market, limiting
performance exploration on high-level models and algorithms. This paper
presents an event camera simulator that can be a potent tool for hardware
design prototyping, parameter optimization, attention-based innovative
algorithm development, and benchmarking. The proposed simulator implements a
distributed computation model to identify relevant regions in an image frame.
Our simulator's relevance computation model is realized as a collection of
modules and performs computations in parallel. The distributed computation
model is configurable, making it highly useful for design space exploration.
The Rendering engine of the simulator samples frame-regions only when there is
a new event. The simulator closely emulates an image processing pipeline
similar to that of physical cameras. Our experimental results show that the
simulator can effectively emulate event vision with low overheads.
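As an illustrative sketch only (not the paper's actual implementation), the event-driven sampling the abstract describes — generating events only where pixel intensity changes — can be approximated with the standard DVS log-intensity contrast model; the function name, threshold value, and single-event-per-crossing simplification below are all assumptions for illustration:

```python
import numpy as np

def frames_to_events(frames, timestamps, threshold=0.2):
    """Convert a sequence of intensity frames into sparse events.

    Emits one (t, x, y, polarity) event wherever the per-pixel log
    intensity changes by at least `threshold` relative to a reference
    level (simplified: one event per crossing, not one per multiple
    of the threshold as in a physical DVS pixel).
    """
    eps = 1e-6  # avoid log(0) on dark pixels
    ref = np.log(frames[0].astype(np.float64) + eps)  # reference log intensity
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(frame.astype(np.float64) + eps)
        diff = log_i - ref
        ys, xs = np.nonzero(np.abs(diff) >= threshold)  # only changed pixels
        for y, x in zip(ys, xs):
            polarity = 1 if diff[y, x] > 0 else -1
            events.append((t, int(x), int(y), polarity))
            # advance the reference only where an event fired,
            # mirroring event-driven (rather than full-frame) sampling
            ref[y, x] += polarity * threshold
    return events
```

A rendering engine in the spirit of the abstract would then re-sample only the frame regions covered by these events, leaving unchanged regions untouched.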
Related papers
- Event Quality Score (EQS): Assessing the Realism of Simulated Event Camera Streams via Distances in Latent Space [20.537672896807063]
Event cameras promise a paradigm shift in vision sensing with their low latency, high dynamic range, and asynchronous nature of events.
We introduce event quality score (EQS), a quality metric that utilizes activations of the RVT architecture.
arXiv Detail & Related papers (2025-04-16T22:25:57Z)
- ADV2E: Bridging the Gap Between Analogue Circuit and Discrete Frames in the Video-to-Events Simulator [6.783044920569469]
Event cameras operate fundamentally differently from traditional Active Pixel Sensor (APS) cameras, offering significant advantages.
Recent research has developed simulators to convert video frames into events, addressing the shortage of real event datasets.
We propose a novel method of generating reliable event data based on a detailed analysis of the pixel circuitry in event cameras.
arXiv Detail & Related papers (2024-11-19T05:52:51Z) - GarchingSim: An Autonomous Driving Simulator with Photorealistic Scenes
and Minimalist Workflow [24.789118651720045]
We introduce an autonomous driving simulator with photorealistic scenes.
The simulator is able to communicate with external algorithms through ROS2 or Socket.IO.
We implement a highly accurate vehicle dynamics model within the simulator to enhance the realism of the vehicle's physical effects.
arXiv Detail & Related papers (2024-01-28T23:26:15Z) - Informal Safety Guarantees for Simulated Optimizers Through
Extrapolation from Partial Simulations [0.0]
Self-supervised learning is the backbone of state-of-the-art language modeling.
It has been argued that training with a predictive loss on a self-supervised dataset causes models to become simulators.
arXiv Detail & Related papers (2023-11-29T09:32:56Z) - EventTransAct: A video transformer-based framework for Event-camera
based action recognition [52.537021302246664]
Event cameras offer new opportunities compared to standard action recognition in RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
To better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
arXiv Detail & Related papers (2023-08-25T23:51:07Z) - On the Generation of a Synthetic Event-Based Vision Dataset for
Navigation and Landing [69.34740063574921]
This paper presents a methodology for generating event-based vision datasets from optimal landing trajectories.
We construct sequences of photorealistic images of the lunar surface with the Planet and Asteroid Natural Scene Generation Utility.
We demonstrate that the pipeline can generate realistic event-based representations of surface features by constructing a dataset of 500 trajectories.
arXiv Detail & Related papers (2023-08-01T09:14:20Z) - Continual learning autoencoder training for a particle-in-cell
simulation via streaming [52.77024349608834]
The upcoming exascale era will provide a new generation of high-resolution physics simulations.
This high resolution will impact the training of machine learning models, since storing such a large amount of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently to a running simulation without data on a disk.
arXiv Detail & Related papers (2022-11-09T09:55:14Z) - Real-time event simulation with frame-based cameras [13.045658279006524]
Event simulators minimize the need for real event cameras to develop novel algorithms.
This work proposes simulation methods that improve the performance of event simulation by two orders of magnitude.
arXiv Detail & Related papers (2022-09-10T10:35:53Z) - Deep Learning for Real Time Satellite Pose Estimation on Low Power Edge
TPU [58.720142291102135]
In this paper we propose a pose estimation software exploiting neural network architectures.
We show how low power machine learning accelerators could enable Artificial Intelligence exploitation in space.
arXiv Detail & Related papers (2022-04-07T08:53:18Z) - Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z) - Enhanced Frame and Event-Based Simulator and Event-Based Video
Interpolation Network [1.4095425725284465]
We present a new, advanced event simulator that can produce realistic scenes recorded by a camera rig with an arbitrary number of sensors located at fixed offsets.
It includes a new frame-based image sensor model with realistic image quality reduction effects, and an extended DVS model with more accurate characteristics.
We show that data generated by our simulator can be used to train our new model, leading to reconstructed images on public datasets of equivalent or better quality than the state of the art.
arXiv Detail & Related papers (2021-12-17T08:27:13Z)
- SimAug: Learning Robust Representations from Simulation for Trajectory Prediction [78.91518036949918]
We propose a novel approach to learn robust representation through augmenting the simulation training data.
We show that SimAug achieves promising results on three real-world benchmarks using zero real training data.
arXiv Detail & Related papers (2020-04-04T21:22:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.