Raw2Event: Converting Raw Frame Camera into Event Camera
- URL: http://arxiv.org/abs/2509.06767v1
- Date: Mon, 08 Sep 2025 14:53:01 GMT
- Title: Raw2Event: Converting Raw Frame Camera into Event Camera
- Authors: Zijie Ning, Enmin Lin, Sudarshan R. Iyengar, Patrick Vandewalle
- Abstract summary: Event cameras offer unique advantages such as high temporal resolution, low latency, and high dynamic range. However, their high cost, limited resolution, and lack of features such as autofocus hinder their broad adoption. We present Raw2Event, a complete hardware-software system that enables real-time event generation from low-cost raw frame-based cameras.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Event cameras offer unique advantages such as high temporal resolution, low latency, and high dynamic range, making them more and more popular for vision tasks under challenging light conditions. However, their high cost, limited resolution, and lack of features such as autofocus hinder their broad adoption, particularly for early-stage development and prototyping. In this work, we present Raw2Event, a complete hardware-software system that enables real-time event generation from low-cost raw frame-based cameras. By leveraging direct access to raw Bayer data and bypassing traditional image signal processors (ISP), our system is able to utilize the full potential of camera hardware, delivering higher dynamic range, higher resolution, and more faithful output than RGB-based frame-to-event converters. Built upon the DVS-Voltmeter model, Raw2Event features a configurable simulation framework optimized for deployment on embedded platforms. We further design a data acquisition pipeline that supports synchronized recording of raw, RGB, and event streams, facilitating downstream evaluation and dataset creation. Experimental results show that Raw2Event can generate event streams closely resembling those from real event cameras, while benefiting from higher resolution and autofocus capabilities. The system also supports user-intuitive parameter tuning, enabling flexible adaptation to various application requirements. Finally, we deploy the system on a Raspberry Pi for real-time operation, providing a scalable and cost-effective solution for event-based vision research and early-stage system development. The code is available online: https://anonymous.4open.science/r/raw2event-BFF2/README.md.
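The core idea behind frame-to-event conversion can be illustrated with the idealized contrast-threshold event model: a pixel emits an event whenever its log intensity changes by more than a fixed threshold since the last event. The sketch below is a simplified deterministic model, not the stochastic DVS-Voltmeter model the paper actually builds on; `frames_to_events` and its parameters are illustrative names, not the paper's API.

```python
import numpy as np

def frames_to_events(prev_frame, curr_frame, threshold=0.2, eps=1e-6):
    """Emit events where the log-intensity change exceeds the contrast threshold.

    Simplified idealized DVS model: at most one event per pixel per frame pair,
    no noise or per-pixel threshold variation (unlike DVS-Voltmeter).
    Returns (xs, ys, polarity) for pixels that crossed the threshold,
    with polarity +1 for ON (brighter) and -1 for OFF (darker) events.
    """
    log_prev = np.log(prev_frame.astype(np.float64) + eps)
    log_curr = np.log(curr_frame.astype(np.float64) + eps)
    diff = log_curr - log_prev
    fired = np.abs(diff) >= threshold          # pixels whose change crossed the threshold
    ys, xs = np.nonzero(fired)                 # row/column indices of firing pixels
    polarity = np.where(diff[ys, xs] > 0, 1, -1)
    return xs, ys, polarity
```

In a real pipeline this would run per raw Bayer frame at the camera's native rate, with event timestamps interpolated between frames; working directly on raw sensor values rather than ISP-processed RGB is what lets the system preserve the sensor's full dynamic range.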
Related papers
- EvDiff: High Quality Video with an Event Camera [77.07279880903009]
Reconstructing intensity images from events is a highly ill-posed task due to the inherent ambiguity of absolute brightness. We propose EvDiff, an event-based diffusion model that follows a surrogate training framework to produce high-quality videos.
arXiv Detail & Related papers (2025-11-21T18:49:18Z)
- Event-Driven Dynamic Scene Depth Completion [50.01494043834177]
EventDC is the first event-driven depth completion framework. It consists of two key components: Event-Modulated Alignment (EMA) and Local Depth Filtering (LDF).
arXiv Detail & Related papers (2025-05-19T16:02:37Z)
- EventSplat: 3D Gaussian Splatting from Moving Event Cameras for Real-time Rendering [7.392798832833857]
Event cameras offer exceptional temporal resolution and a high dynamic range. We introduce a method for using event camera data in novel view synthesis via Gaussian Splatting.
arXiv Detail & Related papers (2024-12-10T08:23:58Z)
- E2HQV: High-Quality Video Generation from Event Camera via Theory-Inspired Model-Aided Deep Learning [53.63364311738552]
Bio-inspired event cameras or dynamic vision sensors are capable of capturing per-pixel brightness changes (called event-streams) in high temporal resolution and high dynamic range.
It calls for events-to-video (E2V) solutions which take event-streams as input and generate high quality video frames for intuitive visualization.
We propose E2HQV, a novel E2V paradigm designed to produce high-quality video frames from events.
arXiv Detail & Related papers (2024-01-16T05:10:50Z)
- EventAid: Benchmarking Event-aided Image/Video Enhancement Algorithms with Real-captured Hybrid Dataset [55.12137324648253]
Event cameras are an emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed.
This paper focuses on five event-aided image and video enhancement tasks.
arXiv Detail & Related papers (2023-12-13T15:42:04Z)
- Deformable Convolutions and LSTM-based Flexible Event Frame Fusion Network for Motion Deblurring [7.187030024676791]
Event cameras differ from conventional RGB cameras in that they produce asynchronous data sequences.
While RGB cameras capture every frame at a fixed rate, event cameras only capture changes in the scene, resulting in sparse and asynchronous data output.
Recent state-of-the-art CNN-based deblurring solutions produce multiple 2-D event frames based on the accumulation of event data over a time period.
It is particularly useful for scenarios in which exposure times vary depending on factors such as lighting conditions or the presence of fast-moving objects in the scene.
arXiv Detail & Related papers (2023-06-01T15:57:12Z)
- PyNet-V2 Mobile: Efficient On-Device Photo Processing With Neural Networks [115.97113917000145]
We propose a novel PyNET-V2 Mobile CNN architecture designed specifically for edge devices.
The proposed architecture is able to process RAW 12MP photos directly on mobile phones in under 1.5 seconds.
We show that the proposed architecture is also compatible with the latest mobile AI accelerators.
arXiv Detail & Related papers (2022-11-08T17:18:01Z)
- E$^2$(GO)MOTION: Motion Augmented Event Stream for Egocentric Action Recognition [21.199869051111367]
Event cameras capture pixel-level intensity changes in the form of "events".
N-EPIC-Kitchens is the first event-based camera extension of the large-scale EPIC-Kitchens dataset.
We show that event data provides a comparable performance to RGB and optical flow, yet without any additional flow computation at deploy time.
arXiv Detail & Related papers (2021-12-07T09:43:08Z)
- Learning to Detect Objects with a 1 Megapixel Event Camera [14.949946376335305]
Event cameras encode visual information with high temporal precision, low data-rate, and high-dynamic range.
Due to the novelty of the field, the performance of event-based systems on many vision tasks still lags behind conventional frame-based solutions.
arXiv Detail & Related papers (2020-09-28T16:03:59Z)
- RGB-D-E: Event Camera Calibration for Fast 6-DOF Object Tracking [16.06615504110132]
We propose to use an event-based camera to increase the speed of 3D object tracking in 6 degrees of freedom.
This application requires handling very high object speed to convey compelling AR experiences.
We develop a deep learning approach, which combines an existing RGB-D network along with a novel event-based network in a cascade fashion.
arXiv Detail & Related papers (2020-06-09T01:55:48Z)
- EventSR: From Asynchronous Events to Image Reconstruction, Restoration, and Super-Resolution via End-to-End Adversarial Learning [75.17497166510083]
Event cameras sense intensity changes and have many advantages over conventional cameras.
Some methods have been proposed to reconstruct intensity images from event streams.
The outputs are still in low resolution (LR), noisy, and unrealistic.
We propose EventSR, a novel end-to-end pipeline that reconstructs LR images from event streams, enhances image quality, and upsamples the enhanced images.
arXiv Detail & Related papers (2020-03-17T10:58:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.