Learning to Deblur and Generate High Frame Rate Video with an Event Camera
- URL: http://arxiv.org/abs/2003.00847v2
- Date: Fri, 20 Mar 2020 04:09:55 GMT
- Title: Learning to Deblur and Generate High Frame Rate Video with an Event Camera
- Authors: Chen Haoyu, Teng Minggui, Shi Boxin, Wang Yizhou and Huang Tiejun
- Abstract summary: Event cameras do not suffer from motion blur when recording high-speed scenes.
We formulate the deblurring task on traditional cameras, guided by events, as a residual learning problem.
We propose corresponding network architectures for effective learning of deblurring and high frame rate video generation tasks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras are bio-inspired cameras which can measure the change of
intensity asynchronously with high temporal resolution. One of the event
cameras' advantages is that they do not suffer from motion blur when recording
high-speed scenes. In this paper, we formulate the deblurring task on
traditional cameras, guided by events, as a residual learning problem, and we
propose corresponding network architectures for effective learning of the
deblurring and high frame rate video generation tasks. We first train a
modified U-Net to restore a sharp image from a blurry image using
corresponding events. Then we train another similar network with different
downsampling blocks to generate high frame rate video using the restored sharp
image and events. Experimental results show that our method can restore sharper
images and videos than state-of-the-art methods.
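To make the residual formulation concrete, here is a minimal sketch, not the
authors' released code: a toy two-level U-Net that takes the blurry frame
concatenated with an event representation and predicts a residual that is
added back to the input. The five-bin voxel-grid event input, channel widths,
and network depth are illustrative assumptions.

```python
# Minimal sketch of residual-learning, event-guided deblurring (assumptions:
# a 5-bin event voxel grid, toy channel widths; not the paper's exact model).
import torch
import torch.nn as nn

class ConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

class ResidualDeblurUNet(nn.Module):
    """Toy U-Net realizing sharp = blurry + f(blurry, events)."""
    def __init__(self, event_bins=5):
        super().__init__()
        self.enc1 = ConvBlock(3 + event_bins, 32)
        self.enc2 = ConvBlock(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = ConvBlock(64, 128)
        self.up2 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec2 = ConvBlock(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = ConvBlock(64, 32)
        self.head = nn.Conv2d(32, 3, 1)

    def forward(self, blurry, events):
        x = torch.cat([blurry, events], dim=1)   # stack frame and events
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        residual = self.head(d1)
        return blurry + residual  # residual learning: predict the correction

blurry = torch.rand(1, 3, 128, 128)   # blurry RGB frame
events = torch.rand(1, 5, 128, 128)   # placeholder event voxel grid
sharp = ResidualDeblurUNet()(blurry, events)
```

Under the paper's description, a similar network with different downsampling
blocks would then take the restored sharp image and the events as input to
generate the high frame rate video.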
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
- Gradient events: improved acquisition of visual information in event cameras [0.0]
We propose a new type of event, the gradient event, which benefits from the same properties as a conventional brightness event.
We show that gradient event-based video reconstruction outperforms existing state-of-the-art brightness event-based methods by a significant margin.
arXiv Detail & Related papers (2024-09-03T10:18:35Z)
- CMTA: Cross-Modal Temporal Alignment for Event-guided Video Deblurring [44.30048301161034]
Video deblurring aims to enhance the quality of restored results in motion-blurred videos by gathering information from adjacent video frames.
We propose two modules: 1) intra-frame feature enhancement, which operates within the exposure time of a single blurred frame, and 2) inter-frame temporal feature alignment, which gathers valuable long-range temporal information for target frames.
We demonstrate that our proposed methods outperform state-of-the-art frame-based and event-based motion deblurring methods through extensive experiments conducted on both synthetic and real-world deblurring datasets.
arXiv Detail & Related papers (2024-08-27T10:09:17Z)
- EventAid: Benchmarking Event-aided Image/Video Enhancement Algorithms with Real-captured Hybrid Dataset [55.12137324648253]
Event cameras are an emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed.
This paper focuses on five event-aided image and video enhancement tasks.
arXiv Detail & Related papers (2023-12-13T15:42:04Z)
- Aggregating Long-term Sharp Features via Hybrid Transformers for Video Deblurring [76.54162653678871]
We propose a video deblurring method that leverages both neighboring frames and sharp frames present in the video, using hybrid Transformers for feature aggregation.
Our proposed method outperforms state-of-the-art video deblurring methods as well as event-driven video deblurring methods in terms of quantitative metrics and visual quality.
arXiv Detail & Related papers (2023-09-13T16:12:11Z)
- Unfolding a blurred image [36.519356428362286]
We learn a motion representation from sharp videos in an unsupervised manner.
We then train a convolutional recurrent video autoencoder network that performs a surrogate task of video reconstruction.
It is employed for guided training of a motion encoder for blurred images.
This network extracts embedded motion information from the blurred image to generate a sharp video in conjunction with the trained recurrent video decoder.
arXiv Detail & Related papers (2022-01-28T09:39:55Z)
- MEFNet: Multi-scale Event Fusion Network for Motion Deblurring [62.60878284671317]
Traditional frame-based cameras inevitably suffer from motion blur due to long exposure times.
As a kind of bio-inspired camera, the event camera records the intensity changes in an asynchronous way with high temporal resolution.
In this paper, we rethink the event-based image deblurring problem and unfold it into an end-to-end two-stage image restoration network.
arXiv Detail & Related papers (2021-11-30T23:18:35Z)
- Restoration of Video Frames from a Single Blurred Image with Motion Understanding [69.90724075337194]
We propose a novel framework to generate clean video frames from a single motion-blurred image.
We formulate video restoration from a single blurred image as an inverse problem by setting clean image sequence and their respective motion as latent factors.
Our framework is based on an encoder-decoder structure with spatial transformer network modules.
arXiv Detail & Related papers (2021-04-19T08:32:57Z)
- Reducing the Sim-to-Real Gap for Event Cameras [64.89183456212069]
Event cameras are paradigm-shifting novel sensors that report asynchronous, per-pixel brightness changes called 'events' with unparalleled low latency.
Recent work has demonstrated impressive results using Convolutional Neural Networks (CNNs) for video reconstruction and optic flow with events.
We present strategies for improving training data for event-based CNNs that result in a 20-40% boost in the performance of existing video reconstruction networks; a minimal sketch of the underlying event-generation model follows this list.
arXiv Detail & Related papers (2020-03-20T02:44:29Z)
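As referenced in the sim-to-real entry above, training events are typically
synthesized from conventional video with the standard contrast-threshold
event model. Below is a minimal sketch under assumed parameters; the threshold
C, the per-frame timestamps, and the lack of sub-frame interpolation are
simplifications, not details from that paper.

```python
# Minimal contrast-threshold event simulator (assumed C=0.2; events within a
# frame interval all receive the frame timestamp, a simplification).
import numpy as np

def simulate_events(frames, timestamps, C=0.2, eps=1e-3):
    """Generate (x, y, t, polarity) events from grayscale frames in [0, 1]."""
    log_ref = np.log(frames[0] + eps)  # per-pixel reference log intensity
    events = []
    for frame, t in zip(frames[1:], timestamps[1:]):
        diff = np.log(frame + eps) - log_ref
        while True:
            pos, neg = diff >= C, diff <= -C
            if not (pos.any() or neg.any()):
                break
            for mask, pol in ((pos, 1), (neg, -1)):
                ys, xs = np.nonzero(mask)
                events.extend((int(x), int(y), t, pol) for x, y in zip(xs, ys))
                log_ref[mask] += pol * C  # step the reference by one threshold
                diff[mask] -= pol * C
    return events

frames = np.random.rand(4, 8, 8).astype(np.float32)  # toy 4-frame video
evs = simulate_events(frames, timestamps=[0.0, 0.01, 0.02, 0.03])
```

Full simulators such as ESIM additionally interpolate event timestamps between
frames and model sensor noise; this toy version only illustrates the
thresholding principle.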
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.