Motion Deblurring with Real Events
- URL: http://arxiv.org/abs/2109.13695v1
- Date: Tue, 28 Sep 2021 13:11:44 GMT
- Title: Motion Deblurring with Real Events
- Authors: Fang Xu and Lei Yu and Bishan Wang and Wen Yang and Gui-Song Xia and
Xu Jia and Zhendong Qiao and Jianzhuang Liu
- Abstract summary: We propose an end-to-end learning framework for event-based motion deblurring in a self-supervised manner.
Real-world events are exploited to alleviate the performance degradation caused by data inconsistency.
- Score: 50.441934496692376
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose an end-to-end learning framework for event-based
motion deblurring in a self-supervised manner, where real-world events are
exploited to alleviate the performance degradation caused by data
inconsistency. To this end, optical flows are predicted from events,
with which the blurry consistency and photometric consistency are exploited to
enable self-supervision on the deblurring network with real-world data.
Furthermore, a piece-wise linear motion model is proposed to account for
motion non-linearities, leading to an accurate model of the physical
formation of motion blur in real-world scenarios. Extensive evaluation on
both synthetic and real motion blur datasets demonstrates that the proposed
algorithm bridges the gap between simulated and real-world motion blurs and
shows remarkable performance for event-based motion deblurring in real-world
scenarios.
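The abstract's two key ingredients can be illustrated together: under a piece-wise linear motion model, a blurry frame is the temporal average of the latent sharp frame moved along per-segment motions, and the blurry-consistency term re-blurs the predicted sharp frame and compares it with the observed blur. The sketch below is a minimal, hypothetical illustration (function names, integer shifts, and the toy data are assumptions, not the paper's implementation, which uses learned optical flow and sub-pixel warping):

```python
import numpy as np

def reblur(sharp, segment_shifts):
    # Piece-wise linear motion model (simplified sketch): the exposure is
    # split into segments, each moving with its own constant integer shift.
    # The blurry frame is the temporal average of the shifted latent frames.
    frames = [np.roll(sharp, shift=s, axis=(0, 1)) for s in segment_shifts]
    return np.mean(frames, axis=0)

def blurry_consistency_loss(blur_obs, sharp_pred, segment_shifts):
    # Self-supervised blurry-consistency term: re-blur the predicted sharp
    # frame with the estimated motion and compare against the observed blur.
    return float(np.abs(reblur(sharp_pred, segment_shifts) - blur_obs).mean())

# Toy check: re-blurring the true sharp frame reproduces the observed blur,
# so the loss vanishes for a perfect prediction.
sharp = np.zeros((32, 32))
sharp[10:14, 10:14] = 1.0
shifts = [(0, 0), (1, 2), (2, 4), (3, 6)]  # hypothetical per-segment motion
blur = reblur(sharp, shifts)
print(blurry_consistency_loss(blur, sharp, shifts))  # 0.0
```

In the paper this loss is complemented by a photometric consistency term derived from event-predicted optical flow, which supplies the motion that a sketch like this takes as given.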
Related papers
- Motion-prior Contrast Maximization for Dense Continuous-Time Motion Estimation [34.529280562470746]
We introduce a novel self-supervised loss combining the Contrast Maximization framework with a non-linear motion prior in the form of pixel-level trajectories.
Their effectiveness is demonstrated in two scenarios: in dense continuous-time motion estimation, our method improves the zero-shot performance of a synthetically trained model by 29%.
arXiv Detail & Related papers (2024-07-15T15:18:28Z)
- Motion-aware Latent Diffusion Models for Video Frame Interpolation [51.78737270917301]
Motion estimation between neighboring frames plays a crucial role in avoiding motion ambiguity.
We propose a novel diffusion framework, motion-aware latent diffusion models (MADiff).
Our method achieves state-of-the-art performance, significantly outperforming existing approaches.
arXiv Detail & Related papers (2024-04-21T05:09:56Z)
- TrackDiffusion: Tracklet-Conditioned Video Generation via Diffusion Models [75.20168902300166]
We propose TrackDiffusion, a novel video generation framework affording fine-grained trajectory-conditioned motion control.
A pivotal component of TrackDiffusion is the instance enhancer, which explicitly ensures inter-frame consistency of multiple objects.
The video sequences generated by TrackDiffusion can be used as training data for visual perception models.
arXiv Detail & Related papers (2023-12-01T15:24:38Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- A Diffusion-Model of Joint Interactive Navigation [14.689298253430568]
We present DJINN, a diffusion-based method for generating traffic scenarios.
Our approach jointly diffuses the trajectories of all agents, conditioned on a flexible set of state observations from the past, present, or future.
We show how DJINN flexibly enables direct test-time sampling from a variety of valuable conditional distributions.
arXiv Detail & Related papers (2023-09-21T22:10:20Z)
- Generalizing Event-Based Motion Deblurring in Real-World Scenarios [62.995994797897424]
Event-based motion deblurring has shown promising results by exploiting low-latency events.
We propose a scale-aware network that allows flexible input spatial scales and enables learning from different temporal scales of motion blur.
A two-stage self-supervised learning scheme is then developed to fit real-world data distribution.
arXiv Detail & Related papers (2023-08-11T04:27:29Z)
- Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z)
- End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking [35.95703377642108]
We propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking.
To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay representation.
We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform end-to-end 5-DoF object motion regression.
arXiv Detail & Related papers (2020-02-14T08:19:50Z)
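The Time-Surface with Linear Time Decay representation named above can be sketched as follows. This is a hedged illustration, not the paper's code: the function name, the (x, y, t, polarity) event layout, and the decay window `tau` are assumptions; the idea is that each pixel stores the polarity of its most recent event, weighted by a decay that falls linearly from 1 to 0 over the window preceding the reference time.

```python
import numpy as np

def tsltd(events, shape, t_ref, tau):
    # Time-Surface with Linear Time Decay (minimal sketch): for each event
    # (x, y, t, polarity), write the polarity scaled by a linear decay that
    # drops from 1 at t_ref to 0 at t_ref - tau. Events are assumed to be
    # sorted by time, so later events overwrite earlier ones per pixel.
    surface = np.zeros(shape, dtype=np.float32)
    for x, y, t, p in events:
        decay = max(0.0, 1.0 - (t_ref - t) / tau)
        surface[y, x] = p * decay
    return surface

# Toy example: a recent positive event dominates its pixel; an event older
# than the decay window contributes nothing.
events = [(5, 5, 0.50, -1), (2, 3, 0.90, +1), (2, 3, 0.99, +1)]
s = tsltd(events, (8, 8), t_ref=1.0, tau=0.5)
```

A sequence of such surfaces, computed at successive reference times, would form the TSLTD frames fed to the regression network.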
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.