Prior-enlightened and Motion-robust Video Deblurring
- URL: http://arxiv.org/abs/2003.11209v2
- Date: Thu, 26 Mar 2020 02:30:40 GMT
- Title: Prior-enlightened and Motion-robust Video Deblurring
- Authors: Ya Zhou, Jianfeng Xu, Kazuyuki Tasaka, Zhibo Chen, Weiping Li
- Abstract summary: PRiOr-enlightened and MOTION-robust deblurring model (PROMOTION) suitable for challenging blurs.
We use 3D group convolution to efficiently encode heterogeneous prior information.
We also design priors representing the blur distribution to better handle non-uniform blur in the spatio-temporal domain.
- Score: 29.158836861982742
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various blur distortions in video cause a negative impact on both human
viewing and video-based applications, which makes motion-robust deblurring
methods urgently needed. Most existing works have strong dataset dependency and
limited generalization ability in handling challenging scenarios, such as blur in
low-contrast or severe-motion areas, and non-uniform blur. Therefore, we
propose a PRiOr-enlightened and MOTION-robust video deblurring model
(PROMOTION) suitable for challenging blurs. On the one hand, we use 3D group
convolution to efficiently encode heterogeneous prior information, explicitly
enhancing the scene's perception while mitigating artifacts in the output. On
the other hand, we design priors representing the blur distribution to better
handle non-uniform blur in the spatio-temporal domain. Beyond the classical
global blur caused by camera shake, we also demonstrate generalization to a
downstream task suffering from local blur. Extensive experiments demonstrate
that we achieve state-of-the-art performance on the well-known REDS and GoPro
datasets and bring gains for machine vision tasks.
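The core mechanism named in the abstract, grouped 3D convolution over stacked prior channels, can be illustrated with a minimal sketch. This is not the authors' code: the shapes, the choice of priors per group, and the naive loop implementation are all illustrative assumptions. The point shown is that with a group count equal to the number of heterogeneous priors, each prior's channels are convolved independently rather than mixed.

```python
import numpy as np

def group_conv3d(x, weights, groups):
    """Naive grouped 3D convolution (stride 1, no padding).

    x:       (C_in, T, H, W) input volume (frames + stacked prior maps)
    weights: (C_out, C_in // groups, kT, kH, kW) filters
    groups:  number of channel groups; filters in group g only see
             the g-th slice of input channels
    """
    c_in, T, H, W = x.shape
    c_out, c_in_g, kT, kH, kW = weights.shape
    assert c_in % groups == 0 and c_out % groups == 0
    assert c_in_g == c_in // groups
    oT, oH, oW = T - kT + 1, H - kH + 1, W - kW + 1
    out = np.zeros((c_out, oT, oH, oW))
    c_out_g = c_out // groups
    for g in range(groups):
        # This group's slice of input channels; other groups are never read.
        xg = x[g * c_in_g:(g + 1) * c_in_g]
        for oc in range(c_out_g):
            w = weights[g * c_out_g + oc]
            for t in range(oT):
                for i in range(oH):
                    for j in range(oW):
                        patch = xg[:, t:t + kT, i:i + kH, j:j + kW]
                        out[g * c_out_g + oc, t, i, j] = np.sum(patch * w)
    return out

# Hypothetical example: 2 groups (e.g. video frames vs. a blur-distribution
# prior), each contributing 2 channels, over 4 frames of 8x8 maps.
x = np.random.rand(4, 4, 8, 8)
w = np.random.rand(4, 2, 3, 3, 3)   # 4 output channels, 2 input channels/group
y = group_conv3d(x, w, groups=2)
print(y.shape)  # (4, 2, 6, 6)
```

Because each group is convolved in isolation, perturbing one prior's channels leaves the other group's output untouched, which is what lets heterogeneous priors be encoded efficiently without interfering with each other.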
Related papers
- Towards Real-world Event-guided Low-light Video Enhancement and Deblurring [39.942568142125126]
Event cameras have emerged as a promising solution for improving image quality in low-light environments.
We introduce an end-to-end framework to effectively handle these tasks.
Our framework incorporates a module to efficiently leverage temporal information from events and frames.
arXiv Detail & Related papers (2024-08-27T09:44:54Z)
- DaBiT: Depth and Blur informed Transformer for Joint Refocusing and Super-Resolution [4.332534893042983]
In many real-world scenarios, recorded videos suffer from accidental focus blur.
This paper introduces a framework optimised for focal deblurring (refocusing) and video super-resolution (VSR)
We achieve state-of-the-art results, with an average PSNR more than 1.9 dB higher than comparable existing video restoration methods.
arXiv Detail & Related papers (2024-07-01T12:22:16Z)
- DeblurGS: Gaussian Splatting for Camera Motion Blur [45.13521168573883]
We propose DeblurGS, a method to optimize sharp 3D Gaussian Splatting from motion-blurred images.
We restore a fine-grained sharp scene by leveraging the remarkable reconstruction capability of 3D Gaussian Splatting.
Our approach estimates the 6-Degree-of-Freedom camera motion for each blurry observation and synthesizes corresponding blurry renderings.
arXiv Detail & Related papers (2024-04-17T13:14:52Z)
- Gaussian Splatting on the Move: Blur and Rolling Shutter Compensation for Natural Camera Motion [25.54868552979793]
We present a method that adapts to camera motion and allows high-quality scene reconstruction with handheld video data.
Our results with both synthetic and real data demonstrate superior performance in mitigating camera motion over existing methods.
arXiv Detail & Related papers (2024-03-20T06:19:41Z)
- Lightweight High-Speed Photography Built on Coded Exposure and Implicit Neural Representation of Videos [34.152901518593396]
The demand for compact cameras capable of recording high-speed scenes with high resolution is steadily increasing.
However, achieving such capabilities often entails high bandwidth requirements, resulting in bulky, heavy systems unsuitable for low-capacity platforms.
We propose a novel approach to address these challenges by combining the classical coded exposure imaging technique with the emerging implicit neural representation for videos.
arXiv Detail & Related papers (2023-11-22T03:41:13Z)
- Joint Video Multi-Frame Interpolation and Deblurring under Unknown Exposure Time [101.91824315554682]
In this work, we aim ambitiously for a more realistic and challenging task - joint video multi-frame interpolation and deblurring under unknown exposure time.
We first adopt a variant of supervised contrastive learning to construct an exposure-aware representation from input blurred frames.
We then build our video reconstruction network upon the exposure and motion representation by progressive exposure-adaptive convolution and motion refinement.
arXiv Detail & Related papers (2023-03-27T09:43:42Z)
- Towards Interpretable Video Super-Resolution via Alternating Optimization [115.85296325037565]
We study a practical space-time video super-resolution (STVSR) problem which aims at generating a high-framerate high-resolution sharp video from a low-framerate blurry video.
We propose an interpretable STVSR framework by leveraging both model-based and learning-based methods.
arXiv Detail & Related papers (2022-07-21T21:34:05Z)
- Render In-between: Motion Guided Video Synthesis for Action Interpolation [53.43607872972194]
We propose a motion-guided frame-upsampling framework that is capable of producing realistic human motion and appearance.
A novel motion model is trained to infer the non-linear skeletal motion between frames by leveraging a large-scale motion-capture dataset.
Our pipeline requires only low-frame-rate videos and unpaired human motion data for training; no high-frame-rate videos are needed.
arXiv Detail & Related papers (2021-11-01T15:32:51Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Exposure Trajectory Recovery from Motion Blur [90.75092808213371]
Motion blur in dynamic scenes is an important yet challenging research topic.
In this paper, we define exposure trajectories, which represent the motion information contained in a blurry image.
A novel motion offset estimation framework is proposed to model pixel-wise displacements of the latent sharp image.
arXiv Detail & Related papers (2020-10-06T05:23:33Z)
- Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior [63.11478060678794]
We propose an effective motion-excited sampler to obtain motion-aware noise prior.
By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer queries.
arXiv Detail & Related papers (2020-03-17T10:54:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.