Affine-modeled video extraction from a single motion blurred image
- URL: http://arxiv.org/abs/2104.03777v1
- Date: Thu, 8 Apr 2021 13:59:14 GMT
- Title: Affine-modeled video extraction from a single motion blurred image
- Authors: Daoyu Li, Liheng Bian, and Jun Zhang
- Abstract summary: A motion-blurred image is the temporal average of multiple sharp frames over the exposure time.
In this work, we report a generalized video extraction method using affine motion modeling.
Experiments on both public datasets and real captured data validate the state-of-the-art performance of the reported technique.
- Score: 3.0080996413230667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A motion-blurred image is the temporal average of multiple sharp frames over
the exposure time. Recovering these sharp video frames from a single blurred
image is nontrivial, due to not only its strong ill-posedness, but also various
types of complex motion in reality such as rotation and motion in depth. In
this work, we report a generalized video extraction method using affine motion
modeling, enabling it to tackle multiple types of complex motion and their
mixing. In its workflow, the moving objects are first segmented in the alpha
channel. This allows separate recovery of different objects with different
motion. Then, we reduce the variable space by modeling each video clip as a
series of affine transformations of a reference frame, and introduce the
$\ell_0$-norm total variation regularization to attenuate ringing artifacts.
Differentiable affine operators are employed to realize gradient-descent
optimization of the affine model, which follows a novel coarse-to-fine strategy
to further reduce artifacts. As a result, both the affine parameters and sharp
reference image are retrieved. They are finally fed into a stepwise affine
transformation to recover the sharp video frames. The stepwise retrieval
naturally bypasses the frame-order ambiguity. Experiments on both
public datasets and real captured data validate the state-of-the-art
performance of the reported technique.
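To make the workflow concrete, the sketch below (not the authors' released code) implements the blur forward model, the temporal average of affine-warped copies of a latent reference frame, and inverts it by gradient descent. It assumes PyTorch's affine_grid/grid_sample as the differentiable affine operators, uses a smooth surrogate for the non-differentiable $\ell_0$-norm TV term, and fixes illustrative values (7 frames, loss weights); the alpha-channel segmentation and the coarse-to-fine schedule are omitted for brevity.

```python
import torch
import torch.nn.functional as F

def affine_warp(img, theta):
    # img: (1, C, H, W); theta: (2, 3) affine matrix applied via differentiable sampling.
    grid = F.affine_grid(theta.unsqueeze(0), list(img.shape), align_corners=False)
    return F.grid_sample(img, grid, align_corners=False)

def render_blur(ref, thetas):
    # Blur formation model: the blurred image is the temporal average of
    # affine-warped copies of the sharp reference frame.
    frames = [affine_warp(ref, th) for th in thetas]
    return torch.stack(frames).mean(dim=0), frames

def l0_tv(img, beta=1e-2):
    # Smooth surrogate for the l0-norm total variation: x^2 / (x^2 + beta)
    # approaches 1 for |x| >> beta and 0 near zero, approximately counting
    # non-zero image gradients (exact l0 counting is non-differentiable).
    dx = img[..., :, 1:] - img[..., :, :-1]
    dy = img[..., 1:, :] - img[..., :-1, :]
    return (dx**2 / (dx**2 + beta)).sum() + (dy**2 / (dy**2 + beta)).sum()

# Jointly recover the reference frame and per-frame affine parameters.
blurred = torch.rand(1, 3, 128, 128)          # stand-in for the captured blurry image
ref = blurred.clone().requires_grad_(True)    # latent sharp reference frame
identity = torch.tensor([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
thetas = [identity.clone().requires_grad_(True) for _ in range(7)]

opt = torch.optim.Adam([ref] + thetas, lr=1e-2)
for _ in range(300):
    opt.zero_grad()
    est, frames = render_blur(ref, thetas)
    loss = F.mse_loss(est, blurred) + 1e-4 * l0_tv(ref)
    loss.backward()
    opt.step()
# `frames` now holds the extracted sharp video clip.
```

Because all frames share one parameterization (stepwise warps of a single reference), reversing the recovered order simply reverses the theta list, which is how the stepwise retrieval sidesteps the frame-order ambiguity.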
Related papers
- Shuffled Autoregression For Motion Interpolation [53.61556200049156]
This work aims to provide a deep-learning solution for the motion interpolation task.
We propose a novel framework, referred to as Shuffled AutoRegression, which extends autoregression to generate frames in an arbitrary (shuffled) order.
We also propose an approach to constructing a particular kind of dependency graph, with three stages assembled into an end-to-end spatial-temporal motion Transformer.
arXiv Detail & Related papers (2023-06-10T07:14:59Z)
- Animation from Blur: Multi-modal Blur Decomposition with Motion Guidance [83.25826307000717]
We study the challenging problem of recovering detailed motion from a single motion-blurred image.
Existing solutions to this problem estimate a single image sequence without considering the motion ambiguity for each region.
In this paper, we explicitly account for such motion ambiguity, allowing us to generate multiple plausible solutions all in sharp detail.
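The ambiguity is easy to verify: temporal averaging is order-invariant, so a frame sequence and its reversal produce the identical blurred image. A minimal check (PyTorch assumed):

```python
import torch

frames = torch.rand(7, 3, 64, 64)              # any sharp frame sequence
blur_fwd = frames.mean(dim=0)                  # blur from forward playback
blur_rev = frames.flip(dims=(0,)).mean(dim=0)  # blur from reversed playback
assert torch.allclose(blur_fwd, blur_rev)      # the two blurs are indistinguishable
```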
arXiv Detail & Related papers (2022-07-20T18:05:53Z)
- Non-linear Motion Estimation for Video Frame Interpolation using Space-time Convolutions [18.47978862083129]
Video frame interpolation aims to synthesize one or multiple frames between two consecutive frames in a video.
Some older works tackled this problem by assuming per-pixel linear motion between video frames.
We propose to approximate the per-pixel motion using a space-time convolution network that is able to adaptively select the motion model to be used.
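For contrast with that adaptive model, here is a minimal sketch of the per-pixel linear-motion baseline (a generic implementation, not this paper's network), assuming PyTorch and a precomputed optical flow `flow01` of shape (1, 2, H, W) from the first frame to the second:

```python
import torch
import torch.nn.functional as F

def backward_warp(img, flow):
    # Sample img (1, C, H, W) at positions displaced by flow (1, 2, H, W), in pixels.
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    coords = torch.stack((xs, ys)).float() + flow[0]   # absolute (x, y) coordinates
    coords[0] = 2 * coords[0] / (w - 1) - 1            # normalize x to [-1, 1]
    coords[1] = 2 * coords[1] / (h - 1) - 1            # normalize y to [-1, 1]
    return F.grid_sample(img, coords.permute(1, 2, 0).unsqueeze(0), align_corners=True)

def interpolate_linear(frame0, frame1, flow01, t=0.5):
    # Linear-motion assumption: a pixel at x at time t came from x - t*v in
    # frame0 and lands at x + (1 - t)*v in frame1, with velocity v ~ flow01.
    warp0 = backward_warp(frame0, -t * flow01)
    warp1 = backward_warp(frame1, (1 - t) * flow01)
    return (1 - t) * warp0 + t * warp1                 # simple occlusion-free blend
```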
arXiv Detail & Related papers (2022-01-27T09:49:23Z)
- Image Animation with Keypoint Mask [0.0]
Motion transfer is the task of synthesizing future video frames of a single source image according to the motion from a given driving video.
In this work, we extract the structure from a keypoint heatmap, without an explicit motion representation.
Then, the structures from the image and the video are extracted to warp the image according to the video, by a deep generator.
arXiv Detail & Related papers (2021-12-20T11:35:06Z)
- Recurrent Video Deblurring with Blur-Invariant Motion Estimation and Pixel Volumes [14.384467317051831]
We propose two novel approaches to deblurring videos by effectively aggregating information from multiple video frames.
First, we present blur-invariant motion estimation learning to improve motion estimation accuracy between blurry frames.
Second, for motion compensation, instead of aligning frames by warping with estimated motions, we use a pixel volume that contains candidate sharp pixels to resolve motion estimation errors.
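As a rough illustration of the pixel-volume idea (the 3x3 window and names are assumptions, not the paper's exact design), each location keeps every pixel in a small window around its motion-compensated position as candidate channels, so small flow errors do not discard the correct sharp pixel:

```python
import torch
import torch.nn.functional as F

def pixel_volume(warped, window=3):
    # warped: (N, C, H, W) neighbor frame coarsely aligned by estimated motion.
    # Returns (N, C * window**2, H, W): each spatial site carries all window x
    # window candidate pixels as extra channels for a network to select from.
    n, c, h, w = warped.shape
    patches = F.unfold(warped, kernel_size=window, padding=window // 2)
    return patches.view(n, c * window * window, h, w)
```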
arXiv Detail & Related papers (2021-08-23T07:36:49Z)
- Restoration of Video Frames from a Single Blurred Image with Motion Understanding [69.90724075337194]
We propose a novel framework to generate clean video frames from a single motion-blurred image.
We formulate video restoration from a single blurred image as an inverse problem by setting clean image sequence and their respective motion as latent factors.
Our framework is based on an encoder-decoder structure with spatial transformer network modules.
arXiv Detail & Related papers (2021-04-19T08:32:57Z)
- ARVo: Learning All-Range Volumetric Correspondence for Video Deblurring [92.40655035360729]
Video deblurring models exploit consecutive frames to remove blurs from camera shakes and object motions.
We propose a novel implicit method to learn spatial correspondence among blurry frames in the feature space.
Our proposed method is evaluated on the widely-adopted DVD dataset, along with a newly collected High-Frame-Rate (1000 fps) dataset for Video Deblurring.
arXiv Detail & Related papers (2021-03-07T04:33:13Z)
- Motion-blurred Video Interpolation and Extrapolation [72.3254384191509]
We present a novel framework for deblurring, interpolating and extrapolating sharp frames from a motion-blurred video in an end-to-end manner.
To ensure temporal coherence across predicted frames and address potential temporal ambiguity, we propose a simple, yet effective flow-based rule.
arXiv Detail & Related papers (2021-03-04T12:18:25Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects in a scene acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.