DeFMO: Deblurring and Shape Recovery of Fast Moving Objects
- URL: http://arxiv.org/abs/2012.00595v3
- Date: Tue, 30 Mar 2021 09:14:34 GMT
- Title: DeFMO: Deblurring and Shape Recovery of Fast Moving Objects
- Authors: Denys Rozumnyi, Martin R. Oswald, Vittorio Ferrari, Jiri Matas, Marc
Pollefeys
- Abstract summary: The proposed generative model embeds an image of the blurred object into a latent space representation, disentangles the background, and renders the sharp appearance.
DeFMO outperforms the state of the art and generates high-quality temporal super-resolution frames.
- Score: 139.67524021201103
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Objects moving at high speed appear significantly blurred when captured with
cameras. The blurry appearance is especially ambiguous when the object has
complex shape or texture. In such cases, classical methods, or even humans, are
unable to recover the object's appearance and motion. We propose a method that,
given a single image with its estimated background, outputs the object's
appearance and position in a series of sub-frames as if captured by a
high-speed camera (i.e. temporal super-resolution). The proposed generative
model embeds an image of the blurred object into a latent space representation,
disentangles the background, and renders the sharp appearance. Inspired by the
image formation model, we design novel self-supervised loss function terms that
boost performance and show good generalization capabilities. The proposed DeFMO
method is trained on a complex synthetic dataset, yet it performs well on
real-world data from several datasets. DeFMO outperforms the state of the art
and generates high-quality temporal super-resolution frames.
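The abstract's key idea is that the self-supervised losses follow the image formation model: a motion-blurred observation is approximately the temporal average of sharp sub-frames composited over the static background. The following is a minimal illustrative sketch of that reconstruction term in a PyTorch-style setup; the tensor names, shapes, and helper functions are assumptions for illustration, not the authors' released code.

```python
# Illustrative sketch (not the DeFMO implementation) of the FMO image
# formation model that motivates the self-supervised reconstruction loss.
import torch
import torch.nn.functional as F


def composite_subframes(appearance, mask, background):
    """Render each sharp sub-frame over the estimated background.

    appearance: (N, 3, H, W) sharp object appearance per sub-frame, in [0, 1]
    mask:       (N, 1, H, W) object alpha matte per sub-frame, in [0, 1]
    background: (3, H, W)    estimated static background
    returns:    (N, 3, H, W) composited sub-frames
    """
    return mask * appearance + (1.0 - mask) * background.unsqueeze(0)


def image_formation_loss(appearance, mask, background, blurred_image):
    """Self-supervised term: averaging the rendered sub-frames over the
    exposure time should reproduce the observed motion-blurred image."""
    subframes = composite_subframes(appearance, mask, background)
    reconstruction = subframes.mean(dim=0)  # temporal average ~ motion blur
    return F.l1_loss(reconstruction, blurred_image)
```

In this sketch the decoder's outputs (per-sub-frame appearance and mask) are supervised only through the blurred input and the background, which is what allows the loss to be computed without sharp ground-truth frames.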
Related papers
- Retrieval Robust to Object Motion Blur [54.34823913494456]
We propose a method for object retrieval in images that are affected by motion blur.
We present the first large-scale datasets for blurred object retrieval.
Our method outperforms state-of-the-art retrieval methods on the new blur-retrieval datasets.
arXiv Detail & Related papers (2024-04-27T23:22:39Z) - SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z) - ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural
Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z) - FlexNeRF: Photorealistic Free-viewpoint Rendering of Moving Humans from
Sparse Views [71.77680030806513]
We present FlexNeRF, a method for photorealistic freeviewpoint rendering of humans in motion from monocular videos.
Our approach works well with sparse views, which is a challenging scenario when the subject is exhibiting fast/complex motions.
Thanks to our novel temporal and cyclic consistency constraints, our approach provides high-quality outputs even as the observed views become sparser.
arXiv Detail & Related papers (2023-03-25T05:47:08Z) - Learning Object-Centric Neural Scattering Functions for Free-Viewpoint
Relighting and Scene Composition [28.533032162292297]
We propose Object-Centric Neural Scattering Functions (OSFs) for learning to reconstruct object appearance from images alone.
OSFs not only support free-viewpoint object relighting, but can also model both opaque and translucent objects.
Experiments on real and synthetic data show that OSFs accurately reconstruct appearances for both opaque and translucent objects.
arXiv Detail & Related papers (2023-03-10T18:55:46Z) - Event-based Non-Rigid Reconstruction from Contours [17.049602518532847]
We propose a novel approach for reconstructing such deformations using measurements from event-based cameras.
Under the assumption of a static background, where all events are generated by the motion, our approach estimates the deformation of objects from events generated at the object contour.
It associates events to mesh faces on the contour and maximizes the alignment of the line of sight through the event pixel with the associated face.
arXiv Detail & Related papers (2022-10-12T14:53:11Z) - Shape from Blur: Recovering Textured 3D Shape and Motion of Fast Moving
Objects [115.71874459429381]
We address the novel task of jointly reconstructing the 3D shape, texture, and motion of an object from a single motion-blurred image.
While previous approaches address the deblurring problem only in the 2D image domain, our proposed rigorous modeling of all object properties in the 3D domain enables the correct description of arbitrary object motion.
arXiv Detail & Related papers (2021-06-16T13:18:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.