Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels
- URL: http://arxiv.org/abs/2111.07837v1
- Date: Mon, 15 Nov 2021 15:23:55 GMT
- Title: Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels
- Authors: Abdullah Abuolaim and Mahmoud Afifi and Michael S. Brown
- Abstract summary: One of the primary effects applied to images captured in portrait mode is a synthetic shallow depth of field (DoF).
In this work, we follow the trend of rendering the NIMAT effect by introducing a modification to the blur synthesis procedure in portrait mode.
Our modification enables high-quality synthesis of multi-view bokeh from a single image by applying rotated blurring kernels.
- Score: 48.063176079878055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Portrait mode is widely available on smartphone cameras to provide an
enhanced photographic experience. One of the primary effects applied to images
captured in portrait mode is a synthetic shallow depth of field (DoF). The
synthetic DoF (or bokeh effect) selectively blurs regions in the image to
emulate the effect of using a large lens with a wide aperture. In addition,
many applications now incorporate a new image motion attribute (NIMAT) to
emulate background motion, where the motion is correlated with estimated depth
at each pixel. In this work, we follow the trend of rendering the NIMAT effect
by introducing a modification to the blur synthesis procedure in portrait mode.
In particular, our modification enables a high-quality synthesis of multi-view
bokeh from a single image by applying rotated blurring kernels. Given the
synthesized multiple views, we can generate aesthetically realistic image
motion similar to the NIMAT effect. We validate our approach qualitatively
against the original NIMAT effect and other similar image-motion effects, such as
the Facebook 3D image. Our image motion exhibits smooth transitions between views
with fewer artifacts around the object boundary.
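To make the core idea concrete, below is a minimal sketch of depth-driven multi-view blur synthesis in the spirit of the rotated-kernel procedure described above. It is an illustrative approximation rather than the authors' implementation: the half-disk kernel shape, the quantization of the per-pixel blur radius into a few discrete levels, and the simple per-level compositing are all assumptions, and the function names are hypothetical.

```python
# Illustrative sketch (assumptions noted above), not the paper's implementation.
import numpy as np
from scipy.ndimage import convolve

def half_disk_kernel(radius, angle_deg):
    """Half-disk (dual-pixel-style) blur kernel whose straight edge is rotated by angle_deg."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x ** 2 + y ** 2) <= radius ** 2
    theta = np.deg2rad(angle_deg)
    half = (x * np.cos(theta) + y * np.sin(theta)) >= 0  # keep one half-plane of the disk
    k = (disk & half).astype(np.float64)
    return k / k.sum()

def synthesize_view(image, blur_radius_map, angle_deg, num_levels=4):
    """Blur the image with rotated half-disk kernels whose size follows the per-pixel blur radius."""
    levels = np.linspace(blur_radius_map.min(), blur_radius_map.max(), num_levels)
    # Quantize the per-pixel blur radius (derived from estimated depth) into a few levels.
    idx = np.argmin(np.abs(blur_radius_map[..., None] - levels), axis=-1)
    out = np.zeros_like(image, dtype=np.float64)
    for i, r in enumerate(levels):
        k = half_disk_kernel(max(int(round(r)), 1), angle_deg)
        blurred = np.stack(
            [convolve(image[..., c], k, mode="reflect") for c in range(image.shape[-1])],
            axis=-1,
        )
        out[idx == i] = blurred[idx == i]  # composite pixels belonging to this blur level
    return out

if __name__ == "__main__":
    # Toy example: a random image and a blur-radius map that grows with "depth".
    img = np.random.rand(64, 64, 3)
    radius_map = np.tile(np.linspace(0, 6, 64), (64, 1))
    # Rotating the kernel over a full circle yields the multiple views.
    views = [synthesize_view(img, radius_map, a) for a in np.arange(0, 360, 45)]
```

Played back in sequence, such views approximate a NIMAT-style image motion, with the rotation angle controlling the apparent viewpoint.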
Related papers
- MBA-SLAM: Motion Blur Aware Dense Visual SLAM with Radiance Fields Representation [15.752529196306648]
We propose a dense visual SLAM pipeline (i.e. MBA-SLAM) to handle severe motion-blurred inputs.
Our approach integrates an efficient motion blur-aware tracker with either neural fields or Gaussian Splatting based mapper.
We show that MBA-SLAM surpasses previous state-of-the-art methods in both camera localization and map reconstruction.
arXiv Detail & Related papers (2024-11-13T01:38:06Z)
- ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with existing works, our approach restores much sharper 3D scenes with roughly 10 times less training time and GPU memory consumption.
arXiv Detail & Related papers (2023-09-16T11:17:25Z)
- GBSD: Generative Bokeh with Stage Diffusion [16.189787907983106]
The bokeh effect is an artistic technique that blurs out-of-focus areas in a photograph.
We present GBSD, the first generative text-to-image model that synthesizes photorealistic images with a bokeh style.
arXiv Detail & Related papers (2023-06-14T05:34:02Z)
- Hybrid Neural Rendering for Large-Scale Scenes with Motion Blur [68.24599239479326]
We develop a hybrid neural rendering model that combines image-based and neural 3D representations to render high-quality, view-consistent images.
Our model surpasses state-of-the-art point-based methods for novel view synthesis.
arXiv Detail & Related papers (2023-04-25T08:36:33Z)
- Depth-Aware Image Compositing Model for Parallax Camera Motion Blur [4.170640862518009]
Camera motion introduces spatially varying blur due to the depth changes in the 3D world.
We present a simple, yet accurate, Image Compositing Blur (ICB) model for depth-dependent spatially varying blur.
arXiv Detail & Related papers (2023-03-16T14:15:32Z)
- DynIBaR: Neural Dynamic Image-Based Rendering [79.44655794967741]
We address the problem of synthesizing novel views from a monocular video depicting a complex dynamic scene.
We adopt a volumetric image-based rendering framework that synthesizes new viewpoints by aggregating features from nearby views.
We demonstrate significant improvements over state-of-the-art methods on dynamic scene datasets.
arXiv Detail & Related papers (2022-11-20T20:57:02Z)
- A Method For Adding Motion-Blur on Arbitrary Objects By using Auto-Segmentation and Color Compensation Techniques [6.982738885923204]
In this paper, a unified framework to add motion blur on a per-object basis is proposed.
In the method, multiple frames are captured without motion blur and accumulated to create motion blur on the target objects (see the sketch after this list).
arXiv Detail & Related papers (2021-09-22T05:52:27Z)
- Semantic View Synthesis [56.47999473206778]
We tackle a new problem of semantic view synthesis -- generating free-viewpoint rendering of a synthesized scene using a semantic label map as input.
First, we focus on synthesizing the color and depth of the visible surface of the 3D scene.
We then use the synthesized color and depth to impose explicit constraints on the multiple-plane image (MPI) representation prediction process.
arXiv Detail & Related papers (2020-08-24T17:59:46Z)
- 3D Photography using Context-aware Layered Depth Inpainting [50.66235795163143]
We propose a method for converting a single RGB-D input image into a 3D photo.
A learning-based inpainting model synthesizes new local color-and-depth content into the occluded region.
The resulting 3D photos can be efficiently rendered with motion parallax.
arXiv Detail & Related papers (2020-04-09T17:59:06Z)
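The frame-accumulation idea in "A Method For Adding Motion-Blur on Arbitrary Objects" above can be sketched as follows: sharp frames are averaged inside a segmentation mask, while the background is kept from a reference frame. This is an illustrative simplification, not the paper's method: the simple averaging, the mask handling, and the function name are assumptions, and the paper's color compensation step is omitted.

```python
# Illustrative sketch only; not the paper's exact method (color compensation omitted).
import numpy as np

def per_object_motion_blur(frames, masks):
    """Average the masked object across sharp frames to create per-object motion blur.

    frames: list of HxWx3 float arrays captured without motion blur.
    masks:  list of HxW boolean arrays marking the target object in each frame.
    """
    frames = np.stack(frames).astype(np.float64)            # (N, H, W, 3)
    masks = np.stack(masks).astype(np.float64)[..., None]   # (N, H, W, 1)

    reference = frames[0]                                    # background from the reference frame
    weight = masks.sum(axis=0)                               # how often the object covers each pixel
    accumulated = (frames * masks).sum(axis=0) / np.maximum(weight, 1e-8)

    coverage = np.clip(weight / len(frames), 0.0, 1.0)       # per-pixel blend factor
    return coverage * accumulated + (1.0 - coverage) * reference

if __name__ == "__main__":
    # Toy example: a bright square moving horizontally across a dark background.
    frames, masks = [], []
    for t in range(5):
        frame = np.zeros((32, 32, 3))
        mask = np.zeros((32, 32), dtype=bool)
        mask[12:20, 4 + 4 * t:12 + 4 * t] = True
        frame[mask] = 1.0
        frames.append(frame)
        masks.append(mask)
    blurred = per_object_motion_blur(frames, masks)
```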
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.