BGGAN: Bokeh-Glass Generative Adversarial Network for Rendering
Realistic Bokeh
- URL: http://arxiv.org/abs/2011.02242v1
- Date: Wed, 4 Nov 2020 11:56:34 GMT
- Title: BGGAN: Bokeh-Glass Generative Adversarial Network for Rendering
Realistic Bokeh
- Authors: Ming Qian, Congyu Qiao, Jiamin Lin, Zhenyu Guo, Chenghua Li, Cong
Leng, Jian Cheng
- Abstract summary: We propose a novel generator called Glass-Net, which generates bokeh images without relying on complex hardware.
Experiments show that our method is able to render a high-quality bokeh effect and process one $1024 \times 1536$ pixel image in 1.9 seconds on all smartphone chipsets.
- Score: 19.752904494597328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A photo captured with bokeh effect often means objects in focus are sharp
while the out-of-focus areas are all blurred. A DSLR camera can easily render this
kind of effect naturally. However, due to the limitation of their sensors,
smartphones cannot capture images with depth-of-field effects directly. In this
paper, we propose a novel generator called Glass-Net, which generates bokeh
images without relying on complex hardware. Meanwhile, a GAN-based method and a
perceptual loss are combined to render a realistic bokeh effect in the
fine-tuning stage of the model. Moreover, Instance Normalization (IN) is
reimplemented in our network, which ensures that our TFLite model with IN can be
accelerated on smartphone GPUs. Experiments show that our method is able to
render a high-quality bokeh effect and process one $1024 \times 1536$ pixel
image in 1.9 seconds on all smartphone chipsets. This approach ranked first in
the AIM 2020 Rendering Realistic Bokeh Challenge, Track 1 & Track 2.
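Since the abstract's key deployment detail is reimplementing Instance Normalization so the converted TFLite model runs on smartphone GPUs, here is a minimal sketch of that idea. This is an illustrative assumption, not the authors' code: IN is expressed with elementary tensor ops that a mobile GPU delegate can accelerate, rather than relying on a fused normalization op.

```python
# Minimal sketch (assumption, not the authors' implementation): Instance
# Normalization built from elementary ops so that the converted TFLite graph
# contains only GPU-delegate-friendly operations.
import tensorflow as tf

def instance_norm(x, gamma, beta, eps=1e-5):
    """x: (N, H, W, C) feature map; gamma, beta: per-channel scale and shift."""
    # Per-sample, per-channel statistics over the spatial dimensions.
    mean, var = tf.nn.moments(x, axes=[1, 2], keepdims=True)
    inv = tf.math.rsqrt(var + eps)          # 1 / sqrt(var + eps)
    return gamma * (x - mean) * inv + beta  # normalize, then scale and shift
```

Lowering IN to mean/rsqrt/multiply/add primitives is a common workaround when a mobile delegate lacks a fused normalization kernel; the paper's actual reimplementation may differ.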
Related papers
- Variable Aperture Bokeh Rendering via Customized Focal Plane Guidance [18.390543681127976]
The proposed method has achieved competitive state-of-the-art performance with only 4.4M parameters, which is much lighter than mainstream computational bokeh models.
arXiv Detail & Related papers (2024-10-18T12:04:23Z) - EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis [72.53316783628803]
We present Exact Volumetric Ellipsoid Rendering (EVER), a method for real-time differentiable emission-only volume rendering.
Unlike the recent rasterization-based approach of 3D Gaussian Splatting (3DGS), our primitive-based representation allows for exact volume rendering.
We show that our method is more accurate, with fewer blending issues, than 3DGS and follow-up work on view rendering.
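For background, this is the standard emission-only volume rendering integral (textbook material, not quoted from the EVER abstract): along a ray with radiance $c(t)$ and density $\sigma(t)$, the rendered color is
$$C = \int_0^\infty c(t)\,\sigma(t)\,\exp\!\Big(-\int_0^t \sigma(s)\,ds\Big)\,dt.$$
When the ray crosses constant-density primitives such as ellipsoids, $\sigma$ is piecewise constant over segments of length $\Delta t_i$, and the integral has the closed form
$$C = \sum_i c_i \left(1 - e^{-\sigma_i \Delta t_i}\right) \prod_{j<i} e^{-\sigma_j \Delta t_j},$$
which can be evaluated exactly instead of being approximated by alpha-compositing sorted splats.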
arXiv Detail & Related papers (2024-10-02T17:59:09Z) - ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with the existing works, our approach restores much sharper 3D scenes with the order of 10 times less training time and GPU memory consumption.
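The 6-DOF blur formulation mentioned above can be sketched as averaging sharp renderings at poses sampled along the camera trajectory during the exposure. The snippet below is a simplified assumption of that model, not ExBluRF's implementation; `render_fn` and `trajectory` are hypothetical callables.

```python
# Simplified sketch (assumption): a motion-blurred image as the mean of sharp
# renderings at poses sampled along the camera's 6-DOF exposure trajectory.
import numpy as np

def blurred_render(render_fn, trajectory, n_samples=32):
    """render_fn: pose -> (H, W, 3) image; trajectory: t in [0, 1] -> 6-DOF pose."""
    ts = np.linspace(0.0, 1.0, n_samples)        # sample times within the exposure
    frames = [render_fn(trajectory(t)) for t in ts]
    return np.mean(frames, axis=0)               # discrete exposure integral
```

Optimizing the trajectory and radiance field so that this average matches the blurry observation is what allows the sharp scene to be recovered.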
arXiv Detail & Related papers (2023-09-16T11:17:25Z) - BokehOrNot: Transforming Bokeh Effect with Image Transformer and Lens
Metadata Embedding [2.3784282912975345]
The bokeh effect is an optical phenomenon that offers a pleasant visual experience, typically generated by high-end cameras with wide-aperture lenses.
We propose a novel universal method for embedding lens metadata into the model and introducing a loss calculation method using alpha masks.
Based on the above techniques, we propose the BokehOrNot model, which is capable of producing both blur-to-sharp and sharp-to-blur bokeh effects.
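As a rough illustration of lens-metadata embedding (the summary does not describe BokehOrNot's exact mechanism, so the module below is hypothetical), the metadata can be projected to a channel-sized vector and added to intermediate features:

```python
# Hypothetical sketch: conditioning image features on lens metadata
# (e.g., aperture, focal length, focus distance); not BokehOrNot's actual module.
import torch
import torch.nn as nn

class LensMetadataEmbed(nn.Module):
    def __init__(self, n_meta=3, channels=64):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(n_meta, channels), nn.ReLU(),
            nn.Linear(channels, channels),
        )

    def forward(self, feat, meta):
        # feat: (B, C, H, W) features; meta: (B, n_meta) normalized lens values.
        emb = self.proj(meta)[:, :, None, None]  # broadcast over spatial dims
        return feat + emb
```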
arXiv Detail & Related papers (2023-06-06T21:49:56Z) - Realistic Bokeh Effect Rendering on Mobile GPUs, Mobile AI & AIM 2022
challenge: Report [75.79829464552311]
The goal of this challenge was to develop an efficient end-to-end AI-based rendering approach that can run on modern smartphone models.
The resulting models were evaluated on the Kirin 9000's Mali GPU, which provides excellent acceleration results for the majority of common deep learning ops.
arXiv Detail & Related papers (2022-11-07T22:42:02Z) - Bokeh-Loss GAN: Multi-Stage Adversarial Training for Realistic
Edge-Aware Bokeh [3.8811606213997587]
We tackle the problem of monocular bokeh synthesis, where we attempt to render a shallow depth of field image from a single all-in-focus image.
Unlike in DSLR cameras, this effect cannot be captured directly in mobile cameras due to the physical constraints of the mobile aperture.
We propose a network-based approach that is capable of rendering realistic monocular bokeh from single image inputs.
arXiv Detail & Related papers (2022-08-25T20:57:07Z) - Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels [48.063176079878055]
One of the primary effects applied to images captured in portrait mode is a synthetic shallow depth of field (DoF).
In this work, we follow the trend of rendering the NIMAT effect by introducing a modification on the blur synthesis procedure in portrait mode.
Our modification enables a high-quality synthesis of multi-view bokeh from a single image by applying rotated blurring kernels.
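The rotated-kernel idea can be sketched as follows; this toy assumption ignores the depth-dependent, per-pixel kernels a real portrait pipeline uses, and simply blurs the whole image with one rotated copy of a base kernel per view.

```python
# Toy sketch (assumption): multi-view bokeh by blurring with rotated kernels.
import numpy as np
from scipy.ndimage import convolve, rotate

def multi_view_bokeh(image, base_kernel, angles):
    """image: (H, W), single channel for brevity; one blurred view per angle."""
    views = []
    for a in angles:
        k = rotate(base_kernel, a, reshape=False)  # rotated aperture shape
        k = np.clip(k, 0.0, None)                  # spline rotation can go negative
        k /= k.sum()                               # preserve brightness
        views.append(convolve(image, k, mode='nearest'))
    return views
```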
arXiv Detail & Related papers (2021-11-15T15:23:55Z) - AIM 2020 Challenge on Rendering Realistic Bokeh [95.87775182820518]
This paper reviews the second AIM realistic bokeh effect rendering challenge.
The goal was to learn a realistic shallow focus technique using a large-scale EBB! bokeh dataset.
The participants had to render bokeh effect based on only one single frame without any additional data from other cameras or sensors.
arXiv Detail & Related papers (2020-11-10T09:15:38Z) - Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest in the photo.
Mobile cameras are unable to produce shallow depth-of-field photos due to a very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z) - Depth-aware Blending of Smoothed Images for Bokeh Effect Generation [10.790210744021072]
In this paper, an end-to-end deep learning framework is proposed to generate high-quality bokeh effect from images.
The network is lightweight and can process an HD image in 0.03 seconds.
This approach ranked second in AIM 2019 Bokeh effect challenge-Perceptual Track.
arXiv Detail & Related papers (2020-05-28T18:11:05Z)
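A classical version of depth-aware blending can be sketched as follows; this is an assumption for illustration, not the paper's learned end-to-end network: pre-blur the image at a few strengths, then per pixel blend the two nearest blur levels according to distance from the focal plane.

```python
# Rough sketch (assumption): classical depth-aware blending of smoothed images.
import numpy as np
from scipy.ndimage import gaussian_filter

def depth_aware_bokeh(image, depth, focal_depth, sigmas=(0.0, 2.0, 4.0, 8.0)):
    """image: (H, W, 3) float; depth: (H, W) in [0, 1]; focal_depth: scalar."""
    # sigma 0 keeps the sharp original; larger sigmas give stronger bokeh.
    stack = [image if s == 0 else gaussian_filter(image, sigma=(s, s, 0))
             for s in sigmas]
    d = np.abs(depth - focal_depth)                  # distance from focal plane
    idx = d / (d.max() + 1e-6) * (len(sigmas) - 1)   # fractional blur level
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(sigmas) - 1)
    w = (idx - lo)[..., None]                        # blend weight toward hi
    out = np.zeros_like(image)
    for i, blurred in enumerate(stack):
        out += np.where((lo == i)[..., None], (1 - w) * blurred, 0.0)
        out += np.where((hi == i)[..., None], w * blurred, 0.0)
    return out
```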