Bokeh-Loss GAN: Multi-Stage Adversarial Training for Realistic
Edge-Aware Bokeh
- URL: http://arxiv.org/abs/2208.12343v1
- Date: Thu, 25 Aug 2022 20:57:07 GMT
- Title: Bokeh-Loss GAN: Multi-Stage Adversarial Training for Realistic
Edge-Aware Bokeh
- Authors: Brian Lee, Fei Lei, Huaijin Chen, and Alexis Baudron
- Abstract summary: We tackle the problem of monocular bokeh synthesis, where we attempt to render a shallow depth of field image from a single all-in-focus image.
Unlike with DSLR cameras, this effect cannot be captured directly by mobile cameras due to the physical constraints of the mobile aperture.
We propose a network-based approach that is capable of rendering realistic monocular bokeh from single image inputs.
- Score: 3.8811606213997587
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we tackle the problem of monocular bokeh synthesis, where we
attempt to render a shallow depth-of-field image from a single all-in-focus
image. Unlike with DSLR cameras, this effect cannot be captured directly by
mobile cameras due to the physical constraints of the mobile aperture. We thus
propose a network-based approach that is capable of rendering realistic
monocular bokeh from single image inputs. To do this, we introduce three new
edge-aware Bokeh Losses based on a predicted monocular depth map, which sharpen
the foreground edges while blurring the background. This model is then
fine-tuned using an adversarial loss to generate a realistic Bokeh effect.
Experimental results show that our approach is capable of generating a
pleasing, natural Bokeh effect with sharp edges while handling complicated
scenes.
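The core idea above, blurring the background while keeping depth-selected foreground edges sharp, can be sketched as a simple depth-masked compositing step. This is an illustrative toy in numpy, not the paper's learned network or its edge-aware losses; the box blur, `focus_depth`, and `softness` parameters are assumptions for the sketch.

```python
import numpy as np

def box_blur(img, radius):
    """Average over a (2r+1)^2 window with edge padding.
    Sketch-quality blur; too slow for production use."""
    h, w = img.shape[:2]
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def render_bokeh(image, depth, focus_depth, radius=4, softness=0.1):
    """Composite a sharp subject over a blurred background.
    `depth` is a per-pixel map in [0, 1]; pixels whose depth is near
    `focus_depth` stay sharp, the rest fade into the blurred copy."""
    blurred = box_blur(image.astype(float), radius)
    # soft alpha matte from each pixel's distance to the focal plane
    alpha = np.clip((np.abs(depth - focus_depth) - softness) / softness, 0.0, 1.0)
    alpha = alpha[..., None]
    return (1.0 - alpha) * image + alpha * blurred
```

A hard binary mask would produce the halo artifacts the paper's edge-aware losses are designed to avoid; the soft matte here is the simplest stand-in for that idea.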
Related papers
- ExBluRF: Efficient Radiance Fields for Extreme Motion Blurred Images [58.24910105459957]
We present ExBluRF, a novel view synthesis method for extreme motion blurred images.
Our approach consists of two main components: 6-DOF camera trajectory-based motion blur formulation and voxel-based radiance fields.
Compared with the existing works, our approach restores much sharper 3D scenes with on the order of 10 times less training time and GPU memory consumption.
arXiv Detail & Related papers (2023-09-16T11:17:25Z)
- Adaptive Window Pruning for Efficient Local Motion Deblurring [81.35217764881048]
Local motion blur commonly occurs in real-world photography due to the mixing between moving objects and stationary backgrounds during exposure.
Existing image deblurring methods predominantly focus on global deblurring.
This paper aims to adaptively and efficiently restore high-resolution locally blurred images.
arXiv Detail & Related papers (2023-06-25T15:24:00Z)
- BokehOrNot: Transforming Bokeh Effect with Image Transformer and Lens Metadata Embedding [2.3784282912975345]
Bokeh effect is an optical phenomenon that offers a pleasant visual experience, typically generated by high-end cameras with wide aperture lenses.
We propose a novel universal method for embedding lens metadata into the model and introducing a loss calculation method using alpha masks.
Based on the above techniques, we propose the BokehOrNot model, which is capable of producing both blur-to-sharp and sharp-to-blur bokeh effects.
arXiv Detail & Related papers (2023-06-06T21:49:56Z)
- WildLight: In-the-wild Inverse Rendering with a Flashlight [77.31815397135381]
We propose a practical photometric solution for in-the-wild inverse rendering under unknown ambient lighting.
Our system recovers scene geometry and reflectance using only multi-view images captured by a smartphone.
We demonstrate by extensive experiments that our method is easy to implement, casual to set up, and consistently outperforms existing in-the-wild inverse rendering techniques.
arXiv Detail & Related papers (2023-03-24T17:59:56Z)
- Bokeh Rendering Based on Adaptive Depth Calibration Network [13.537088629080122]
Bokeh rendering is a popular technique used in photography to create an aesthetically pleasing effect.
Mobile phones are not able to capture natural shallow depth-of-field photos.
We propose a novel method for bokeh rendering using the Vision Transformer, a recent and powerful deep learning architecture.
arXiv Detail & Related papers (2023-02-21T16:33:51Z)
- Realistic Bokeh Effect Rendering on Mobile GPUs, Mobile AI & AIM 2022 challenge: Report [75.79829464552311]
The goal of this challenge was to develop an efficient end-to-end AI-based rendering approach that can run on modern smartphone models.
The resulting model was evaluated on the Kirin 9000's Mali GPU that provides excellent acceleration results for the majority of common deep learning ops.
arXiv Detail & Related papers (2022-11-07T22:42:02Z)
- AIM 2020 Challenge on Rendering Realistic Bokeh [95.87775182820518]
This paper reviews the second AIM realistic bokeh effect rendering challenge.
The goal was to learn a realistic shallow focus technique using a large-scale EBB! bokeh dataset.
The participants had to render bokeh effect based on only one single frame without any additional data from other cameras or sensors.
arXiv Detail & Related papers (2020-11-10T09:15:38Z)
- BGGAN: Bokeh-Glass Generative Adversarial Network for Rendering Realistic Bokeh [19.752904494597328]
We propose a novel generator called Glass-Net, which generates bokeh images not relying on complex hardware.
Experiments show that our method is able to render a high-quality bokeh effect and process one $1024 \times 1536$ pixel image in 1.9 seconds on all smartphone chipsets.
arXiv Detail & Related papers (2020-11-04T11:56:34Z)
- Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest on the photo.
Mobile cameras are unable to produce shallow depth-of-field photos due to a very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z)
- Depth-aware Blending of Smoothed Images for Bokeh Effect Generation [10.790210744021072]
In this paper, an end-to-end deep learning framework is proposed to generate high-quality bokeh effect from images.
The network is lightweight and can process an HD image in 0.03 seconds.
This approach ranked second in AIM 2019 Bokeh effect challenge-Perceptual Track.
arXiv Detail & Related papers (2020-05-28T18:11:05Z)
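The depth-aware blending idea in the last entry, picking a heavier blur for pixels farther from the focal plane rather than a single binary mask, can be sketched as a per-pixel blend over a stack of progressively smoothed images. This is a minimal numpy illustration of the general technique, not that paper's trained network; the blur radii and the linear depth-to-level mapping are assumptions.

```python
import numpy as np

def box_blur(img, radius):
    """Average over a (2r+1)^2 window with edge padding (sketch-quality blur)."""
    h, w = img.shape[:2]
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    k = 2 * radius + 1
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (k * k)

def blend_by_depth(image, depth, radii=(0, 2, 4, 8)):
    """Per-pixel blend of a stack of progressively smoothed images:
    deeper pixels draw from more heavily blurred levels.
    `depth` is in [0, 1]: 0 = in focus, 1 = farthest."""
    levels = [image.astype(float) if r == 0 else box_blur(image, r) for r in radii]
    stack = np.stack(levels)            # (levels, H, W, C)
    idx = depth * (len(radii) - 1)      # fractional blur level per pixel
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(radii) - 1)
    t = (idx - lo)[..., None]
    rows, cols = np.indices(depth.shape)
    # linear interpolation between the two nearest blur levels
    return (1.0 - t) * stack[lo, rows, cols] + t * stack[hi, rows, cols]
```

Interpolating between adjacent blur levels avoids the visible bands that a hard per-depth assignment would create, which is the motivation for blending smoothed images in the first place.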
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.