BokehOrNot: Transforming Bokeh Effect with Image Transformer and Lens
Metadata Embedding
- URL: http://arxiv.org/abs/2306.04032v1
- Date: Tue, 6 Jun 2023 21:49:56 GMT
- Title: BokehOrNot: Transforming Bokeh Effect with Image Transformer and Lens
Metadata Embedding
- Authors: Zhihao Yang, Wenyi Lian, Siyuan Lai
- Abstract summary: The bokeh effect is an optical phenomenon that offers a pleasant visual experience, typically produced by high-end cameras with wide-aperture lenses.
We propose a novel universal method for embedding lens metadata into the model and introduce a loss calculation method using alpha masks.
Based on these techniques, we propose the BokehOrNot model, which can produce both blur-to-sharp and sharp-to-blur bokeh effects.
- Score: 2.3784282912975345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The bokeh effect is an optical phenomenon that offers a pleasant visual
experience, typically produced by high-end cameras with wide-aperture lenses.
The bokeh effect transformation task aims to render the effect of one
lens-and-aperture combination from an image captured with another. Current
models are limited to rendering a specific set of bokeh effects, primarily
sharp-to-blur transformations. In this paper, we propose a novel universal
method for embedding lens metadata into the model and introduce a loss
calculation method using alpha masks from the newly released Bokeh Effect
Transformation Dataset (BETD) [3]. Based on these techniques, we propose the
BokehOrNot model, which can produce both blur-to-sharp and sharp-to-blur bokeh
effects with various combinations of lenses and aperture sizes. Our proposed
model outperforms current leading bokeh rendering and image restoration models
and renders visually natural bokeh effects. Our code is available at:
https://github.com/indicator0/bokehornot.
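The abstract does not spell out how the lens metadata is injected or how the alpha-mask loss is computed, so the sketch below is only an illustrative reading of those two ideas, not the released implementation (see the repository linked above). It assumes PyTorch and hypothetical names: a `LensMetadataEmbedding` module that maps lens ids and f-numbers to a conditioning vector, and an `alpha_masked_l1` loss that restricts the reconstruction error to the region covered by a BETD-style alpha mask.

```python
# Illustrative sketch only (assumed PyTorch; not the BokehOrNot release):
# (1) embed lens metadata as a conditioning vector, (2) weight an L1 loss
# with an alpha mask so only the masked region drives the gradient.
import torch
import torch.nn as nn


class LensMetadataEmbedding(nn.Module):
    """Hypothetical module: maps (source lens, target lens, source f-number,
    target f-number) to a vector that a network could add to its features."""

    def __init__(self, num_lenses: int = 2, embed_dim: int = 64):
        super().__init__()
        self.lens_embed = nn.Embedding(num_lenses, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(2 * embed_dim + 2, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, src_lens, tgt_lens, src_fnum, tgt_fnum):
        # src_lens, tgt_lens: (B,) int64 lens ids; src_fnum, tgt_fnum: (B,) floats.
        x = torch.cat(
            [
                self.lens_embed(src_lens),
                self.lens_embed(tgt_lens),
                src_fnum.unsqueeze(1),
                tgt_fnum.unsqueeze(1),
            ],
            dim=1,
        )
        return self.mlp(x)  # (B, embed_dim)


def alpha_masked_l1(pred, target, alpha, eps=1e-8):
    """L1 loss computed only where the alpha mask is non-zero.

    pred, target: (B, 3, H, W); alpha: (B, 1, H, W) with values in [0, 1].
    """
    diff = (pred - target).abs() * alpha
    return diff.sum() / (alpha.sum() * pred.shape[1] + eps)


if __name__ == "__main__":
    emb = LensMetadataEmbedding()
    cond = emb(torch.tensor([0]), torch.tensor([1]),
               torch.tensor([1.8]), torch.tensor([16.0]))
    print(cond.shape)  # torch.Size([1, 64])

    pred = torch.rand(1, 3, 32, 32)
    target = torch.rand(1, 3, 32, 32)
    alpha = torch.ones(1, 1, 32, 32)
    print(float(alpha_masked_l1(pred, target, alpha)))
```

How the conditioning vector is injected into the transformer blocks, and how the BETD alpha masks are produced, are details of the actual model; everything above is an assumption for illustration.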
Related papers
- Variable Aperture Bokeh Rendering via Customized Focal Plane Guidance [18.390543681127976]
The proposed method has achieved competitive state-of-the-art performance with only 4.4M parameters, which is much lighter than mainstream computational bokeh models.
arXiv Detail & Related papers (2024-10-18T12:04:23Z) - Adaptive Window Pruning for Efficient Local Motion Deblurring [81.35217764881048]
Local motion blur commonly occurs in real-world photography due to the mixing between moving objects and stationary backgrounds during exposure.
Existing image deblurring methods predominantly focus on global deblurring.
This paper aims to adaptively and efficiently restore high-resolution locally blurred images.
arXiv Detail & Related papers (2023-06-25T15:24:00Z) - GBSD: Generative Bokeh with Stage Diffusion [16.189787907983106]
The bokeh effect is an artistic technique that blurs out-of-focus areas in a photograph.
We present GBSD, the first generative text-to-image model that synthesizes photorealistic images with a bokeh style.
arXiv Detail & Related papers (2023-06-14T05:34:02Z) - Realistic Bokeh Effect Rendering on Mobile GPUs, Mobile AI & AIM 2022
challenge: Report [75.79829464552311]
The goal of this challenge was to develop an efficient end-to-end AI-based rendering approach that can run on modern smartphone models.
The resulting model was evaluated on the Kirin 9000's Mali GPU that provides excellent acceleration results for the majority of common deep learning ops.
arXiv Detail & Related papers (2022-11-07T22:42:02Z) - Multi-View Motion Synthesis via Applying Rotated Dual-Pixel Blur Kernels [48.063176079878055]
One of the primary effects applied to images captured in portrait mode is a synthetic shallow depth of field (DoF).
In this work, we follow the trend of rendering the NIMAT effect by introducing a modification to the blur synthesis procedure in portrait mode.
Our modification enables a high-quality synthesis of multi-view bokeh from a single image by applying rotated blurring kernels.
arXiv Detail & Related papers (2021-11-15T15:23:55Z) - AIM 2020 Challenge on Rendering Realistic Bokeh [95.87775182820518]
This paper reviews the second AIM realistic bokeh effect rendering challenge.
The goal was to learn a realistic shallow focus technique using a large-scale EBB! bokeh dataset.
The participants had to render the bokeh effect based on a single frame, without any additional data from other cameras or sensors.
arXiv Detail & Related papers (2020-11-10T09:15:38Z) - BGGAN: Bokeh-Glass Generative Adversarial Network for Rendering
Realistic Bokeh [19.752904494597328]
We propose a novel generator called Glass-Net, which generates bokeh images without relying on complex hardware.
Experiments show that our method is able to render a high-quality bokeh effect and process one $1024 \times 1536$ pixel image in 1.9 seconds on all smartphone chipsets.
arXiv Detail & Related papers (2020-11-04T11:56:34Z) - Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest in the photo.
Mobile cameras are unable to produce shallow depth-of-field photos due to the very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z) - Depth-aware Blending of Smoothed Images for Bokeh Effect Generation [10.790210744021072]
In this paper, an end-to-end deep learning framework is proposed to generate a high-quality bokeh effect from images.
The network is lightweight and can process an HD image in 0.03 seconds.
This approach ranked second in the AIM 2019 Bokeh Effect Challenge (Perceptual Track).
arXiv Detail & Related papers (2020-05-28T18:11:05Z) - Deblurring by Realistic Blurring [110.54173799114785]
We propose a new method which combines two GAN models: a learning-to-Blur GAN (BGAN) and a learning-to-DeBlur GAN (DBGAN).
The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images (a rough sketch of this two-stage idea follows at the end of this list).
As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images.
arXiv Detail & Related papers (2020-04-04T05:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the generated content (including all information) and is not responsible for any consequences of its use.