Variable Aperture Bokeh Rendering via Customized Focal Plane Guidance
- URL: http://arxiv.org/abs/2410.14400v1
- Date: Fri, 18 Oct 2024 12:04:23 GMT
- Title: Variable Aperture Bokeh Rendering via Customized Focal Plane Guidance
- Authors: Kang Chen, Shijun Yan, Aiwen Jiang, Han Li, Zhifeng Wang
- Abstract summary: The proposed method has achieved competitive state-of-the-art performance with only 4.4M parameters, which is much lighter than mainstream computational bokeh models.
- Score: 18.390543681127976
- Abstract: Bokeh rendering is one of the most popular techniques in photography. It can make photographs visually appealing by drawing viewers' attention to a particular area of the image. However, achieving a satisfactory bokeh effect is usually challenging, since mobile cameras are constrained by restricted optical systems, while expensive high-end DSLR lenses with large apertures would otherwise be needed. Therefore, many deep learning-based computational photography methods have been developed to mimic the bokeh effect in recent years. Nevertheless, most of these methods are limited to rendering the bokeh effect at a single fixed aperture. There is a lack of user-friendly bokeh rendering methods that provide precise focal-plane control and customized bokeh generation, as well as a lack of authentic, realistic bokeh datasets that could promote bokeh learning across variable apertures. To address these two issues, this paper proposes an effective controllable bokeh rendering method and contributes a Variable Aperture Bokeh Dataset (VABD). In the proposed method, users can customize the focal plane to accurately locate the subjects of interest and select the target aperture for bokeh rendering. Experimental results on the public EBB! benchmark dataset and our constructed dataset VABD demonstrate that the customized focal plane, together with the aperture prompt, can bootstrap the model to simulate realistic bokeh effects. The proposed method achieves competitive state-of-the-art performance with only 4.4M parameters, which is much lighter than mainstream computational bokeh models. The contributed dataset and source code will be released on GitHub: https://github.com/MoTong-AI-studio/VABM.
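For intuition about what focal-plane and aperture control mean physically, here is a minimal classical sketch, assuming a per-pixel depth map is available. It is a thin-lens approximation, not the paper's learned model, and the function name, blur-level blending, and parameters are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def classical_bokeh(image, depth, focus_dist, aperture, max_sigma=8.0):
    """Thin-lens style variable-aperture bokeh (illustrative sketch only).

    image:      (H, W, 3) float array in [0, 1]
    depth:      (H, W) scene depth, in the same units as focus_dist
    focus_dist: user-chosen focal-plane distance
    aperture:   relative aperture in pixel units; larger values give a
                shallower depth of field (lens constants are folded in)
    """
    # Circle of confusion grows with aperture and with the inverse-depth
    # distance from the focal plane: c ~ A * |1/z_f - 1/z|.
    coc = np.abs(1.0 / focus_dist - 1.0 / np.clip(depth, 1e-3, None))
    sigma = np.clip(aperture * coc, 0.0, max_sigma)

    # Blend a small stack of pre-blurred copies with linear "hat" weights,
    # so each pixel receives approximately its own blur strength.
    levels = np.linspace(0.0, max_sigma, 5)
    step = levels[1] - levels[0]
    out = np.zeros_like(image)
    for s in levels:
        w = np.clip(1.0 - np.abs(sigma - s) / step, 0.0, 1.0)
        if not np.any(w):
            continue
        blurred = image if s == 0 else np.stack(
            [gaussian_filter(image[..., c], s) for c in range(3)], axis=-1)
        out += blurred * w[..., None]
    return out
```

Sweeping `aperture` upward from 0 widens the blur continuously while pixels near `focus_dist` stay sharp, which is the kind of control the paper exposes through its focal-plane guidance and aperture prompt.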
Related papers
- BokehOrNot: Transforming Bokeh Effect with Image Transformer and Lens Metadata Embedding [2.3784282912975345]
The bokeh effect is an optical phenomenon that offers a pleasant visual experience, typically produced by high-end cameras with wide-aperture lenses.
We propose a novel universal method for embedding lens metadata into the model and introduce a loss calculation method using alpha masks.
Based on these techniques, we propose the BokehOrNot model, which is capable of producing both blur-to-sharp and sharp-to-blur bokeh effects; an illustrative sketch of an alpha-mask-weighted loss follows this entry.
arXiv Detail & Related papers (2023-06-06T21:49:56Z)
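As a rough illustration of what a loss calculated with alpha masks can look like, here is a short sketch in PyTorch; the weighting scheme, default weights, and function name are assumptions for illustration, not the BokehOrNot formulation.

```python
import torch

def alpha_masked_l1(pred, target, alpha, fg_weight=1.0, bg_weight=2.0):
    """Mask-weighted L1 reconstruction loss (illustrative assumption only).

    pred, target: (B, 3, H, W) rendered and ground-truth bokeh images
    alpha:        (B, 1, H, W) subject mask in [0, 1], 1 = in-focus subject
    """
    # Weight out-of-focus regions more heavily, since bokeh quality is
    # judged mainly on how the background is rendered.
    weight = fg_weight * alpha + bg_weight * (1.0 - alpha)
    return (weight * (pred - target).abs()).mean()
```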
- Realistic Bokeh Effect Rendering on Mobile GPUs, Mobile AI & AIM 2022 challenge: Report [75.79829464552311]
The goal of this challenge was to develop an efficient end-to-end AI-based rendering approach that can run on modern smartphone models.
The resulting model was evaluated on the Kirin 9000's Mali GPU that provides excellent acceleration results for the majority of common deep learning ops.
arXiv Detail & Related papers (2022-11-07T22:42:02Z)
- MC-Blur: A Comprehensive Benchmark for Image Deblurring [127.6301230023318]
In most real-world images, blur is caused by different factors, e.g., motion and defocus.
We construct a new large-scale multi-cause image deblurring dataset (called MC-Blur).
Based on the MC-Blur dataset, we conduct extensive benchmarking studies to compare SOTA methods in different scenarios.
arXiv Detail & Related papers (2021-12-01T02:10:42Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model is used to determine the patch blurriness, which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the degree of blurriness for each pixel; a rough sketch of this refinement step follows this entry.
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
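To make the refinement step concrete, here is a minimal sketch assuming OpenCV's guided filter from opencv-contrib-python and a patch-classification map already upsampled to image size; the function name and parameters are illustrative, not the paper's exact algorithm.

```python
import numpy as np
import cv2  # cv2.ximgproc.guidedFilter requires opencv-contrib-python

def refine_defocus_map(image, patch_levels, radius=16, eps=1e-3, iters=3):
    """Refine a blocky patch-level blurriness map into a per-pixel defocus
    map with iterated guided filtering (illustrative sketch only).

    image:        (H, W, 3) uint8 BGR image, used as the filter guide
    patch_levels: (H, W) float32 patch blur levels scaled to [0, 1], e.g.
                  the 20 classifier levels upsampled to image resolution
    """
    guide = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY).astype(np.float32) / 255.0
    defocus = patch_levels.astype(np.float32)
    for _ in range(iters):
        # Edge-aware smoothing: structure in the guide keeps defocus
        # boundaries aligned with the image while flattening patch-grid
        # artifacts.
        defocus = cv2.ximgproc.guidedFilter(guide, defocus, radius, eps)
    return np.clip(defocus, 0.0, 1.0)
```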
- AIM 2020 Challenge on Rendering Realistic Bokeh [95.87775182820518]
This paper reviews the second AIM realistic bokeh effect rendering challenge.
The goal was to learn a realistic shallow focus technique using a large-scale EBB! bokeh dataset.
The participants had to render bokeh effect based on only one single frame without any additional data from other cameras or sensors.
arXiv Detail & Related papers (2020-11-10T09:15:38Z)
- BGGAN: Bokeh-Glass Generative Adversarial Network for Rendering Realistic Bokeh [19.752904494597328]
We propose a novel generator called Glass-Net, which generates bokeh images without relying on complex hardware.
Experiments show that our method is able to render a high-quality bokeh effect and process one $1024 \times 1536$ pixel image in 1.9 seconds on all smartphone chipsets.
arXiv Detail & Related papers (2020-11-04T11:56:34Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into DBD for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets; an illustrative distillation-style loss is sketched after this entry.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
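A sketch of the distillation idea, assuming a PyTorch setup in which a frozen depth estimator acts as teacher; the loss weight, names, and the choice of L1 for the depth term are assumptions rather than the paper's exact losses.

```python
import torch.nn.functional as F

def dbd_distillation_loss(blur_logits, gt_blur, pred_depth, teacher_depth, w=0.5):
    """Defocus-blur detection loss with depth distillation (illustrative).

    blur_logits:   (B, 1, H, W) raw predictions of the defocus blur map
    gt_blur:       (B, 1, H, W) binary ground-truth blur annotation
    pred_depth:    (B, 1, H, W) depth from an auxiliary student head
    teacher_depth: (B, 1, H, W) depth from a frozen, well-trained estimator
    """
    blur_loss = F.binary_cross_entropy_with_logits(blur_logits, gt_blur)
    # Distillation term: the student mimics the teacher's depth, injecting
    # scene-depth cues that correlate with defocus blur.
    distill_loss = F.l1_loss(pred_depth, teacher_depth)
    return blur_loss + w * distill_loss
```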
- Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest in the photo.
Mobile cameras are unable to produce shallow depth-of-field photos due to a very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z)
- Depth-aware Blending of Smoothed Images for Bokeh Effect Generation [10.790210744021072]
In this paper, an end-to-end deep learning framework is proposed to generate a high-quality bokeh effect from images.
The network is lightweight and can process an HD image in 0.03 seconds.
This approach ranked second in the AIM 2019 Bokeh Effect Challenge (Perceptual Track).
arXiv Detail & Related papers (2020-05-28T18:11:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.