AIM 2020 Challenge on Rendering Realistic Bokeh
- URL: http://arxiv.org/abs/2011.04988v1
- Date: Tue, 10 Nov 2020 09:15:38 GMT
- Title: AIM 2020 Challenge on Rendering Realistic Bokeh
- Authors: Andrey Ignatov, Radu Timofte, Ming Qian, Congyu Qiao, Jiamin Lin,
Zhenyu Guo, Chenghua Li, Cong Leng, Jian Cheng, Juewen Peng, Xianrui Luo, Ke
Xian, Zijin Wu, Zhiguo Cao, Densen Puthussery, Jiji C V, Hrishikesh P S,
Melvin Kuriakose, Saikat Dutta, Sourya Dipta Das, Nisarg A. Shah, Kuldeep
Purohit, Praveen Kandula, Maitreya Suin, A. N. Rajagopalan, Saagara M B,
Minnu A L, Sanjana A R, Praseeda S, Ge Wu, Xueqin Chen, Tengyao Wang, Max
Zheng, Hulk Wong, Jay Zou
- Abstract summary: This paper reviews the second AIM realistic bokeh effect rendering challenge.
The goal was to learn a realistic shallow focus technique using a large-scale EBB! bokeh dataset.
The participants had to render the bokeh effect from a single frame, without any additional data from other cameras or sensors.
- Score: 95.87775182820518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper reviews the second AIM realistic bokeh effect rendering challenge
and provides the description of the proposed solutions and results. The
participating teams were solving a real-world bokeh simulation problem, where
the goal was to learn a realistic shallow focus technique using a large-scale
EBB! bokeh dataset consisting of 5K shallow / wide depth-of-field image pairs
captured using the Canon 7D DSLR camera. The participants had to render the
bokeh effect based on a single frame, without any additional data from other
cameras or sensors. The target metric used in this challenge combined the
runtime and the perceptual quality of the solutions as measured in a user
study. To ensure the efficiency of the submitted models, we measured their
runtime on standard desktop CPUs and also ran the models on smartphone GPUs.
The proposed solutions significantly improved the baseline results, defining
the state-of-the-art for the practical bokeh effect rendering problem.
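The report defines the exact ranking formula; as a rough illustration of how a user-study score and a runtime could be folded into one number, consider the sketch below. The weighting (`alpha`, the log penalty) is purely hypothetical and is not the challenge's actual metric.

```python
import math


def combined_score(mos: float, runtime_s: float, alpha: float = 0.5) -> float:
    """Hypothetical combined metric: reward a high Mean Opinion Score (MOS)
    from the user study and penalize long runtimes. The actual challenge
    weighting differs; this only illustrates the quality/speed trade-off."""
    return mos - alpha * math.log(runtime_s)


# A faster model with the same perceptual quality ranks higher.
fast = combined_score(mos=4.0, runtime_s=1.0)
slow = combined_score(mos=4.0, runtime_s=8.0)
```

Any monotone combination (quality up, runtime down) would serve the same illustrative purpose.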
Related papers
- Variable Aperture Bokeh Rendering via Customized Focal Plane Guidance [18.390543681127976]
The proposed method has achieved competitive state-of-the-art performance with only 4.4M parameters, which is much lighter than mainstream computational bokeh models.
arXiv Detail & Related papers (2024-10-18T12:04:23Z)
- Realistic Bokeh Effect Rendering on Mobile GPUs, Mobile AI & AIM 2022 Challenge: Report [75.79829464552311]
This challenge was to develop an efficient end-to-end AI-based rendering approach that can run on modern smartphone models.
The resulting model was evaluated on the Kirin 9000's Mali GPU that provides excellent acceleration results for the majority of common deep learning ops.
arXiv Detail & Related papers (2022-11-07T22:42:02Z)
- Efficient Single-Image Depth Estimation on Mobile Devices, Mobile AI & AIM 2022 Challenge: Report [108.88637766066759]
Deep learning-based single image depth estimation solutions can show a real-time performance on IoT platforms and smartphones.
Models developed in the challenge are also compatible with any Android or Linux-based mobile devices.
arXiv Detail & Related papers (2022-11-07T22:20:07Z)
- Natural & Adversarial Bokeh Rendering via Circle-of-Confusion Predictive Network [25.319666328268116]
The bokeh effect is a shallow depth-of-field phenomenon that blurs the out-of-focus parts of a photograph.
We study a new problem: natural & adversarial bokeh rendering.
We propose a hybrid alternative by taking the respective advantages of data-driven and physical-aware methods.
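The circle of confusion that physics-aware methods model follows directly from thin-lens geometry. As a minimal sketch of that standard optics formula (this is textbook optics, not the paper's predictive network):

```python
def coc_diameter_mm(f_mm: float, n_stop: float,
                    focus_mm: float, subject_mm: float) -> float:
    """Thin-lens circle-of-confusion diameter in millimetres for a point at
    distance subject_mm when a lens of focal length f_mm and f-number n_stop
    is focused at distance focus_mm."""
    aperture_mm = f_mm / n_stop  # entrance-pupil diameter
    return (aperture_mm * f_mm * abs(subject_mm - focus_mm)
            / (subject_mm * (focus_mm - f_mm)))


# An in-focus point has zero blur; points off the focal plane blur more.
in_focus = coc_diameter_mm(f_mm=50.0, n_stop=2.0,
                           focus_mm=2000.0, subject_mm=2000.0)
behind = coc_diameter_mm(f_mm=50.0, n_stop=2.0,
                         focus_mm=2000.0, subject_mm=4000.0)
```

A larger aperture (smaller f-number) scales the blur diameter linearly, which is why DSLR lenses produce bokeh that tiny smartphone apertures cannot.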
arXiv Detail & Related papers (2021-11-25T09:08:45Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of the 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
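A hypothetical sketch of the discretization step that such a 20-way classifier targets, assuming blurriness is parameterized as a Gaussian sigma (the `sigma_max` range below is an assumption, not a value from the paper):

```python
import numpy as np


def quantize_blurriness(sigma_map: np.ndarray,
                        n_levels: int = 20,
                        sigma_max: float = 4.0) -> np.ndarray:
    """Map a continuous per-pixel blur estimate (Gaussian sigma, assumed to
    lie in [0, sigma_max]) onto n_levels discrete classes, loosely mirroring
    a 20-level blurriness classifier's output space."""
    norm = np.clip(sigma_map / sigma_max, 0.0, 1.0)
    return np.rint(norm * (n_levels - 1)).astype(np.int64)


# Sharp pixels land in class 0, maximally blurred pixels in class 19.
classes = quantize_blurriness(np.array([0.0, 2.0, 4.0]))
```

Inverting this mapping (class index back to a representative sigma) yields the kind of per-pixel defocus map described above, before any guided-filter refinement.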
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- AIM 2020 Challenge on Learned Image Signal Processing Pipeline [150.55468168329926]
This paper reviews the second AIM learned ISP challenge and provides the description of the proposed solutions and results.
The participating teams were solving a real-world RAW-to-RGB mapping problem, where the goal was to map the original low-quality RAW images captured by the Huawei P20 device to the same photos obtained with the Canon 5D DSLR camera.
The proposed solutions significantly improved the baseline results, defining the state-of-the-art for practical image signal processing pipeline modeling.
arXiv Detail & Related papers (2020-11-10T09:25:47Z)
- BGGAN: Bokeh-Glass Generative Adversarial Network for Rendering Realistic Bokeh [19.752904494597328]
We propose a novel generator called Glass-Net, which generates bokeh images without relying on complex hardware.
Experiments show that our method is able to render a high-quality bokeh effect and process one $1024 \times 1536$ pixel image in 1.9 seconds on all smartphone chipsets.
arXiv Detail & Related papers (2020-11-04T11:56:34Z)
- Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest in a photo.
Mobile cameras are unable to produce shallow depth-of-field photos due to the very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.