NTIRE 2021 Depth Guided Image Relighting Challenge
- URL: http://arxiv.org/abs/2104.13365v1
- Date: Tue, 27 Apr 2021 17:53:32 GMT
- Title: NTIRE 2021 Depth Guided Image Relighting Challenge
- Authors: Majed El Helou and Ruofan Zhou and Sabine Susstrunk and Radu Timofte
- Abstract summary: In this paper, we review the NTIRE 2021 depth guided image relighting challenge.
We rely on the VIDIT dataset for each of our two challenge tracks, including depth information.
We had nearly 250 registered participants, leading to 18 confirmed team submissions in the final competition stage.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image relighting is attracting increasing interest due to its various
applications. From a research perspective, image relighting can be exploited
both for image normalization in domain adaptation and for data augmentation. It
also has direct uses in photo montage and aesthetic
enhancement. In this paper, we review the NTIRE 2021 depth guided image
relighting challenge.
We rely on the VIDIT dataset for each of our two challenge tracks, including
depth information. The first track is on one-to-one relighting where the goal
is to transform the illumination setup of an input image (color temperature and
light source position) to the target illumination setup. In the second track,
the any-to-any relighting challenge, the objective is to transform the
illumination settings of the input image to match those of another guide image,
similar to style transfer. In both tracks, participants were given depth
information about the captured scenes. We had nearly 250 registered
participants, leading to 18 confirmed team submissions in the final competition
stage. The competitions, methods, and final results are presented in this
paper.
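The two track formulations above can be sketched as function signatures. This is a minimal illustration under assumed types, not the challenge's actual interface: the `Illumination` and `Sample` classes and both function names are hypothetical stand-ins for the free-form deep networks that participants actually submitted.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Illumination:
    """Hypothetical illumination setup, mirroring the two controlled factors."""
    color_temp_k: int   # color temperature in Kelvin, e.g. 4500
    azimuth_deg: int    # light source direction, e.g. 90

@dataclass
class Sample:
    """An input to either track: an image plus its depth map (depth was
    provided to participants in both tracks)."""
    image: List[List[float]]  # placeholder for RGB pixel data
    depth: List[List[float]]  # per-pixel depth

def relight_one_to_one(inp: Sample, target: Illumination) -> Sample:
    """Track 1: transform the input's fixed illumination setup to the
    target setup (color temperature and light source position)."""
    # A real solution is a trained network; this pass-through only
    # illustrates the input/output contract.
    return Sample(image=inp.image, depth=inp.depth)

def relight_any_to_any(inp: Sample, guide: Sample) -> Sample:
    """Track 2: transfer the guide image's illumination settings onto
    the input image, analogous to style transfer."""
    return Sample(image=inp.image, depth=inp.depth)
```

The key structural difference between the tracks is visible in the signatures: track 1 conditions on an explicit target setup, while track 2 conditions on a second image whose illumination must be inferred.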
Related papers
- NTIRE 2024 Challenge on Low Light Image Enhancement: Methods and Results
This paper reviews the NTIRE 2024 low light image enhancement challenge, highlighting the proposed solutions and results.
The aim of this challenge is to discover an effective network design or solution capable of generating brighter, clearer, and visually appealing results when dealing with a variety of conditions.
A notable total of 428 participants registered for the challenge, with 22 teams ultimately making valid submissions.
arXiv Detail & Related papers (2024-04-22T15:01:12Z)
- Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation
Relighting aims to re-illuminate the person in the image as if the person appeared in an environment with the target lighting.
Recent methods rely on deep learning to achieve high-quality results.
We propose a new approach that can perform on par with the state-of-the-art (SOTA) relighting methods without requiring a light stage.
arXiv Detail & Related papers (2022-09-21T17:15:58Z)
- NTIRE 2021 Challenge on Image Deblurring
We describe the challenge specifics and the evaluation results from the 2 competition tracks with the proposed solutions.
The two competition tracks had 338 and 238 registered participants, respectively, and 18 and 17 teams competed in the final testing phase.
The winning methods demonstrate state-of-the-art performance on the image deblurring task, including cases with jointly combined artifacts.
arXiv Detail & Related papers (2021-04-30T09:12:53Z)
- Illumination Estimation Challenge: experience of past two years
The 2nd Illumination Estimation Challenge (IEC#2) was conducted.
The challenge had several tracks: general, indoor, and two-illuminant, each focusing on different scene parameters.
Its other main features are a new large dataset of about 5000 images taken with the same camera sensor model, a manual markup accompanying each image, diverse content with scenes captured in numerous countries under a wide variety of illuminations (extracted using the SpyderCube calibration object), and a contest-like markup for the images from the Cube+ dataset that was used in IEC#1.
arXiv Detail & Related papers (2020-12-31T17:59:19Z)
- AIM 2020: Scene Relighting and Illumination Estimation Challenge
This paper presents the novel VIDIT dataset used in the AIM 2020 challenge on virtual image relighting and illumination estimation.
The first track considered one-to-one relighting; the objective was to relight an input photo of a scene with a different color temperature and illuminant orientation.
The goal of the second track was to estimate illumination settings, namely the color temperature and orientation, from a given image.
arXiv Detail & Related papers (2020-09-27T09:16:43Z)
- WDRN: A Wavelet Decomposed RelightNet for Image Relighting
We propose a wavelet decomposed RelightNet called WDRN which is a novel encoder-decoder network employing wavelet based decomposition.
We also propose a novel loss function called gray loss that ensures efficient learning of the illumination gradient along different directions of the ground-truth image.
arXiv Detail & Related papers (2020-09-14T18:23:10Z)
- Deep Relighting Networks for Image Light Source Manipulation
We propose a novel Deep Relighting Network (DRN) with three parts: 1) scene reconversion, 2) shadow prior estimation, and 3) re-renderer.
Experimental results show that the proposed method outperforms competing methods, both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-08-19T07:03:23Z)
- VIDIT: Virtual Image Dataset for Illumination Transfer
We present a novel dataset, the Virtual Image Dataset for Illumination Transfer (VIDIT).
VIDIT contains 300 virtual scenes used for training, where every scene is captured 40 times in total: from 8 equally-spaced azimuthal angles, each lit with 5 different illuminants.
arXiv Detail & Related papers (2020-05-11T21:58:03Z)
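The capture grid described above (8 equally-spaced azimuthal angles × 5 illuminants = 40 captures per scene, over 300 training scenes) can be enumerated as a quick sanity check. The specific angle and color-temperature values below are illustrative assumptions; only the counts (300, 8, 5) come from the abstract.

```python
from itertools import product

# 8 equally-spaced azimuthal light directions (0..315 degrees in 45-degree steps).
AZIMUTHS = [45 * i for i in range(8)]
# 5 illuminant color temperatures; these Kelvin values are assumed for
# illustration -- the abstract only states that there are 5 illuminants.
COLOR_TEMPS = [2500, 3500, 4500, 5500, 6500]
NUM_TRAIN_SCENES = 300

def enumerate_vidit_train():
    """Yield (scene_id, azimuth_deg, color_temp_k) for every training capture."""
    for scene in range(NUM_TRAIN_SCENES):
        # Each scene is captured once per (angle, illuminant) pair: 8 * 5 = 40.
        for az, ct in product(AZIMUTHS, COLOR_TEMPS):
            yield scene, az, ct

captures = list(enumerate_vidit_train())
per_scene = sum(1 for scene, _, _ in captures if scene == 0)
```

Enumerating the grid confirms 40 captures per scene and 12,000 training images in total under these assumptions.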
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.