Illumination Estimation Challenge: experience of past two years
- URL: http://arxiv.org/abs/2012.15779v1
- Date: Thu, 31 Dec 2020 17:59:19 GMT
- Title: Illumination Estimation Challenge: experience of past two years
- Authors: Egor Ershov, Alex Savchik, Ilya Semenkov, Nikola Banić, Karlo
Koscević, Marko Subašić, Alexander Belokopytov, Zhihao Li, Arseniy
Terekhin, Daria Senshina, Artem Nikonorov, Yanlin Qian, Marco Buzzelli,
Riccardo Riva, Simone Bianco, Raimondo Schettini, Sven Lončarić, Dmitry
Nikolaev
- Abstract summary: The 2nd Illumination Estimation Challenge (IEC#2) was conducted.
The challenge had several tracks: general, indoor, and two-illuminant with each of them focusing on different parameters of the scenes.
Other main features of it are a new large dataset of images (about 5000) taken with the same camera sensor model, a manual markup accompanying each image, diverse content with scenes taken in numerous countries under a huge variety of illuminations extracted by using the SpyderCube calibration object, and a contest-like markup for the images from the Cube+ dataset that was used in IEC#1.
- Score: 57.13714732760851
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Illumination estimation is the essential step of computational color
constancy, one of the core parts of various image processing pipelines of
modern digital cameras. Having an accurate and reliable illumination estimation
is important for reducing the illumination influence on the image colors. To
motivate the generation of new ideas and the development of new algorithms in
this field, the 2nd Illumination Estimation Challenge (IEC#2) was conducted.
The main advantage of testing a method in a challenge over testing it on some
of the known datasets is that the ground-truth illuminations for the
challenge test images remain unknown until the results have been submitted,
which prevents any potentially biased hyperparameter tuning.
The challenge had several tracks: general, indoor, and two-illuminant with
each of them focusing on different parameters of the scenes. Other main
features of it are a new large dataset of images (about 5000) taken with the
same camera sensor model, a manual markup accompanying each image, diverse
content with scenes taken in numerous countries under a huge variety of
illuminations extracted by using the SpyderCube calibration object, and a
contest-like markup for the images from the Cube+ dataset that was used in
IEC#1.
This paper focuses on the description of the past two challenges, the
algorithms that won each track, and the conclusions drawn from the results
of the 1st and 2nd challenges that can be useful for similar future
developments.
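The abstract above describes the two core operations such challenges evaluate: estimating the scene illuminant and then discounting it from the image colors. As a hedged sketch (the paper itself does not prescribe an algorithm), the snippet below illustrates the idea with the classic gray-world baseline, a diagonal (von Kries) correction, and the recovery angular error, which is the metric commonly used to score illumination estimation challenges:

```python
import numpy as np

# Illustrative only: gray-world is a classic statistics-based baseline,
# not a method from the paper. The diagonal (von Kries) correction is
# the standard way to reduce the illumination influence on image colors.

def gray_world_estimate(image):
    """Estimate the illuminant as the mean RGB of the image,
    normalized to unit length (only chromaticity matters)."""
    illum = image.reshape(-1, 3).mean(axis=0)
    return illum / np.linalg.norm(illum)

def correct_image(image, illum):
    """Diagonal correction mapping the estimated illuminant to gray."""
    scale = illum / illum.max()  # normalize so no channel is amplified
    return np.clip(image / scale, 0.0, 1.0)

def angular_error_deg(est, gt):
    """Recovery angular error in degrees between two illuminants."""
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Synthetic check: an achromatic scene lit by a reddish illuminant.
gt_illum = np.array([0.8, 0.5, 0.3])
gt_illum /= np.linalg.norm(gt_illum)
scene = np.full((4, 4, 3), 0.5) * gt_illum  # gray scene, tinted light

est = gray_world_estimate(scene)
print(round(angular_error_deg(est, gt_illum), 3))  # prints 0.0
```

On a uniform achromatic scene gray-world recovers the illuminant exactly; on real challenge images its error is of course much larger, which is what motivates the learned methods the challenges compare.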
Related papers
- Exposure Bracketing is All You Need for Unifying Image Restoration and Enhancement Tasks [50.822601495422916]
We propose to utilize exposure bracketing photography to unify image restoration and enhancement tasks.
Due to the difficulty in collecting real-world pairs, we suggest a solution that first pre-trains the model with synthetic paired data.
In particular, a temporally modulated recurrent network (TMRNet) and self-supervised adaptation method are proposed.
arXiv Detail & Related papers (2024-01-01T14:14:35Z)
- Joint Demosaicing and Denoising with Double Deep Image Priors [5.3686304202729]
Demosaicing and denoising RAW images are crucial steps in the processing pipeline of modern digital cameras.
Recent deep neural-network-based approaches have shown the effectiveness of joint demosaicing and denoising to mitigate such challenges.
We propose a novel joint demosaicing and denoising method, dubbed JDD-DoubleDIP, which operates directly on a single RAW image without requiring any training data.
arXiv Detail & Related papers (2023-09-18T01:53:10Z)
- Low-Light Image and Video Enhancement: A Comprehensive Survey and Beyond [8.355226305081835]
This paper presents a comprehensive survey of low-light image and video enhancement, addressing two primary challenges in the field.
The first challenge is the prevalence of mixed over-/under-exposed images, which are not adequately addressed by existing methods.
The second challenge is the scarcity of suitable low-light video datasets for training and testing.
arXiv Detail & Related papers (2022-12-21T05:08:37Z)
- Learning Enriched Illuminants for Cross and Single Sensor Color Constancy [182.4997117953705]
We propose cross-sensor self-supervised training to train the network.
We train the network by randomly sampling the artificial illuminants in a sensor-independent manner.
Experiments show that our cross-sensor model and single-sensor model outperform other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-03-21T15:45:35Z)
- NTIRE 2021 Depth Guided Image Relighting Challenge [80.4620366794261]
In this paper, we review the NTIRE 2021 depth guided image relighting challenge.
We rely on the VIDIT dataset for each of our two challenge tracks, including depth information.
We had nearly 250 registered participants, leading to 18 confirmed team submissions in the final competition stage.
arXiv Detail & Related papers (2021-04-27T17:53:32Z)
- The Cube++ Illumination Estimation Dataset [50.58610459038332]
A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors as well as with additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
arXiv Detail & Related papers (2020-11-19T18:50:08Z)
- NTIRE 2020 Challenge on Real Image Denoising: Dataset, Methods and Results [181.2861509946241]
This paper reviews the NTIRE 2020 challenge on real image denoising with focus on the newly introduced dataset.
The challenge is a new version of the previous NTIRE 2019 challenge on real image denoising that was based on the SIDD benchmark.
arXiv Detail & Related papers (2020-05-08T15:46:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.