The Cube++ Illumination Estimation Dataset
- URL: http://arxiv.org/abs/2011.10028v1
- Date: Thu, 19 Nov 2020 18:50:08 GMT
- Title: The Cube++ Illumination Estimation Dataset
- Authors: Egor Ershov, Alex Savchik, Illya Semenkov, Nikola Banić, Alexander Belokopytov, Daria Senshina, Karlo Koščević, Marko Subašić, Sven Lončarić
- Abstract summary: A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors as well as additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
- Score: 50.58610459038332
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Computational color constancy has the important task of reducing the
influence of the scene illumination on the object colors. As such, it is an
essential part of the image processing pipelines of most digital cameras. One
of the important parts of the computational color constancy is illumination
estimation, i.e. estimating the illumination color. When an illumination
estimation method is proposed, its accuracy is usually reported by providing
the values of error metrics obtained on the images of publicly available
datasets. However, over time it has been shown that many of these datasets have
problems such as too few images, inappropriate image quality, lack of scene
diversity, absence of version tracking, violation of various assumptions, GDPR
regulation violation, lack of additional shooting procedure info, etc. In this
paper, a new illumination estimation dataset is proposed that aims to alleviate
many of the mentioned problems and to help the illumination estimation
research. It consists of 4890 images with known illumination colors as well as
additional semantic data that can further make the learning process more
accurate. Due to the usage of the SpyderCube color target, for every image
there are two ground-truth illumination records covering different directions.
Because of that, the dataset can be used for training and testing of methods
that perform single or two-illuminant estimation. This makes it superior to
many similar existing datasets. The dataset, its smaller version
SimpleCube++, and the accompanying code are available at
https://github.com/Visillect/CubePlusPlus/.
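The abstract notes that accuracy is usually reported via error metrics on public datasets, and that each Cube++ image carries two ground-truth illumination records from the two visible SpyderCube faces. Below is a minimal sketch of the standard angular error between an estimated and a ground-truth illuminant, scored against both records; the variable names and RGB values are illustrative assumptions, not the dataset's actual schema.

```python
import numpy as np

def angular_error_deg(est, gt):
    """Angle in degrees between an estimated and a ground-truth
    illuminant, the standard error metric in illumination estimation."""
    est = np.asarray(est, dtype=np.float64)
    gt = np.asarray(gt, dtype=np.float64)
    cos = np.dot(est, gt) / (np.linalg.norm(est) * np.linalg.norm(gt))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical usage: each Cube++ image has two ground-truth records,
# one per visible SpyderCube face, so an estimate can be scored against
# both (names and values below are illustrative, not the dataset files).
estimate = [0.45, 1.00, 0.62]                     # RGB illuminant estimate
gt_left, gt_right = [0.47, 1.0, 0.60], [0.52, 1.0, 0.55]
errors = [angular_error_deg(estimate, g) for g in (gt_left, gt_right)]
print(f"angular error vs. the two faces: {errors[0]:.2f} deg, {errors[1]:.2f} deg")
```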
Related papers
- OpenIllumination: A Multi-Illumination Dataset for Inverse Rendering Evaluation on Real Objects [56.065616159398324]
We introduce OpenIllumination, a real-world dataset containing over 108K images of 64 objects with diverse materials.
For each image in the dataset, we provide accurate camera parameters, illumination ground truth, and foreground segmentation masks.
arXiv Detail & Related papers (2023-09-14T17:59:53Z)
- Beyond the Pixel: a Photometrically Calibrated HDR Dataset for Luminance and Color Prediction [0.7456526005219319]
The Laval Photometric Indoor HDR dataset is the first large-scale photometrically calibrated dataset of high dynamic range 360° panoramas.
It was built by accurately capturing RAW bracketed exposures simultaneously with a professional photometric measurement device (see the merging sketch below).
The resulting dataset is a rich representation of indoor scenes that displays a wide range of illuminance and color, and varied types of light sources.
arXiv Detail & Related papers (2023-04-24T18:10:25Z)
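As a rough illustration of the bracketed-exposure idea in the entry above (not the paper's calibrated RAW pipeline, which requires a photometric measurement device), the Debevec merge available in OpenCV can fuse differently exposed LDR frames into a single HDR radiance map. The file names and exposure times below are placeholders.

```python
import cv2
import numpy as np

# Placeholder bracketed exposures and their exposure times in seconds;
# real photometric calibration as in the Laval dataset is not modeled here.
paths = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]
images = [cv2.imread(p) for p in paths]
times = np.array([1 / 250, 1 / 30, 1 / 4], dtype=np.float32)

# Recover the camera response curve, then merge into an HDR radiance map.
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, times, response)
cv2.imwrite("merged.hdr", hdr)  # Radiance HDR output
```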
- KinD-LCE Curve Estimation And Retinex Fusion On Low-Light Image [7.280719886684936]
This paper proposes an algorithm for low-light image enhancement.
KinD-LCE uses a light curve estimation module to enhance the illumination map in the Retinex decomposed image.
An illumination-map and reflection-map fusion module is also proposed to restore image details and reduce detail loss (a toy version of this pipeline is sketched below).
arXiv Detail & Related papers (2022-07-19T11:49:21Z)
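KinD-LCE operates on the Retinex decomposition I = R * L (reflectance times illumination), enhances the illumination map with a learned light curve, and fuses the maps back together. The toy sketch below uses a crude max-channel illumination estimate and a fixed gamma curve in place of the paper's learned modules, purely to make the decompose-enhance-fuse structure concrete.

```python
import numpy as np

def retinex_enhance(img, gamma=0.5, eps=1e-6):
    """Toy Retinex pipeline: I = R * L. A max-channel estimate stands in
    for the learned decomposition, and a fixed gamma curve stands in for
    KinD-LCE's learned light-curve estimation module."""
    img = img.astype(np.float64) / 255.0
    illum = img.max(axis=2, keepdims=True)   # crude illumination map L
    refl = img / (illum + eps)               # reflectance map R
    illum_enh = illum ** gamma               # brighten dark regions
    out = refl * illum_enh                   # fuse R with the enhanced L
    return (np.clip(out, 0.0, 1.0) * 255).astype(np.uint8)
```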
- Generative Models for Multi-Illumination Color Constancy [23.511249515559122]
We propose a seed-based (physics-driven) multi-illumination color constancy method.
GANs are exploited to model the illumination estimation problem as an image-to-image domain translation problem.
Experiments on single- and multi-illumination datasets show that our method outperforms state-of-the-art methods (see the sketch below).
arXiv Detail & Related papers (2021-09-02T12:24:40Z)
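The entry above casts illumination estimation as image-to-image translation: a generator maps an input image to its illumination-corrected version. One way to read a global illuminant out of such a formulation, sketched below under a simplifying diagonal (von Kries) assumption, is the per-channel ratio between input and output; this is an illustration of the formulation, not the paper's actual architecture.

```python
import numpy as np

def illuminant_from_translation(input_img, corrected_img, eps=1e-6):
    """Given an input image and its white-balanced translation (e.g. a
    GAN generator's output), recover a global illuminant estimate as the
    per-channel mean ratio, assuming a diagonal (von Kries) model."""
    inp = input_img.astype(np.float64).reshape(-1, 3).mean(axis=0)
    out = corrected_img.astype(np.float64).reshape(-1, 3).mean(axis=0)
    rgb = inp / (out + eps)
    return rgb / np.linalg.norm(rgb)  # unit-norm illuminant estimate
```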
- LLVIP: A Visible-infrared Paired Dataset for Low-light Vision [4.453060631960743]
We present LLVIP, a visible-infrared paired dataset for low-light vision.
This dataset contains 30976 images, or 15488 pairs, most of which were taken in very dark scenes.
We compare the dataset with other visible-infrared datasets and evaluate the performance of some popular visual algorithms.
arXiv Detail & Related papers (2021-08-24T16:29:17Z)
- Degrade is Upgrade: Learning Degradation for Low-light Image Enhancement [52.49231695707198]
We investigate the intrinsic degradation and relight the low-light image while refining the details and color in two steps.
Inspired by the color image formulation, we first estimate the degradation from low-light inputs to simulate the distortion of environment illumination color, and then refine the content to recover the loss of diffuse illumination color.
Our proposed method surpasses the state of the art by 0.95 dB in PSNR on the LOL1000 dataset and by 3.18% in mAP on the ExDark dataset.
arXiv Detail & Related papers (2021-03-19T04:00:27Z)
- Illumination Estimation Challenge: experience of past two years [57.13714732760851]
The 2nd Illumination Estimation Challenge (IEC#2) was conducted.
The challenge had several tracks: general, indoor, and two-illuminant, each focusing on different parameters of the scenes.
Its other main features are a new large dataset of about 5000 images taken with the same camera sensor model, a manual markup accompanying each image, diverse content with scenes taken in numerous countries under a huge variety of illuminations extracted using the SpyderCube calibration object, and a contest-like markup for the images from the Cube+ dataset that was used in IEC#1.
arXiv Detail & Related papers (2020-12-31T17:59:19Z)
- Monte Carlo Dropout Ensembles for Robust Illumination Estimation [94.14796147340041]
Computational color constancy is a preprocessing step used in many camera systems.
We propose to aggregate different deep learning methods according to their output uncertainty.
The proposed framework leads to state-of-the-art performance on the INTEL-TAU dataset (see the aggregation sketch below).
arXiv Detail & Related papers (2020-07-20T13:56:14Z)
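A minimal PyTorch-flavored sketch of the uncertainty-based aggregation idea named in the entry above follows: dropout is kept active at inference, repeated stochastic forward passes yield a mean and variance per model, and the models are combined with inverse-variance weights. The `models` list and pass count are placeholders, and the paper's actual aggregation rule may differ.

```python
import torch

def mc_dropout_estimate(model, image, passes=32):
    """Mean and variance of a model's illuminant prediction under
    Monte Carlo dropout (dropout layers stay stochastic at inference)."""
    model.train()  # keeps dropout active
    with torch.no_grad():
        preds = torch.stack([model(image) for _ in range(passes)])
    return preds.mean(dim=0), preds.var(dim=0)

def uncertainty_weighted_ensemble(models, image):
    """Combine several estimators with inverse-variance weights, so less
    certain models contribute less to the final illuminant estimate."""
    means, weights = [], []
    for m in models:  # `models` is a placeholder list of trained nets
        mu, var = mc_dropout_estimate(m, image)
        means.append(mu)
        weights.append(1.0 / (var.mean() + 1e-6))
    w = torch.tensor(weights)
    w = w / w.sum()
    return sum(wi * mi for wi, mi in zip(w, means))
```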
- Scene relighting with illumination estimation in the latent space on an encoder-decoder scheme [68.8204255655161]
In this report, we present the methods we explored to achieve this relighting goal.
Our models are trained on a rendered dataset of artificial locations with varied scene content, light source location and color temperature.
With this dataset, we used a network with an illumination estimation component that aims to infer and replace lighting conditions in the latent-space representation of the scenes (a schematic sketch follows the entry).
arXiv Detail & Related papers (2020-06-03T15:25:11Z)
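A schematic sketch of the latent-swap idea from the entry above is shown below: a scene image and a lighting reference are both encoded, the illumination part of the scene's code is replaced by the reference's, and the decoder re-renders the scene. The split of the latent code into content and illumination parts, the dimensions, and all module shapes are assumptions for illustration, not the report's actual design.

```python
import torch
import torch.nn as nn

class RelightingAutoencoder(nn.Module):
    """Schematic encoder-decoder where the last `illum_dims` entries of
    the latent code are treated as the illumination component (an
    illustrative convention, not the report's actual layout)."""
    def __init__(self, latent_dims=128, illum_dims=8):
        super().__init__()
        self.illum_dims = illum_dims
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, latent_dims),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dims, 3 * 32 * 32),
            nn.Unflatten(1, (3, 32, 32)), nn.Sigmoid(),
        )

    def relight(self, scene_img, light_img):
        """Re-render `scene_img` under the lighting inferred from
        `light_img` by swapping the illumination part of the codes."""
        z_scene = self.encoder(scene_img)
        z_light = self.encoder(light_img)
        z = torch.cat([z_scene[:, :-self.illum_dims],
                       z_light[:, -self.illum_dims:]], dim=1)
        return self.decoder(z)
```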