Learning Enriched Illuminants for Cross and Single Sensor Color Constancy
- URL: http://arxiv.org/abs/2203.11068v1
- Date: Mon, 21 Mar 2022 15:45:35 GMT
- Title: Learning Enriched Illuminants for Cross and Single Sensor Color Constancy
- Authors: Xiaodong Cun, Zhendong Wang, Chi-Man Pun, Jianzhuang Liu, Wengang Zhou, Xu Jia, Houqiang Li
- Abstract summary: We propose cross-sensor self-supervised training for the network.
We train the network by randomly sampling artificial illuminants in a sensor-independent manner.
Experiments show that our cross-sensor model and single-sensor model outperform other state-of-the-art methods by a large margin.
- Score: 182.4997117953705
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Color constancy aims to restore the constant colors of a scene under
different illuminants. However, due to differences in camera spectral
sensitivity, a network trained on one sensor cannot work well on others.
Moreover, since training datasets are collected in particular environments,
their diversity of illuminants is too limited for complex real-world
prediction. In this paper, we tackle these problems from two aspects. First, we
propose cross-sensor self-supervised training for the network. Specifically, we
treat both general sRGB images and white-balanced RAW images from currently
available datasets as white-balanced agents. We then train the network by
randomly sampling artificial illuminants in a sensor-independent manner for
scene relighting and supervision. Second, we analyze a previous cascaded
framework and present a more compact and accurate model by sharing the backbone
parameters while learning specific attention. Experiments show that our
cross-sensor and single-sensor models outperform other state-of-the-art methods
by a large margin on cross-sensor and single-sensor evaluations, respectively,
with only 16% of the parameters of the previous best model.
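The core training signal described in the abstract can be sketched concretely. The snippet below is a minimal, hypothetical illustration of that recipe: relight a white-balanced agent image with a randomly sampled, sensor-independent illuminant, then supervise the prediction against that illuminant. The sampling range, image shapes, and helper names are assumptions for illustration, and the angular error shown is the field's standard metric, not necessarily the paper's exact loss.

```python
import numpy as np

def sample_illuminant(rng: np.random.Generator) -> np.ndarray:
    # Sample an artificial illuminant as per-channel RGB gains in a
    # sensor-independent way; the uniform range is an illustrative assumption.
    rgb = rng.uniform(0.4, 1.0, size=3)
    return rgb / np.linalg.norm(rgb)  # only the chromatic direction matters

def relight(white_balanced: np.ndarray, illuminant: np.ndarray) -> np.ndarray:
    # Apply the sampled illuminant to a white-balanced "agent" image
    # (a general sRGB or white-balanced RAW image) by per-channel scaling.
    return white_balanced * illuminant[None, None, :]

def angular_error_deg(pred: np.ndarray, gt: np.ndarray) -> float:
    # Standard color-constancy metric: the angle between the predicted
    # and ground-truth illuminant vectors, in degrees.
    cos = np.dot(pred, gt) / (np.linalg.norm(pred) * np.linalg.norm(gt) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# One self-supervised step: relight an agent image with a random illuminant,
# then supervise the network's estimate against that sampled illuminant.
rng = np.random.default_rng(0)
agent = rng.uniform(0.05, 1.0, size=(64, 64, 3))            # stand-in for an agent image
gt_illuminant = sample_illuminant(rng)
network_input = relight(agent, gt_illuminant)
prediction = gt_illuminant + rng.normal(0.0, 0.01, size=3)  # stand-in for a network output
loss = angular_error_deg(prediction, gt_illuminant)
```

Because the illuminant is sampled rather than measured, any sensor's white-balanced images can serve as training data, which is what makes the scheme cross-sensor.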
Related papers
- The Change You Want to See (Now in 3D) [65.61789642291636]
The goal of this paper is to detect what has changed, if anything, between two "in the wild" images of the same 3D scene.
We contribute a change detection model that is trained entirely on synthetic data and is class-agnostic.
We release a new evaluation dataset consisting of real-world image pairs with human-annotated differences.
arXiv Detail & Related papers (2023-08-21T01:59:45Z)
- Reversed Image Signal Processing and RAW Reconstruction. AIM 2022 Challenge Report [109.2135194765743]
This paper introduces the AIM 2022 Challenge on Reversed Image Signal Processing and RAW Reconstruction.
We aim to recover raw sensor images from the corresponding RGBs without metadata and, by doing this, "reverse" the ISP transformation.
arXiv Detail & Related papers (2022-10-20T10:43:53Z)
- GenISP: Neural ISP for Low-Light Machine Cognition [19.444297600977546]
In low-light conditions, object detectors using raw image data are more robust than detectors using image data processed by an ISP pipeline.
We propose a minimal neural ISP pipeline for machine cognition, named GenISP, that explicitly incorporates Color Space Transformation to a device-independent color space.
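As a rough illustration of what such a color space transformation step does (this is not GenISP's actual implementation), the sketch below maps camera RGB into a device-independent space with a per-device 3x3 matrix; the matrix values here are placeholders, not real calibration data.

```python
import numpy as np

# Placeholder 3x3 color space transform (CST), camera RGB -> a
# device-independent space such as CIE XYZ; real values come from
# per-device calibration.
CST = np.array([[0.6, 0.3, 0.1],
                [0.2, 0.7, 0.1],
                [0.0, 0.1, 0.9]])

def apply_cst(raw_rgb: np.ndarray, cst: np.ndarray = CST) -> np.ndarray:
    # Apply the linear transform independently to every pixel of an (H, W, 3) image.
    h, w, _ = raw_rgb.shape
    return (raw_rgb.reshape(-1, 3) @ cst.T).reshape(h, w, 3)
```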
arXiv Detail & Related papers (2022-05-07T17:17:24Z)
- Transform your Smartphone into a DSLR Camera: Learning the ISP in the Wild [159.71025525493354]
We propose a trainable Image Signal Processing framework that produces DSLR-quality images given RAW images captured by a smartphone.
To address the color misalignments between training image pairs, we employ a color-conditional ISP network and optimize a novel parametric color mapping between each input RAW and reference DSLR image.
arXiv Detail & Related papers (2022-03-20T20:13:59Z)
- Model-Based Image Signal Processors via Learnable Dictionaries [6.766416093990318]
Digital cameras transform sensor RAW readings into RGB images by means of their Image Signal Processor (ISP).
Recent approaches have attempted to bridge the gap between the two domains by estimating the RGB-to-RAW mapping.
We present a novel hybrid model-based and data-driven ISP that is both learnable and interpretable.
arXiv Detail & Related papers (2022-01-10T08:36:10Z)
- Semi-Supervised Raw-to-Raw Mapping [19.783856963405754]
The raw-RGB colors of a camera sensor vary due to spectral sensitivity differences across sensor makes and models.
We present a semi-supervised raw-to-raw mapping method trained on a small set of paired images alongside an unpaired set of images captured by each camera device.
arXiv Detail & Related papers (2021-06-25T21:01:45Z)
- Illumination Estimation Challenge: experience of past two years [57.13714732760851]
The 2nd Illumination Estimation Challenge (IEC#2) was conducted.
The challenge had several tracks: general, indoor, and two-illuminant, each focusing on different parameters of the scenes.
Its other main features include a new large dataset of about 5,000 images taken with the same camera sensor model, a manual markup accompanying each image, diverse content with scenes captured in numerous countries under a huge variety of illuminations extracted using the SpyderCube calibration object, and a contest-like markup for the images from the Cube+ dataset used in IEC#1.
arXiv Detail & Related papers (2020-12-31T17:59:19Z)
- Polarization-driven Semantic Segmentation via Efficient Attention-bridged Fusion [6.718162142201631]
We present EAFNet, an Efficient Attention-bridged Fusion Network to exploit complementary information coming from different optical sensors.
We build the first RGB-P dataset, which consists of 394 annotated pixel-aligned RGB-Polarization images.
Comprehensive experiments show the effectiveness of EAFNet in fusing polarization and RGB information.
arXiv Detail & Related papers (2020-11-26T14:32:42Z)
- Learning Camera Miscalibration Detection [83.38916296044394]
This paper focuses on a data-driven approach to learn the detection of miscalibration in vision sensors, specifically RGB cameras.
Our contributions include a proposed miscalibration metric for RGB cameras and a novel semi-synthetic dataset generation pipeline based on this metric.
By training a deep convolutional neural network, we demonstrate the effectiveness of our pipeline in identifying whether a recalibration of the camera's intrinsic parameters is required.
arXiv Detail & Related papers (2020-05-24T10:32:49Z)
- A Multi-Hypothesis Approach to Color Constancy [22.35581217222978]
Current approaches frame the color constancy problem as learning camera-specific illuminant mappings.
We propose a Bayesian framework that naturally handles color constancy ambiguity via a multi-hypothesis strategy.
Our method provides state-of-the-art accuracy on multiple public datasets while maintaining real-time execution.
arXiv Detail & Related papers (2020-02-28T18:05:16Z)
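A toy sketch of the multi-hypothesis idea (not the paper's actual Bayesian model): score a set of candidate illuminants, turn the scores into posterior-style weights, and return the weighted combination. The gray-world scorer, candidate sampling, and temperature are illustrative assumptions.

```python
import numpy as np

def gray_world_score(image: np.ndarray, candidate: np.ndarray) -> float:
    # Toy likelihood: prefer candidates whose correction balances the channel means.
    corrected = image / (candidate[None, None, :] + 1e-9)
    mean = corrected.reshape(-1, 3).mean(axis=0)
    return -float(np.var(mean / mean.sum()))

def multi_hypothesis_estimate(image: np.ndarray, candidates: np.ndarray) -> np.ndarray:
    # Score every hypothesis, softmax the scores into posterior-style
    # weights, and combine the candidates into a single estimate.
    scores = np.array([gray_world_score(image, c) for c in candidates])
    weights = np.exp((scores - scores.max()) / 1e-3)  # temperature is an assumption
    weights /= weights.sum()
    estimate = weights @ candidates
    return estimate / np.linalg.norm(estimate)

rng = np.random.default_rng(1)
image = rng.uniform(0.05, 1.0, size=(32, 32, 3))
candidates = rng.uniform(0.3, 1.0, size=(16, 3))
candidates /= np.linalg.norm(candidates, axis=1, keepdims=True)
illuminant = multi_hypothesis_estimate(image, candidates)
```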
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.