Improved Mapping Between Illuminations and Sensors for RAW Images
- URL: http://arxiv.org/abs/2508.14730v1
- Date: Wed, 20 Aug 2025 14:23:23 GMT
- Title: Improved Mapping Between Illuminations and Sensors for RAW Images
- Authors: Abhijith Punnappurath, Luxi Zhao, Hoang Le, Abdelrahman Abdelhamed, SaiKiran Kumar Tedla, Michael S. Brown
- Abstract summary: We introduce a lightweight neural network approach for illumination and sensor mapping. Our dataset has 390 illuminations, four cameras, and 18 scenes. We demonstrate the utility of our approach on the downstream task of training a neural ISP.
- Score: 31.232463461185272
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: RAW images are unprocessed camera sensor output with sensor-specific RGB values based on the sensor's color filter spectral sensitivities. RAW images also incur strong color casts due to the sensor's response to the spectral properties of scene illumination. The sensor- and illumination-specific nature of RAW images makes it challenging to capture RAW datasets for deep learning methods, as scenes need to be captured for each sensor and under a wide range of illumination. Methods for illumination augmentation for a given sensor and the ability to map RAW images between sensors are important for reducing the burden of data capture. To explore this problem, we introduce the first-of-its-kind dataset comprising carefully captured scenes under a wide range of illumination. Specifically, we use a customized lightbox with tunable illumination spectra to capture several scenes with different cameras. Our illumination and sensor mapping dataset has 390 illuminations, four cameras, and 18 scenes. Using this dataset, we introduce a lightweight neural network approach for illumination and sensor mapping that outperforms competing methods. We demonstrate the utility of our approach on the downstream task of training a neural ISP. Link to project page: https://github.com/SamsungLabs/illum-sensor-mapping.
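A classic baseline for the sensor-mapping problem the abstract describes is fitting a 3x3 linear transform between paired raw-RGB samples from two cameras by least squares; the paper's lightweight network replaces such a linear map with a learned nonlinear one. The sketch below is illustrative only (the function name and synthetic data are hypothetical, not from the paper):

```python
import numpy as np

def fit_raw_to_raw(src, dst):
    """Fit a 3x3 linear map from source-sensor to target-sensor raw RGB.

    src, dst: (N, 3) arrays of paired raw-RGB samples of the same scene
    points captured by two different sensors.
    """
    M, *_ = np.linalg.lstsq(src, dst, rcond=None)
    return M  # (3, 3) transform; apply as src @ M

# Synthetic demo: generate paired samples from a known ground-truth map
# and check that least squares recovers it.
rng = np.random.default_rng(0)
true_M = np.array([[0.90, 0.10, 0.00],
                   [0.05, 0.85, 0.10],
                   [0.00, 0.20, 0.80]])
src = rng.uniform(0.0, 1.0, size=(500, 3))
dst = src @ true_M
M = fit_raw_to_raw(src, dst)
print(np.allclose(M, true_M, atol=1e-6))
```

With noise-free paired data the 3x3 map is recovered exactly; the paper's motivation is that real cross-sensor and cross-illumination mappings are not captured well by a single global linear transform.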
Related papers
- Towards RAW Object Detection in Diverse Conditions [65.30190654593842]
We introduce the AODRaw dataset, which offers 7,785 high-resolution real RAW images with 135,601 annotated instances spanning 62 categories.
We find that sRGB pre-training constrains the potential of RAW object detection due to the domain gap between sRGB and RAW.
We distill the knowledge from an off-the-shelf model pre-trained on the sRGB domain to assist RAW pre-training.
arXiv Detail & Related papers (2024-11-24T01:23:04Z)
- BSRAW: Improving Blind RAW Image Super-Resolution [63.408484584265985]
We tackle blind image super-resolution in the RAW domain.
We design a realistic degradation pipeline tailored specifically for training models with raw sensor data.
Our BSRAW models trained with our pipeline can upscale real-scene RAW images and improve their quality.
arXiv Detail & Related papers (2023-12-24T14:17:28Z)
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Previous works mainly focus on low-light images captured in the visible spectrum, using pixel-wise losses.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- Reversed Image Signal Processing and RAW Reconstruction. AIM 2022 Challenge Report [109.2135194765743]
This paper introduces the AIM 2022 Challenge on Reversed Image Signal Processing and RAW Reconstruction.
We aim to recover raw sensor images from the corresponding RGBs without metadata and, by doing this, "reverse" the ISP transformation.
arXiv Detail & Related papers (2022-10-20T10:43:53Z)
- GenISP: Neural ISP for Low-Light Machine Cognition [19.444297600977546]
In low-light conditions, object detectors using raw image data are more robust than detectors using image data processed by an ISP pipeline.
We propose a minimal neural ISP pipeline for machine cognition, named GenISP, that explicitly incorporates a color space transformation into a device-independent color space.
arXiv Detail & Related papers (2022-05-07T17:17:24Z)
- Learning Enriched Illuminants for Cross and Single Sensor Color Constancy [182.4997117953705]
We propose cross-sensor self-supervised training to train the network.
We train the network by randomly sampling the artificial illuminants in a sensor-independent manner.
Experiments show that our cross-sensor model and single-sensor model outperform other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2022-03-21T15:45:35Z)
- Semi-Supervised Raw-to-Raw Mapping [19.783856963405754]
The raw-RGB colors of a camera sensor vary due to the spectral sensitivity differences across different sensor makes and models.
We present a semi-supervised raw-to-raw mapping method trained on a small set of paired images alongside an unpaired set of images captured by each camera device.
arXiv Detail & Related papers (2021-06-25T21:01:45Z)
- The Cube++ Illumination Estimation Dataset [50.58610459038332]
A new illumination estimation dataset is proposed in this paper.
It consists of 4890 images with known illumination colors as well as with additional semantic data.
The dataset can be used for training and testing of methods that perform single or two-illuminant estimation.
arXiv Detail & Related papers (2020-11-19T18:50:08Z)
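As a toy illustration of the reversed-ISP idea from the AIM 2022 challenge entry above: if a simplified ISP applies only white-balance gains and a gamma curve, the transform can be inverted analytically to recover approximate raw values. This is not the challenge's method (real ISPs include demosaicing, tone mapping, and other steps that make inversion ill-posed); the function names and parameters here are hypothetical:

```python
import numpy as np

def simple_isp(raw, wb_gains, gamma=2.2):
    """Toy ISP: white balance, clip, then gamma encode."""
    return np.clip(raw * wb_gains, 0.0, 1.0) ** (1.0 / gamma)

def reverse_isp(rgb, wb_gains, gamma=2.2):
    """Invert the toy ISP: gamma decode, then undo white balance."""
    return (rgb ** gamma) / wb_gains

wb = np.array([2.0, 1.0, 1.5])        # per-channel white-balance gains
raw = np.array([[0.1, 0.4, 0.3]])     # one raw-RGB pixel
rgb = simple_isp(raw, wb)
rec = reverse_isp(rgb, wb)
print(np.allclose(rec, raw))          # True when no channel clips
```

Clipping is what breaks exact invertibility even in this toy model: once a channel saturates at 1.0, the original raw value cannot be recovered, which is one reason learned RAW reconstruction is needed.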
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.