Few-Shot Domain Adaptation for Low Light RAW Image Enhancement
- URL: http://arxiv.org/abs/2303.15528v1
- Date: Mon, 27 Mar 2023 18:10:52 GMT
- Title: Few-Shot Domain Adaptation for Low Light RAW Image Enhancement
- Authors: K. Ram Prabhakar, Vishal Vinod, Nihar Ranjan Sahoo, R. Venkatesh Babu
- Abstract summary: Enhancing practical low light raw images is a difficult task due to severe noise and color distortions from short exposure time and limited illumination.
We present a novel few-shot domain adaptation method that utilizes existing labeled data from a source camera together with only a few labeled samples from the target camera.
Our experiments show that only ten or fewer labeled samples from the target camera domain are sufficient to achieve similar or better enhancement performance than training a model with a large labeled target camera dataset.
- Score: 41.135497703299315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Enhancing practical low light raw images is a difficult task due to severe
noise and color distortions from short exposure time and limited illumination.
Despite the success of existing Convolutional Neural Network (CNN) based
methods, their performance is not adaptable to different camera domains. In
addition, such methods also require large datasets with short-exposure and
corresponding long-exposure ground truth raw images for each camera domain,
which is tedious to compile. To address this issue, we present a novel few-shot
domain adaptation method that utilizes the existing labeled source-camera data together with only a few labeled samples from the target camera to improve the target domain's
enhancement quality in extreme low-light imaging. Our experiments show that
only ten or fewer labeled samples from the target camera domain are sufficient
to achieve similar or better enhancement performance than training a model with
a large labeled target camera dataset. To support research in this direction,
we also present a new low-light raw image dataset captured with a Nikon camera,
comprising short-exposure and their corresponding long-exposure ground truth
images.
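The abstract describes a pretrain-then-adapt workflow: train an enhancement network on the labeled source-camera data, then fine-tune it with ten or fewer labeled short/long-exposure pairs from the target camera. The sketch below illustrates that setting in PyTorch; the network, loss, and hyperparameters are placeholders rather than the paper's actual method.

```python
# Minimal few-shot adaptation sketch (assumed names: the pretrained `model`, the L1
# objective, and the optimizer settings are illustrative, not the paper's design).
import torch
import torch.nn as nn


def adapt_to_target_camera(model: nn.Module,
                           target_pairs,          # <= 10 (short_raw, long_raw) tensor pairs
                           epochs: int = 200,
                           lr: float = 1e-4) -> nn.Module:
    """Fine-tune a source-camera-pretrained enhancement model on a handful of
    labeled short/long exposure pairs from the target camera."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.L1Loss()
    model.train()
    for _ in range(epochs):
        for short_raw, long_raw in target_pairs:
            optimizer.zero_grad()
            loss = criterion(model(short_raw), long_raw)
            loss.backward()
            optimizer.step()
    return model
```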
Related papers
- Adaptive Domain Learning for Cross-domain Image Denoising [57.4030317607274]
We present a novel adaptive domain learning scheme for cross-domain image denoising.
We use existing data from different sensors (source domain) plus a small amount of data from the new sensor (target domain).
The ADL training scheme automatically removes the data in the source domain that are harmful to fine-tuning a model for the target domain.
Also, we introduce a modulation module that incorporates sensor-specific information (sensor type and ISO) to help the model interpret the input data for denoising.
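The modulation module is only named in this summary; one plausible reading is a FiLM-style block that scales and shifts feature maps using an embedding of the sensor type and the normalized ISO. The sketch below follows that assumption, with all layer sizes chosen arbitrarily.

```python
# Hypothetical FiLM-style modulation conditioned on sensor metadata (sensor-type id and
# normalized ISO); layer widths are assumptions, not the ADL paper's configuration.
import torch
import torch.nn as nn


class SensorModulation(nn.Module):
    def __init__(self, num_sensors: int, channels: int):
        super().__init__()
        self.sensor_embed = nn.Embedding(num_sensors, 16)
        self.to_scale_shift = nn.Sequential(
            nn.Linear(16 + 1, 64), nn.ReLU(),
            nn.Linear(64, 2 * channels),
        )

    def forward(self, feat, sensor_id, iso):
        # feat: (B, C, H, W); sensor_id: (B,) long; iso: (B, 1) normalized ISO value
        cond = torch.cat([self.sensor_embed(sensor_id), iso], dim=1)
        scale, shift = self.to_scale_shift(cond).chunk(2, dim=1)
        return feat * (1 + scale[..., None, None]) + shift[..., None, None]
```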
arXiv Detail & Related papers (2024-11-03T08:08:26Z)
- Retinex-RAWMamba: Bridging Demosaicing and Denoising for Low-Light RAW Image Enhancement [71.13353154514418]
Low-light image enhancement, particularly in cross-domain tasks such as mapping from the raw domain to the sRGB domain, remains a significant challenge.
We present a novel Mamba scanning mechanism, called RAWMamba, to effectively handle raw images with different CFAs.
We also present a Retinex Decomposition Module (RDM) grounded in Retinex prior, which decouples illumination from reflectance to facilitate more effective denoising and automatic non-linear exposure correction.
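The RDM itself is not detailed in this summary; the sketch below shows only the generic Retinex split it builds on, predicting an illumination map L and recovering reflectance as R = I / L. The 4-channel packed-Bayer input and layer sizes are assumptions.

```python
# Generic Retinex-style decomposition (I = R * L); architecture is illustrative only.
import torch.nn as nn


class RetinexSplit(nn.Module):
    def __init__(self, channels: int = 4):          # assume 4-channel packed Bayer RAW
        super().__init__()
        self.illum = nn.Sequential(                  # predicts a one-channel illumination map
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, raw):
        L = self.illum(raw)                          # illumination in (0, 1)
        R = raw / (L + 1e-4)                         # reflectance via I = R * L
        return R, L
```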
arXiv Detail & Related papers (2024-09-11T06:12:03Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
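The decomposition into low-frequency content and high-frequency detail can be pictured with a simple low-pass/residual split. The toy function below only illustrates that idea; it is not the paper's taming-module design.

```python
# Toy low/high-frequency split: a downsample-upsample pass acts as a cheap low-pass
# filter, and the residual carries the high-frequency detail (assumes H, W divisible
# by `factor`).
import torch
import torch.nn.functional as F


def frequency_split(img: torch.Tensor, factor: int = 8):
    low = F.interpolate(F.avg_pool2d(img, factor), scale_factor=factor,
                        mode="bilinear", align_corners=False)
    high = img - low
    return low, high
```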
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- Dimma: Semi-supervised Low Light Image Enhancement with Adaptive Dimming [0.728258471592763]
Enhancing low-light images while maintaining natural colors is a challenging problem due to camera processing variations.
We propose Dimma, a semi-supervised approach that aligns with any camera by utilizing a small set of image pairs.
We achieve this by introducing a convolutional mixture density network that generates the distorted colors of the scene based on illumination differences.
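A convolutional mixture density network predicts, for each pixel, the parameters of a distribution over plausible dimmed colors rather than a single value. The head below is a hypothetical sketch of such an output layer; the number of mixture components and channel widths are assumptions.

```python
# Hypothetical convolutional mixture-density head: per-pixel mixture weights, means, and
# scales for a K-component Gaussian mixture over 3 color channels.
import torch.nn as nn


class ConvMDNHead(nn.Module):
    def __init__(self, in_ch: int = 32, K: int = 5, out_ch: int = 3):
        super().__init__()
        self.K, self.out_ch = K, out_ch
        self.params = nn.Conv2d(in_ch, K * (1 + 2 * out_ch), 1)   # logits, means, log-sigmas

    def forward(self, feat):
        B, _, H, W = feat.shape
        p = self.params(feat).view(B, self.K, 1 + 2 * self.out_ch, H, W)
        logits = p[:, :, 0]                        # (B, K, H, W) mixture weights
        mu = p[:, :, 1:1 + self.out_ch]            # (B, K, 3, H, W) means
        sigma = p[:, :, 1 + self.out_ch:].exp()    # (B, K, 3, H, W) positive scales
        return logits.softmax(dim=1), mu, sigma
```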
arXiv Detail & Related papers (2023-10-14T17:59:46Z)
- Enhancing Low-Light Images Using Infrared-Encoded Images [81.8710581927427]
Prior art mainly focuses on low-light images captured in the visible spectrum and relies on pixel-wise losses.
We propose a novel approach to increase the visibility of images captured under low-light environments by removing the in-camera infrared (IR) cut-off filter.
arXiv Detail & Related papers (2023-07-09T08:29:19Z)
- A ground-based dataset and a diffusion model for on-orbit low-light image enhancement [7.815138548685792]
We propose a dataset of the Beidou Navigation Satellite for on-orbit low-light image enhancement (LLIE).
To evenly sample poses of different orientations and distances without collision, a collision-free working space and a pose-stratified sampling scheme are proposed.
To enhance image contrast without over-exposure or blurred details, we design a fused attention module that highlights structures and dark regions.
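Pose-stratified sampling can be illustrated as drawing one pose per (distance, azimuth) bin so that viewpoints cover the working space evenly. The snippet below is a hypothetical illustration, not the authors' sampler, and omits the collision check against the satellite model.

```python
# Hypothetical pose-stratified sampler: one random pose per (distance, azimuth) bin.
import numpy as np


def stratified_poses(dist_bins, azim_bins, elev_range=(-30.0, 30.0), seed=0):
    rng = np.random.default_rng(seed)
    poses = []
    for d_lo, d_hi in zip(dist_bins[:-1], dist_bins[1:]):
        for a_lo, a_hi in zip(azim_bins[:-1], azim_bins[1:]):
            poses.append((rng.uniform(d_lo, d_hi),     # distance (m)
                          rng.uniform(a_lo, a_hi),     # azimuth (deg)
                          rng.uniform(*elev_range)))   # elevation (deg)
    return np.array(poses)
```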
arXiv Detail & Related papers (2023-06-25T12:15:44Z)
- Instance Segmentation in the Dark [43.85818645776587]
We take a deep look at instance segmentation in the dark and introduce several techniques that substantially boost the low-light inference accuracy.
We propose a novel learning method that relies on an adaptive weighted downsampling layer, a smooth-oriented convolutional block, and disturbance suppression learning.
We capture a real-world low-light instance segmentation dataset comprising over two thousand paired low/normal-light images with instance-level pixel-wise annotations.
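The adaptive weighted downsampling layer is only named in this summary; one plausible form, sketched below, predicts a per-pixel confidence map and uses it to weight each pooling window so that noisy dark pixels contribute less.

```python
# One plausible adaptive weighted downsampling layer (an assumption, not the paper's
# exact design): learned per-pixel weights replace uniform averaging.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveWeightedDownsample(nn.Module):
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        self.stride = stride
        self.weight_net = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, x):
        w = torch.sigmoid(self.weight_net(x))          # (B, 1, H, W) pixel weights
        num = F.avg_pool2d(x * w, self.stride)          # weighted sum per window (scaled)
        den = F.avg_pool2d(w, self.stride) + 1e-6
        return num / den                                # weighted average
```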
arXiv Detail & Related papers (2023-04-27T16:02:29Z)
- Human Pose Estimation in Extremely Low-Light Conditions [21.210706205233286]
We develop a dedicated camera system and build a new dataset of real low-light images with accurate pose labels.
Thanks to our camera system, each low-light image in our dataset is coupled with an aligned well-lit image, which enables accurate pose labeling.
We also propose a new model and a new training strategy that fully exploit the privileged information to learn a representation that is insensitive to lighting conditions.
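Because every low-light image is paired with an aligned well-lit image, the privileged information can be exploited with a feature-consistency objective. The loss below is a generic sketch of that idea, not the paper's exact training strategy.

```python
# Generic privileged-information consistency loss: features of the low-light image are
# pulled toward features of its aligned well-lit counterpart (assumed formulation).
import torch
import torch.nn as nn


def lighting_consistency_loss(backbone: nn.Module, low_img, well_lit_img):
    feat_low = backbone(low_img)
    with torch.no_grad():                      # the well-lit branch acts as the teacher
        feat_well = backbone(well_lit_img)
    return nn.functional.mse_loss(feat_low, feat_well)
```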
arXiv Detail & Related papers (2023-03-27T17:28:25Z)
- High Dynamic Range and Super-Resolution from Raw Image Bursts [52.341483902624006]
This paper introduces the first approach to reconstruct high-resolution, high-dynamic range color images from raw photographic bursts captured by a handheld camera with exposure bracketing.
The proposed algorithm is fast, with low memory requirements compared to state-of-the-art learning-based approaches to image restoration.
Experiments demonstrate its excellent performance with super-resolution factors of up to $\times 4$ on real photographs taken in the wild with hand-held cameras.
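For intuition, exposure bracketing lets each frame be converted to a radiance estimate (pixel value divided by exposure time) and merged with weights that down-rank under- and over-exposed pixels. The toy merge below illustrates only this HDR step; the paper's method additionally performs alignment and super-resolution, and it operates on RAW bursts.

```python
# Toy exposure-bracket merge: weighted average of per-frame radiance estimates, trusting
# mid-tone pixels most (illustrative only).
import numpy as np


def merge_bracketed(frames, exposure_times):
    # frames: list of (H, W, C) arrays in [0, 1]; exposure_times in seconds
    acc, wsum = 0.0, 1e-8
    for img, t in zip(frames, exposure_times):
        w = np.exp(-4.0 * (img - 0.5) ** 2)     # down-weight clipped and very dark pixels
        acc = acc + w * (img / t)               # radiance estimate for this frame
        wsum = wsum + w
    return acc / wsum
```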
arXiv Detail & Related papers (2022-07-29T13:31:28Z)
- PlenoptiCam v1.0: A light-field imaging framework [8.467466998915018]
Light-field cameras play a vital role in rich 3-D information retrieval for narrow-range depth sensing applications.
A key obstacle in composing light fields from exposures taken by a plenoptic camera is to computationally calibrate, align, and rearrange the four-dimensional image data.
Several attempts have been proposed to enhance the overall image quality by tailoring pipelines dedicated to particular plenoptic cameras.
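The rearrangement step can be pictured as reshaping a calibrated lenslet image, whose micro-images lie on a regular u x v pixel grid, into a 4-D light field L[u, v, s, t]. The function below is an idealized sketch; PlenoptiCam additionally handles calibration, vignetting, and hexagonal lenslet grids.

```python
# Idealized lenslet-to-light-field rearrangement: each u-by-v micro-image block becomes
# one set of angular samples for spatial position (s, t).
import numpy as np


def lenslet_to_lightfield(lenslet: np.ndarray, u: int, v: int) -> np.ndarray:
    H, W = lenslet.shape[:2]
    s, t = H // u, W // v
    lf = lenslet[:s * u, :t * v].reshape(s, u, t, v, *lenslet.shape[2:])
    return lf.transpose(1, 3, 0, 2, *range(4, lf.ndim))   # -> (u, v, s, t, [color])
```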
arXiv Detail & Related papers (2020-10-14T09:23:18Z)