GamutMLP: A Lightweight MLP for Color Loss Recovery
- URL: http://arxiv.org/abs/2304.11743v1
- Date: Sun, 23 Apr 2023 20:26:11 GMT
- Title: GamutMLP: A Lightweight MLP for Color Loss Recovery
- Authors: Hoang M. Le, Brian Price, Scott Cohen, Michael S. Brown
- Abstract summary: GamutMLP takes approximately 2 seconds to optimize and requires only 23 KB of storage.
We demonstrate the effectiveness of our approach for color recovery and compare it with alternative strategies.
As part of this effort, we introduce a new color gamut dataset of 2200 wide-gamut/small-gamut images for training and testing.
- Score: 40.273821032576606
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cameras and image-editing software often process images in the wide-gamut
ProPhoto color space, encompassing 90% of all visible colors. However, when
images are encoded for sharing, this color-rich representation is transformed
and clipped to fit within the small-gamut standard RGB (sRGB) color space,
representing only 30% of visible colors. Recovering the lost color information
is challenging due to the clipping procedure. Inspired by neural implicit
representations for 2D images, we propose a method that optimizes a lightweight
multi-layer-perceptron (MLP) model during the gamut reduction step to predict
the clipped values. GamutMLP takes approximately 2 seconds to optimize and
requires only 23 KB of storage. The small memory footprint allows our GamutMLP
model to be saved as metadata in the sRGB image -- the model can be extracted
when needed to restore wide-gamut color values. We demonstrate the
effectiveness of our approach for color recovery and compare it with
alternative strategies, including pre-trained DNN-based gamut expansion
networks and other implicit neural representation methods. As part of this
effort, we introduce a new color gamut dataset of 2200 wide-gamut/small-gamut
images for training and testing. Our code and dataset can be found on the
project website: https://gamut-mlp.github.io.
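For intuition, here is a minimal PyTorch sketch of the per-image optimization the abstract describes: a small coordinate MLP is overfit to predict the residual between the clipped sRGB values and the original wide-gamut values, and its weights are small enough to embed as metadata in the sRGB image. The layer sizes, input encoding, and training schedule below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class TinyGamutMLP(nn.Module):
    """Coordinate MLP: (x, y, clipped RGB) -> residual toward wide-gamut RGB."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, xy, rgb_clipped):
        return self.net(torch.cat([xy, rgb_clipped], dim=-1))

def fit_gamut_mlp(xy, rgb_clipped, rgb_wide, steps=1000, lr=1e-3):
    """Overfit to one image during gamut reduction.

    xy:          (N, 2) pixel coordinates normalized to [0, 1]
    rgb_clipped: (N, 3) small-gamut (clipped) values
    rgb_wide:    (N, 3) original wide-gamut values (available at encode time)
    """
    model = TinyGamutMLP()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        pred = rgb_clipped + model(xy, rgb_clipped)  # predict the residual
        loss = ((pred - rgb_wide) ** 2).mean()
        loss.backward()
        opt.step()
    # The few-KB state_dict can be embedded as metadata in the sRGB image
    # and extracted later to rerun this same prediction at decode time.
    return model.state_dict()
```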
Related papers
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from the RGB channels to mitigate instability during enhancement, but also adapts to low-light images across different illumination ranges thanks to its trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Training Neural Networks on RAW and HDR Images for Restoration Tasks [59.41340420564656]
In this work, we test approaches on three popular image restoration applications: denoising, deblurring, and single-image super-resolution.
Our results indicate that neural networks train significantly better when HDR and RAW images are represented in display-encoded color spaces.
This small change to the training strategy can bring a very substantial gain in performance, up to 10-15 dB.
arXiv Detail & Related papers (2023-12-06T17:47:16Z)
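As a concrete example of a display-encoded representation, the sketch below applies the PQ (SMPTE ST 2084) transfer function to linear HDR luminance before it is fed to a network; treating PQ as representative of the display color spaces evaluated in that paper is an assumption here.

```python
import numpy as np

# PQ (SMPTE ST 2084) constants.
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(luminance):
    """Map linear luminance in cd/m^2 (0..10000) to a perceptually
    uniform [0, 1] signal, suitable as a network input or target."""
    y = np.clip(luminance / 10000.0, 0.0, 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2
```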
- Beyond Learned Metadata-based Raw Image Reconstruction [86.1667769209103]
Raw images have distinct advantages over sRGB images, e.g., linearity and fine-grained quantization levels.
However, they are not widely adopted by general users because of their substantial storage requirements.
We propose a novel framework that learns a compact representation in the latent space, serving as metadata.
arXiv Detail & Related papers (2023-06-21T06:59:07Z)
- COIN: COmpression with Implicit Neural representations [64.02694714768691]
We propose a simple new approach to image compression.
Instead of storing the RGB values for each pixel of an image, we store the weights of a neural network overfitted to the image.
arXiv Detail & Related papers (2021-03-03T10:58:39Z)
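A minimal sketch of the COIN idea: overfit a small coordinate network to map pixel coordinates to RGB, then store the network weights instead of the pixels. COIN itself uses sine-activated (SIREN) layers with a dedicated initialization and weight quantization; the initialization, sizes, and schedule below are simplified assumptions.

```python
import torch
import torch.nn as nn

class Sine(nn.Module):
    def forward(self, x):
        return torch.sin(30.0 * x)  # w0 = 30, the common SIREN default

def make_coin_mlp(hidden=64, depth=3):
    """Coordinate network mapping (x, y) -> (R, G, B)."""
    layers = [nn.Linear(2, hidden), Sine()]
    for _ in range(depth - 1):
        layers += [nn.Linear(hidden, hidden), Sine()]
    layers += [nn.Linear(hidden, 3)]
    return nn.Sequential(*layers)

def compress(image, steps=5000, lr=2e-4):
    """image: (H, W, 3) tensor in [0, 1]; returns the fitted network."""
    h, w, _ = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    target = image.reshape(-1, 3)
    net = make_coin_mlp()
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(coords) - target) ** 2).mean()
        loss.backward()
        opt.step()
    return net  # store the (optionally quantized) weights instead of pixels
```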
- The Utility of Decorrelating Colour Spaces in Vector Quantised Variational Autoencoders [1.7792264784100689]
We propose colour space conversion to encourage a network to learn structured representations.
We trained several instances of a VQ-VAE whose input is an image in one colour space and whose output is in another.
arXiv Detail & Related papers (2020-09-30T07:44:01Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure images end-to-end from the classification loss.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
When encoded with PNG, the proposed color quantization outperforms other image compression methods in the extremely low bit-rate regime.
arXiv Detail & Related papers (2020-03-17T17:56:15Z)