Optimized $k$-means color quantization of digital images in machine-based and human perception-based colorspaces
- URL: http://arxiv.org/abs/2601.19117v2
- Date: Thu, 05 Feb 2026 15:09:16 GMT
- Title: Optimized $k$-means color quantization of digital images in machine-based and human perception-based colorspaces
- Authors: Ranjan Maitra
- Abstract summary: We investigate the performance of the $k$-means algorithm at four quantization levels in the RGB, CIE-XYZ, and CIE-LUV/CIE-HCL colorspaces. In about half of the cases, $k$-means color quantization is best in the RGB space. There are also some cases, especially at lower $k$, where the best performance is obtained in the CIE-LUV colorspace.
- Score: 2.3859169601259347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Color quantization represents an image using a fraction of its original number of colors while only minimally losing its visual quality. The $k$-means algorithm is commonly used in this context, but has mostly been applied in the machine-based RGB colorspace composed of the three primary colors. However, some recent studies have indicated its improved performance in human perception-based colorspaces. We investigated the performance of $k$-means color quantization at four quantization levels in the RGB, CIE-XYZ, and CIE-LUV/CIE-HCL colorspaces, on 148 varied digital images spanning a wide range of scenes, subjects and settings. The Visual Information Fidelity (VIF) measure numerically assessed the quality of the quantized images, and showed that in about half of the cases, $k$-means color quantization is best in the RGB space, while at other times, and especially for higher quantization levels ($k$), the CIE-XYZ colorspace is where it usually does better. There are also some cases, especially at lower $k$, where the best performance is obtained in the CIE-LUV colorspace. Further analysis of the performances in terms of the distributions of the hue, chromaticity and luminance in an image presents a nuanced perspective and characterization of the images for which each colorspace is better for $k$-means color quantization.
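The procedure the abstract describes, running Lloyd's $k$-means on an image's pixels in a chosen colorspace and replacing each pixel by its cluster center, can be sketched as below. This is a minimal illustration, not the paper's code: the `kmeans_quantize` helper and the toy six-pixel "image" are hypothetical, and the only assumed constant is the standard sRGB (D65) linear RGB-to-XYZ matrix used to rerun the same clustering in a second colorspace.

```python
import numpy as np

def kmeans_quantize(pixels, k, iters=20, seed=0):
    """Quantize an (N, 3) float array of colors to k palette entries
    via Lloyd's algorithm (hypothetical helper, not the paper's code)."""
    rng = np.random.default_rng(seed)
    # initialize centers from k distinct pixels
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest center (squared Euclidean)
        dists = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of its members;
        # keep the old center if a cluster empties out
        for j in range(k):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers[labels]

# Standard sRGB (D65) linear RGB -> CIE-XYZ matrix, so the same
# clustering can be compared across a machine-based and a
# perception-linked colorspace.
RGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                       [0.2126, 0.7152, 0.0722],
                       [0.0193, 0.1192, 0.9505]])

# toy "image": six linear-RGB pixels, quantized to k=2 colors
img = np.array([[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.8, 0.0, 0.1],
                [0.0, 0.0, 1.0], [0.1, 0.0, 0.9], [0.0, 0.1, 0.8]])
q_rgb = kmeans_quantize(img, k=2)
q_xyz = kmeans_quantize(img @ RGB_TO_XYZ.T, k=2)
print(len(np.unique(q_rgb, axis=0)))  # number of distinct colors left (at most k)
```

In the paper's setting the clustering is run on all pixels of each of the 148 images, and the quantized result is mapped back for VIF scoring; the sketch above only shows the core quantization step in two of the colorspaces discussed.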
Related papers
- HVI: A New Color Space for Low-light Image Enhancement [58.8280819306909]
We propose a new color space for Low-Light Image Enhancement (LLIE) based on Horizontal/Vertical-Intensity (HVI). HVI is defined by polarized HS maps and learnable intensity, with the latter compressing the low-light regions to remove black artifacts. To fully leverage the chromatic and intensity information, a novel Color and Intensity Decoupling Network (CIDNet) is introduced.
arXiv Detail & Related papers (2025-02-27T16:59:51Z)
- Training Neural Networks on RAW and HDR Images for Restoration Tasks [53.84872583527721]
We study how neural networks should be trained for tasks on RAW and HDR images in linear color spaces. Our results indicate that neural networks train significantly better on HDR and RAW images represented in color spaces. This small change to the training strategy can bring a very substantial gain in performance, between 2 and 9 dB.
arXiv Detail & Related papers (2023-12-06T17:47:16Z)
- Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer [62.75343115345667]
We propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining machine recognition on the quantised images.
We observe the consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our colour quantisation method also offers an efficient quantisation method that effectively compresses the image storage.
arXiv Detail & Related papers (2022-12-07T03:39:18Z)
- Learning to Structure an Image with Few Colors and Beyond [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure an image in limited color spaces by minimizing the classification loss.
We introduce ColorCNN+, which supports multiple color space size configurations, and addresses the previous issues of poor recognition accuracy and undesirable visual fidelity under large color spaces.
For potential applications, we show that ColorCNNs can be used as image compression methods for network recognition.
arXiv Detail & Related papers (2022-08-17T17:59:15Z)
- Deep Metric Color Embeddings for Splicing Localization in Severely Degraded Images [10.091921099426294]
We explore an alternative approach to splicing detection, which is potentially better suited for images in-the-wild.
We learn a deep metric space that is on one hand sensitive to illumination color and camera white-point estimation, but on the other hand insensitive to variations in object color.
In our evaluation, we show that the proposed embedding space outperforms the state of the art on images that have been subject to strong compression and downsampling.
arXiv Detail & Related papers (2022-06-21T21:28:40Z)
- Influence of Color Spaces for Deep Learning Image Colorization [2.3705923859070217]
Existing colorization methods rely on different color spaces: RGB, YUV, Lab, etc.
In this chapter, we aim to study their influence on the results obtained by training a deep neural network.
We compare the results obtained with the same deep neural network architecture with RGB, YUV and Lab color spaces.
arXiv Detail & Related papers (2022-04-06T14:14:07Z)
- The Utility of Decorrelating Colour Spaces in Vector Quantised Variational Autoencoders [1.7792264784100689]
We propose colour space conversion to encourage a network to learn structured representations.
We trained several instances of VQ-VAE whose input is an image in one colour space, and its output in another.
arXiv Detail & Related papers (2020-09-30T07:44:01Z)
- Probabilistic Color Constancy [88.85103410035929]
We define a framework for estimating the illumination of a scene by weighting the contribution of different image regions.
The proposed method achieves competitive performance, compared to the state of the art, on the INTEL-TAU dataset.
arXiv Detail & Related papers (2020-05-06T11:03:05Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.