Color Universal Design Neural Network for the Color Vision Deficiencies
- URL: http://arxiv.org/abs/2502.08671v1
- Date: Wed, 12 Feb 2025 01:53:15 GMT
- Authors: Sunyong Seo, Jinho Park
- Abstract summary: We propose a color universal design network, called CUD-Net, that generates images that are visually understandable by individuals with color deficiency.
CUD-Net is a convolutional deep neural network that can preserve color and distinguish colors for input images.
Our approach is able to produce high-quality CUD images that maintain color and contrast stability.
- Abstract: Information conveyed by images should be visually understandable by anyone, including those with color deficiency. However, such information may become unrecognizable when a color that appears distorted to viewers with color deficiency lies adjacent to an object of similar perceived color. The aim of this paper is to propose a color universal design network, called CUD-Net, that generates images that are visually understandable by individuals with color deficiency. CUD-Net is a convolutional deep neural network that preserves and distinguishes colors in input images by regressing the node points of a piecewise linear function and applying an image-specific filter. To generate CUD images for color deficiencies, we follow a four-step process. First, we refine the CUD dataset based on criteria established by color experts. Second, we expand the input image information through pre-processing specialized for color-deficient vision. Third, we employ a multi-modality fusion architecture to combine features and process the expanded images. Finally, we propose a conjugate loss function based on the composition of the predicted image, to address the one-to-many problem that arises from the dataset. Our approach produces high-quality CUD images that maintain color and contrast stability. The code for CUD-Net is available on the GitHub repository.
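The core filtering step described in the abstract, regressing the node points of a piecewise linear function and applying it per channel, can be illustrated with a minimal sketch. Here the node points are fixed by hand rather than regressed by a network, and the function name is hypothetical; this is only an illustration of the filter family, not CUD-Net's implementation.

```python
import numpy as np

def apply_piecewise_linear_filter(image, nodes_x, nodes_y):
    """Apply a per-channel piecewise linear tone mapping defined by
    node points (a hand-crafted stand-in for the nodes CUD-Net would
    regress per image).

    image:   float array in [0, 1], shape (H, W, 3)
    nodes_x: increasing knot positions in [0, 1], shape (K,)
    nodes_y: per-channel knot values, shape (3, K)
    """
    out = np.empty_like(image)
    for c in range(3):
        # np.interp evaluates the piecewise linear function at each pixel
        out[..., c] = np.interp(image[..., c], nodes_x, nodes_y[c])
    return out

# Example: identity mapping on R and G, a midtone-darkening curve on B
img = np.random.rand(4, 4, 3)
xs = np.array([0.0, 0.5, 1.0])
ys = np.array([[0.0, 0.5, 1.0],   # R: identity
               [0.0, 0.5, 1.0],   # G: identity
               [0.0, 0.3, 1.0]])  # B: darken midtones
result = apply_piecewise_linear_filter(img, xs, ys)
```

Because each filter is a small set of node points, a network can predict a different curve for every image while keeping the mapping monotone and easy to regularize.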
Related papers
- Color-Oriented Redundancy Reduction in Dataset Distillation [35.83163170289415]
We propose a framework that minimizes color redundancy at the individual image and overall dataset levels.
At the image level, we employ a palette network, a specialized neural network, to dynamically allocate colors from a reduced color space to each pixel.
A comprehensive performance study is conducted, demonstrating the superior performance of our proposed color-aware DD compared to existing DD methods.
arXiv Detail & Related papers (2024-11-18T06:48:11Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
Low-Light Image Enhancement (LLIE) aims to restore the details and visual information of corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI)
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
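The idea of decoupling brightness from color with a trainable parameter can be sketched roughly as follows. This is only a schematic of the decoupling concept, with a hypothetical function name and a simplified chroma projection; the actual HVI transform in the paper differs in its details.

```python
import numpy as np

def rgb_to_intensity_chroma(rgb, k=1.0):
    """Rough sketch of decoupling brightness from color, in the spirit
    of a trainable HVI-style color space (the real transform differs).

    rgb: float array in [0, 1], shape (..., 3)
    k:   a trainable parameter controlling how chroma collapses in
         dark regions
    """
    intensity = rgb.max(axis=-1)                  # brightness channel
    # Project RGB onto two chroma axes orthogonal to the gray diagonal
    h = (rgb[..., 0] - rgb[..., 1]) / np.sqrt(2)
    v = (rgb[..., 0] + rgb[..., 1] - 2 * rgb[..., 2]) / np.sqrt(6)
    # Scale chroma by a function of intensity so dark pixels collapse
    # toward gray, stabilizing enhancement in low light
    scale = intensity ** k
    return intensity, h * scale, v * scale
```

Because brightness lives in its own channel, an enhancement network can amplify intensity without simultaneously amplifying color noise.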
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle when the training data is imbalanced across color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts.
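The equivariance property behind such a layer can be demonstrated with a toy sketch: one spatial kernel shared across cyclic RGB channel shifts, a crude stand-in for the hue rotations CEConvs actually use. The function name and shapes here are illustrative, not the paper's API.

```python
import numpy as np

def color_equivariant_response(image, kernel):
    """Toy color-equivariant layer: apply one kernel to every cyclic
    RGB channel shift of the input, producing one response per shift.

    image:  (H, W, 3) float array
    kernel: (kh, kw, 3) float array
    Returns (3, H-kh+1, W-kw+1): one response map per channel shift.
    """
    kh, kw, _ = kernel.shape
    H, W, _ = image.shape
    out = np.zeros((3, H - kh + 1, W - kw + 1))
    for g in range(3):
        shifted = np.roll(image, shift=g, axis=-1)  # cyclic color shift
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[g, i, j] = np.sum(shifted[i:i+kh, j:j+kw] * kernel)
    return out
```

Shifting the input colors permutes the output maps rather than changing them, so shape features are shared across the color group.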
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- DDColor: Towards Photo-Realistic Image Colorization via Dual Decoders [19.560271615736212]
DDColor is an end-to-end method with dual decoders for image colorization.
Our approach includes a pixel decoder and a query-based color decoder.
Our two decoders work together to establish correlations between color and multi-scale semantic representations.
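The query-based color decoder can be sketched as a single cross-attention step in which learnable color queries attend over image features. The shapes and function name here are hypothetical; DDColor's actual decoder stacks several such layers with multi-scale features.

```python
import numpy as np

def color_query_attention(queries, features):
    """Minimal sketch of one query-based color decoder step.

    queries:  (Q, d) learnable color query embeddings
    features: (N, d) flattened image features
    Returns:  (Q, d) color embeddings refined by cross-attention
    """
    d = queries.shape[1]
    scores = queries @ features.T / np.sqrt(d)          # (Q, N)
    # Softmax over the feature axis
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ features                                  # (Q, d)
```

Each query thus pools the semantic features most relevant to one color, which is how correlations between color and multi-scale semantics can be established.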
arXiv Detail & Related papers (2022-12-22T11:17:57Z)
- Learning to Structure an Image with Few Colors and Beyond [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure an image in limited color spaces by minimizing the classification loss.
We introduce ColorCNN+, which supports multiple color space size configurations, and addresses the previous issues of poor recognition accuracy and undesirable visual fidelity under large color spaces.
For potential applications, we show that ColorCNNs can be used as image compression methods for network recognition.
arXiv Detail & Related papers (2022-08-17T17:59:15Z)
- ParaColorizer: Realistic Image Colorization using Parallel Generative Networks [1.7778609937758327]
Grayscale image colorization is a fascinating application of AI for information restoration.
We present a parallel GAN-based colorization framework.
We show the shortcomings of the non-perceptual evaluation metrics commonly used to assess multi-modal problems.
arXiv Detail & Related papers (2022-08-17T13:49:44Z)
- Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we mimic this human-like process by letting our network first learn to understand the photo and then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z)
- Instance-aware Image Colorization [51.12040118366072]
In this paper, we propose a method for achieving instance-aware colorization.
Our network architecture leverages an off-the-shelf object detector to obtain cropped object images.
We use a similar network to extract the full-image features and apply a fusion module to predict the final colors.
arXiv Detail & Related papers (2020-05-21T17:59:23Z)
- Learning to Structure an Image with Few Colors [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure the images from the classification loss in an end-to-end manner.
With only a 1-bit color space (i.e., two colors), the proposed network achieves 82.1% top-1 accuracy on the CIFAR10 dataset.
For applications, when encoded with PNG, the proposed color quantization shows superiority over other image compression methods in the extremely low bit-rate regime.
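The notion of structuring an image with a 1-bit color space (two colors) can be illustrated with plain k-means clustering over pixels. This is not ColorCNN's learned, classification-driven assignment, only a baseline sketch of what a two-color quantization looks like; the function name is hypothetical.

```python
import numpy as np

def quantize_two_colors(image, iters=10, seed=0):
    """Quantize an image to a 1-bit color space (two colors) with
    plain k-means (k=2) over pixels.

    image: (H, W, 3) float array
    Returns the quantized image and the 2-color palette.
    """
    H, W, _ = image.shape
    pixels = image.reshape(-1, 3)
    rng = np.random.default_rng(seed)
    # Initialize the palette from two random pixels
    centers = pixels[rng.choice(len(pixels), 2, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest palette color
        d = ((pixels[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Update each palette color to the mean of its pixels
        for k in range(2):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return centers[labels].reshape(H, W, 3), centers
```

Such a two-color image compresses extremely well (e.g. as PNG), which is why the quantized representation is interesting in the low bit-rate regime; ColorCNN's contribution is choosing the assignment so that a downstream classifier still recognizes the image.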
arXiv Detail & Related papers (2020-03-17T17:56:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.