Generation Of Colors using Bidirectional Long Short Term Memory Networks
- URL: http://arxiv.org/abs/2311.06542v3
- Date: Sun, 31 Dec 2023 15:21:23 GMT
- Title: Generation Of Colors using Bidirectional Long Short Term Memory Networks
- Authors: A. Sinha
- Abstract summary: Human vision can distinguish between a vast spectrum of colours, estimated at between 2 and 7 million discernible shades.
This research endeavors to bridge the gap between our visual perception of countless shades and our ability to articulate and name them accurately.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human vision can distinguish between a vast spectrum of colours, estimated to
be between 2 and 7 million discernible shades. However, this impressive range
does not inherently imply that all these colours have been precisely named and
described within our lexicon. We often associate colours with familiar objects
and concepts in our daily lives. This research endeavors to bridge the gap
between our visual perception of countless shades and our ability to articulate
and name them accurately. A novel model has been developed to achieve this
goal, leveraging Bidirectional Long Short-Term Memory (BiLSTM) networks with
Active learning. This model operates on a proprietary dataset meticulously
curated for this study. The primary objective of this research is to create a
versatile tool for categorizing and naming previously unnamed colours or
identifying intermediate shades that elude traditional colour terminology. The
findings underscore the potential of this innovative approach in
revolutionizing our understanding of colour perception and language. Through
rigorous experimentation and analysis, this study illuminates a promising
avenue for Natural Language Processing (NLP) applications in diverse
industries. By facilitating the exploration of the vast colour spectrum, the
potential applications of NLP are extended beyond conventional boundaries.
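The abstract describes a BiLSTM-based model for naming colours, but neither the dataset nor the architecture is reproduced on this page. The following is therefore only a minimal sketch of the bidirectional-LSTM mechanism the paper relies on; the hidden size, random parameters, and the treatment of the three RGB channels as a length-3 input sequence are all illustrative assumptions, not the authors' design:

```python
# Minimal sketch of a bidirectional LSTM forward pass over an RGB input,
# illustrating (not reproducing) the BiLSTM mechanism named in the abstract.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates stacked as [input, forget, cell, output]."""
    z = W @ x + U @ h + b
    H = h.size
    i = sigmoid(z[:H])
    f = sigmoid(z[H:2 * H])
    g = np.tanh(z[2 * H:3 * H])
    o = sigmoid(z[3 * H:])
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def bilstm(seq, params_fwd, params_bwd, hidden=4):
    """Run the sequence forward and backward, concatenate final states."""
    def run(xs, params):
        h = np.zeros(hidden)
        c = np.zeros(hidden)
        for x in xs:
            h, c = lstm_step(x, h, c, *params)
        return h
    h_f = run(seq, params_fwd)          # left-to-right pass
    h_b = run(seq[::-1], params_bwd)    # right-to-left pass
    return np.concatenate([h_f, h_b])   # shape (2 * hidden,)

rng = np.random.default_rng(0)

def make_params(inp, hidden):
    # Small random weights; a real model would learn these.
    return (rng.standard_normal((4 * hidden, inp)) * 0.1,
            rng.standard_normal((4 * hidden, hidden)) * 0.1,
            np.zeros(4 * hidden))

# Treat the three normalized RGB channels as a length-3 sequence of scalars.
rgb = [np.array([0.8]), np.array([0.2]), np.array([0.4])]
features = bilstm(rgb, make_params(1, 4), make_params(1, 4))
print(features.shape)  # (8,) -- fed to a name-prediction head in a full model
```

In a full model, this concatenated feature vector would feed a classification or sequence-decoding head that emits a colour name; the active-learning loop mentioned in the abstract would then select uncertain colours for human labelling.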
Related papers
- Color Names in Vision-Language Models [48.847573209643265]
We present the first systematic evaluation of color naming capabilities across vision-language models (VLMs).
Our results show that while VLMs achieve high accuracy on colors from classical studies, performance drops significantly on expanded, non-prototypical color sets.
We identify 21 common color terms that consistently emerge across all models, revealing two distinct approaches.
arXiv Detail & Related papers (2025-09-26T16:04:18Z)
- Color as the Impetus: Transforming Few-Shot Learner [18.73626982790281]
Humans possess innate meta-learning capabilities, partly attributable to their exceptional color perception.
We propose the ColorSense Learner, a bio-inspired meta-learning framework that capitalizes on inter-channel feature extraction and interactive learning.
arXiv Detail & Related papers (2025-07-29T18:09:16Z)
- Color in Visual-Language Models: CLIP deficiencies [1.0159205678719043]
This work explores how color is encoded in CLIP (Contrastive Language-Image Pre-training), currently the most influential visual-language model (VLM) in Artificial Intelligence.
We come across two main deficiencies: (a) a clear bias on achromatic stimuli that are poorly related to the color concept, and (b) the tendency to prioritize text over other visual information.
arXiv Detail & Related papers (2025-02-06T19:38:12Z)
- ColorPeel: Color Prompt Learning with Diffusion Models via Color and Shape Disentanglement [20.45850285936787]
We propose to learn specific color prompts tailored to user-selected colors.
Our method, denoted as ColorPeel, successfully assists T2I models in peeling off the novel color prompts.
Our findings represent a significant step towards improving the precision and versatility of T2I models.
arXiv Detail & Related papers (2024-07-09T19:26:34Z)
- Perceptual Structure in the Absence of Grounding for LLMs: The Impact of Abstractedness and Subjectivity in Color Language [2.6094835036012864]
We show that there is considerable alignment between a defined color space and the feature space defined by a language model.
Our results show that while color space alignment holds for monolexemic, highly pragmatic color descriptions, this alignment drops considerably in the presence of examples that exhibit elements of real linguistic usage.
arXiv Detail & Related papers (2023-11-22T02:12:36Z)
- Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer [62.75343115345667]
We propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining machine recognition on the quantised images.
We observe the consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our colour quantisation method also efficiently compresses image storage.
arXiv Detail & Related papers (2022-12-07T03:39:18Z)
- Exploration of the Usage of Color Terms by Color-blind Participants in Online Discussion Platforms [4.445130093341008]
We show that red-green color-blind speakers use the "red" and "green" color terms in less predictable contexts.
These findings shed some new and interesting light on the role of sensory experience on our linguistic system.
arXiv Detail & Related papers (2022-10-21T12:11:10Z)
- LAB-Net: LAB Color-Space Oriented Lightweight Network for Shadow Removal [82.15476792337529]
We present a novel lightweight deep neural network that processes shadow images in the LAB color space.
The proposed network, termed "LAB-Net", is motivated by the following three observations.
Experimental results show that our LAB-Net outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-08-27T15:34:15Z)
- Can Language Models Encode Perceptual Structure Without Grounding? A Case Study in Color [18.573415435334105]
We employ a dataset of monolexemic color terms and color chips represented in CIELAB, a color space with a perceptually meaningful distance metric.
Using two methods of evaluating the structural alignment of colors in this space with text-derived color term representations, we find significant correspondence.
We find that warmer colors are, on average, better aligned to the perceptual color space than cooler ones.
arXiv Detail & Related papers (2021-09-13T17:09:40Z)
- Towards Vivid and Diverse Image Colorization with Generative Color Prior [17.087464490162073]
Recent deep-learning-based methods could automatically colorize images at a low cost.
We aim at recovering vivid colors by leveraging the rich and diverse color priors encapsulated in a pretrained Generative Adversarial Network (GAN).
Thanks to the powerful generative color prior and delicate designs, our method could produce vivid colors with a single forward pass.
arXiv Detail & Related papers (2021-08-19T17:49:21Z)
- Assessing The Importance Of Colours For CNNs In Object Recognition [70.70151719764021]
Convolutional neural networks (CNNs) have been shown to exhibit conflicting properties.
We demonstrate that CNNs often rely heavily on colour information while making a prediction.
We evaluate a model trained with congruent images on congruent, greyscale, and incongruent images.
arXiv Detail & Related papers (2020-12-12T22:55:06Z)
- CoRe: Color Regression for Multicolor Fashion Garments [80.57724826629176]
In this paper, we handle color detection as a regression problem to predict the exact RGB values.
We include a second regression stage for refinement in our newly proposed architecture.
This architecture is modular and easily expanded to detect the RGBs of all colors in a multicolor garment.
arXiv Detail & Related papers (2020-10-06T16:12:30Z)
- Semantic-driven Colorization [78.88814849391352]
Recent colorization works implicitly predict the semantic information while learning to colorize black-and-white images.
In this study, we simulate this human-like behaviour by letting our network first learn to understand the photo, then colorize it.
arXiv Detail & Related papers (2020-06-13T08:13:30Z)
- Disentangled Non-Local Neural Networks [68.92293183542131]
We study the non-local block in depth, where we find that its attention can be split into two terms.
We present the disentangled non-local block, where the two terms are decoupled to facilitate learning for both terms.
arXiv Detail & Related papers (2020-06-11T17:59:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.