Color as the Impetus: Transforming Few-Shot Learner
- URL: http://arxiv.org/abs/2507.22136v2
- Date: Thu, 31 Jul 2025 13:03:54 GMT
- Title: Color as the Impetus: Transforming Few-Shot Learner
- Authors: Chaofei Qi, Zhitai Liu, Jianbin Qiu
- Abstract summary: Humans possess innate meta-learning capabilities, partly attributable to their exceptional color perception. We propose the ColorSense Learner, a bio-inspired meta-learning framework that capitalizes on inter-channel feature extraction and interactive learning.
- Score: 18.73626982790281
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans possess innate meta-learning capabilities, partly attributable to their exceptional color perception. In this paper, we pioneer an innovative viewpoint on few-shot learning by simulating human color perception mechanisms. We propose the ColorSense Learner, a bio-inspired meta-learning framework that capitalizes on inter-channel feature extraction and interactive learning. By strategically emphasizing distinct color information across different channels, our approach effectively filters irrelevant features while capturing discriminative characteristics. Color information represents the most intuitive visual feature, yet conventional meta-learning methods have predominantly neglected this aspect, focusing instead on abstract feature differentiation across categories. Our framework bridges the gap via synergistic color-channel interactions, enabling better intra-class commonality extraction and larger inter-class differences. Furthermore, we introduce a meta-distiller based on knowledge distillation, ColorSense Distiller, which incorporates prior teacher knowledge to augment the student network's meta-learning capacity. We have conducted comprehensive coarse/fine-grained and cross-domain experiments on eleven few-shot benchmarks for validation. Numerous experiments reveal that our methods have extremely strong generalization ability, robustness, and transferability, and effortlessly handle few-shot classification from the perspective of color perception.
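The abstract describes the method only at a high level; a toy sketch can illustrate the channel-wise idea of extracting per-channel color features, letting channels interact, and classifying queries against class prototypes. All function names, the histogram descriptor, and the mixing step below are our own illustrative choices, not the authors' architecture:

```python
import numpy as np

def channel_descriptors(img, bins=8):
    """Per-channel colour histograms for an HxWx3 uint8 image.

    A stand-in for learned inter-channel feature extractors: each colour
    channel is described independently, so channel-specific cues are kept
    separate before any interaction step.
    """
    feats = []
    for c in range(3):
        hist, _ = np.histogram(img[..., c], bins=bins, range=(0, 256))
        feats.append(hist / hist.sum())
    return np.stack(feats)  # shape (3, bins)

def interact(feats):
    """Toy 'interactive learning' step: mix each channel's descriptor with
    the mean of the other two, then flatten into a single embedding."""
    mean_others = (feats.sum(axis=0, keepdims=True) - feats) / 2.0
    return np.concatenate([feats, mean_others], axis=1).ravel()

def few_shot_predict(support, labels, query):
    """Nearest class prototype in the embedding space (few-shot episode)."""
    embs = np.array([interact(channel_descriptors(x)) for x in support])
    protos = {y: embs[np.array(labels) == y].mean(axis=0) for y in set(labels)}
    q = interact(channel_descriptors(query))
    return min(protos, key=lambda y: np.linalg.norm(protos[y] - q))
```

With one red and one blue support image, a red-dominant query lands on the red prototype, since the channel histograms separate the classes before the interaction step.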
Related papers
- Teaching Humans Subtle Differences with DIFFusion [36.30462318766868]
We present a system that leverages generative models to automatically discover and visualize minimal discriminative features between categories. Our method generates counterfactual visualizations with subtle, targeted transformations between classes, performing well even in domains where data is sparse.
arXiv Detail & Related papers (2025-04-10T18:04:22Z) - Color in Visual-Language Models: CLIP deficiencies [1.0159205678719043]
This work explores how color is encoded in CLIP (Contrastive Language-Image Pre-training), currently the most influential VLM (Visual-Language Model) in Artificial Intelligence. We come across two main deficiencies: (a) a clear bias toward achromatic stimuli that are poorly related to the color concept, and (b) a tendency to prioritize text over other visual information.
arXiv Detail & Related papers (2025-02-06T19:38:12Z) - High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning [54.86882315023791]
We propose an innovative approach called High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning (HDAFL).
HDAFL utilizes multiple convolutional kernels to automatically learn discriminative regions highly correlated with attributes in images.
We also introduce a Transformer-based attribute discrimination encoder to enhance the discriminative capability among attributes.
arXiv Detail & Related papers (2024-04-07T13:17:47Z) - Visual Commonsense based Heterogeneous Graph Contrastive Learning [79.22206720896664]
We propose a heterogeneous graph contrastive learning method to better address visual reasoning tasks.
Our method is designed in a plug-and-play manner, so that it can be quickly and easily combined with a wide range of representative methods.
arXiv Detail & Related papers (2023-11-11T12:01:18Z) - The Influences of Color and Shape Features in Visual Contrastive Learning [0.0]
This paper investigates the influence of individual image features (e.g., color and shape) on model performance, which has remained ambiguous.
Experimental results show that compared with supervised representations, contrastive representations tend to cluster with objects of similar color.
arXiv Detail & Related papers (2023-01-29T15:10:14Z) - ColorSense: A Study on Color Vision in Machine Visual Recognition [57.916512479603064]
We collect 110,000 non-trivial human annotations of foreground and background color labels from visual recognition benchmarks. We validate the use of our datasets by demonstrating that the level of color discrimination has a dominating effect on the performance of machine perception models. Our findings suggest that object recognition tasks such as classification and localization are susceptible to color vision bias.
arXiv Detail & Related papers (2022-12-16T18:51:41Z) - Name Your Colour For the Task: Artificially Discover Colour Naming via Colour Quantisation Transformer [62.75343115345667]
We propose a novel colour quantisation transformer, CQFormer, that quantises colour space while maintaining machine recognition on the quantised images.
We observe the consistent evolution pattern between our artificial colour system and basic colour terms across human languages.
Our colour quantisation method also offers an efficient quantisation method that effectively compresses the image storage.
arXiv Detail & Related papers (2022-12-07T03:39:18Z) - A domain adaptive deep learning solution for scanpath prediction of paintings [66.46953851227454]
This paper focuses on the eye-movement analysis of viewers during the visual experience of a certain number of paintings.
We introduce a new approach to predicting human visual attention, which influences several human cognitive functions.
The proposed new architecture ingests images and returns scanpaths, a sequence of points featuring a high likelihood of catching viewers' attention.
arXiv Detail & Related papers (2022-09-22T22:27:08Z) - Assessing The Importance Of Colours For CNNs In Object Recognition [70.70151719764021]
Convolutional neural networks (CNNs) have been shown to exhibit conflicting properties.
We demonstrate that CNNs often rely heavily on colour information while making a prediction.
We evaluate a model trained with congruent images on congruent, greyscale, and incongruent images.
arXiv Detail & Related papers (2020-12-12T22:55:06Z) - Intriguing Properties of Contrastive Losses [12.953112189125411]
We study three intriguing properties of contrastive learning.
We study if instance-based contrastive learning can learn well on images with multiple objects present.
We show that, for contrastive learning, a few bits of easy-to-learn shared features can suppress, and even fully prevent, the learning of other sets of competing features.
arXiv Detail & Related papers (2020-11-05T13:19:48Z) - Understanding Brain Dynamics for Color Perception using Wearable EEG headband [0.46335240643629344]
We have designed a multiclass classification model to detect the primary colors from the features of raw EEG signals.
Our method employs spectral power features, statistical features as well as correlation features from the signal band power obtained from continuous Morlet wavelet transform.
Our proposed methodology gave the best overall accuracy of 80.6% for intra-subject classification.
arXiv Detail & Related papers (2020-08-17T05:25:16Z)
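The EEG entry above mentions band-power features obtained from a continuous Morlet wavelet transform. A minimal sketch of that feature, assuming a single-channel signal (the function name, `n_cycles` parameter, and normalisation are our illustrative choices, not the authors' exact pipeline):

```python
import numpy as np

def morlet_power(signal, fs, freq, n_cycles=6):
    """Mean band power at `freq` Hz via convolution with a complex
    Morlet wavelet (Gaussian-windowed complex sinusoid).

    A simplified stand-in for the continuous Morlet wavelet transform
    features described in the paper.
    """
    sigma = n_cycles / (2 * np.pi * freq)           # wavelet width in seconds
    t = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)  # wavelet time support
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma**2))
    wavelet /= np.abs(wavelet).sum()                # crude amplitude normalisation
    conv = np.convolve(signal, wavelet, mode="same")
    return np.mean(np.abs(conv) ** 2)               # average power over time
```

For a pure 10 Hz sine sampled at 256 Hz, the extracted power at 10 Hz exceeds the power at an off-target frequency such as 30 Hz, which is the property a band-power feature vector relies on.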
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.