Exploring Color Invariance through Image-Level Ensemble Learning
- URL: http://arxiv.org/abs/2401.10512v1
- Date: Fri, 19 Jan 2024 06:04:48 GMT
- Title: Exploring Color Invariance through Image-Level Ensemble Learning
- Authors: Yunpeng Gong and Jiaquan Li and Lifei Chen and Min Jiang
- Abstract summary: This study introduces a learning strategy named Random Color Erasing.
It selectively erases partial or complete color information in the training data without disrupting the original image structure.
This approach mitigates the risk of overfitting and enhances the model's ability to handle color variation.
- Score: 7.254270666779331
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the field of computer vision, the persistent presence of color bias,
resulting from fluctuations in real-world lighting and camera conditions,
presents a substantial challenge to the robustness of models. This issue is
particularly pronounced in complex wide-area surveillance scenarios, such as
person re-identification and industrial dust segmentation, where models often
experience a decline in performance due to overfitting on color information
during training, given the presence of environmental variations. Consequently,
there is a need to effectively adapt models to cope with the complexities of
camera conditions. To address this challenge, this study introduces a learning
strategy named Random Color Erasing, which draws inspiration from ensemble
learning. This strategy selectively erases partial or complete color
information in the training data without disrupting the original image
structure, thereby achieving a balanced weighting of color features and other
features within the neural network. This approach mitigates the risk of
overfitting and enhances the model's ability to handle color variation, thereby
improving its overall robustness. The approach we propose serves as an ensemble
learning strategy, characterized by robust interpretability. A comprehensive
analysis of this methodology is presented in this paper. Across various tasks
such as person re-identification and semantic segmentation, our approach
consistently improves strong baseline methods. Notably, in comparison to
existing methods that prioritize color robustness, our strategy significantly
enhances performance in cross-domain scenarios. The code is available at
https://github.com/layumi/Person_reID_baseline_pytorch/blob/master/random_erasing.py
or https://github.com/finger-monkey/Data-Augmentation.
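The abstract does not spell out the erasing operation itself; a minimal sketch of one plausible reading, in which "erasing color" means replacing a random patch (or occasionally the whole image) with its grayscale version so the image structure is untouched, might look like the following. All probabilities and patch-size bounds are illustrative assumptions, not the paper's settings.
```python
import random
import torch

class RandomColorErasing:
    """Hypothetical sketch of image-level color erasing: replace a random
    patch (partial erasure) or the whole image (complete erasure) with its
    grayscale rendering, leaving edges and texture intact."""

    def __init__(self, p=0.5, full_p=0.1, area=(0.2, 0.6)):
        self.p = p            # chance of partial (patch) color erasure
        self.full_p = full_p  # chance of complete color erasure
        self.area = area      # patch area range, as a fraction of the image

    def __call__(self, img):  # img: (3, H, W) float tensor
        # ITU-R BT.601 luminance weights give a grayscale version whose
        # structure matches the original image exactly.
        gray = 0.299 * img[0] + 0.587 * img[1] + 0.114 * img[2]
        gray3 = gray.unsqueeze(0).expand_as(img)
        if random.random() < self.full_p:
            return gray3.clone()                  # complete erasure
        if random.random() < self.p:
            _, h, w = img.shape
            side = random.uniform(*self.area) ** 0.5
            ph, pw = max(1, int(h * side)), max(1, int(w * side))
            top = random.randint(0, h - ph)
            left = random.randint(0, w - pw)
            out = img.clone()
            out[:, top:top + ph, left:left + pw] = \
                gray3[:, top:top + ph, left:left + pw]
            return out                            # partial erasure
        return img
```
Used as a standard transform, this composes with the usual augmentation pipeline, so grayscale and color views of the same training sample are both seen during training, which is where the ensemble-learning reading comes from.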
Related papers
- Robust Network Learning via Inverse Scale Variational Sparsification [55.64935887249435]
We introduce an inverse scale variational sparsification framework within a time-continuous inverse scale space formulation.
Unlike frequency-based methods, our approach removes noise by smoothing out small-scale features.
We show the efficacy of our approach through enhanced robustness against various noise types.
arXiv Detail & Related papers (2024-09-27T03:17:35Z)
- Reinforcing Pre-trained Models Using Counterfactual Images [54.26310919385808]
This paper proposes a novel framework to reinforce classification models using language-guided generated counterfactual images.
We identify model weaknesses by testing the model using the counterfactual image dataset.
We employ the counterfactual images as an augmented dataset to fine-tune and reinforce the classification model.
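A hedged sketch of the reinforcement step, assuming (as the summary suggests) that the counterfactual images simply join the original data for fine-tuning; the dataset objects, batch size, and optimizer settings here are placeholders:
```python
import torch
from torch.utils.data import ConcatDataset, DataLoader

def reinforce_with_counterfactuals(model, clean_ds, counterfactual_ds,
                                   epochs=3, lr=1e-4, device="cuda"):
    """Illustrative only: fine-tune a pre-trained classifier on its
    original data plus the counterfactual images that exposed its
    weaknesses. The paper's exact recipe may differ."""
    loader = DataLoader(ConcatDataset([clean_ds, counterfactual_ds]),
                        batch_size=64, shuffle=True)
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train().to(device)
    for _ in range(epochs):
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```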
arXiv Detail & Related papers (2024-06-19T08:07:14Z)
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle when training data is imbalanced across the color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts.
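The "shape feature sharing across the color spectrum" can be approximated crudely by applying one shared convolution to several discrete color transforms of the input; the sketch below uses cyclic RGB channel permutations as the transform, which is a simplification of the hue rotations CEConvs actually use:
```python
import torch
import torch.nn as nn

class HueShareConv(nn.Module):
    """Rough stand-in for a color equivariant layer: the same filters are
    applied to the three cyclic channel orders of the input and the
    responses are stacked along a new group dimension."""

    def __init__(self, out_channels, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv2d(3, out_channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):  # x: (B, 3, H, W)
        responses = [self.conv(x.roll(shifts=k, dims=1)) for k in range(3)]
        return torch.stack(responses, dim=1)  # (B, 3, C_out, H, W)
```
Pooling over the group dimension yields color invariance; keeping it yields equivariance.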
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- Consistency Regularisation in Varying Contexts and Feature Perturbations for Semi-Supervised Semantic Segmentation of Histology Images [14.005379068469361]
SSL models can be susceptible to changing contexts and feature perturbations, exhibiting poor generalisation due to the limited training data.
We present a consistency-based semi-supervised learning (SSL) approach that helps mitigate this challenge.
We show that cross-consistency training makes the encoder features invariant to different perturbations and improves the prediction confidence.
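As a sketch of what cross-consistency training can look like (feature noise stands in for the paper's varying contexts and perturbations, and the module names are placeholders):
```python
import torch
import torch.nn.functional as F

def cross_consistency_loss(encoder, main_dec, aux_decs, x_unlabeled):
    """Auxiliary decoders see perturbed encoder features and are pulled
    toward the main decoder's detached prediction on unlabeled images."""
    feats = encoder(x_unlabeled)
    with torch.no_grad():
        target = main_dec(feats).softmax(dim=1)        # pseudo-target
    loss = 0.0
    for dec in aux_decs:
        noisy = feats + 0.1 * torch.randn_like(feats)  # one simple perturbation
        loss = loss + F.mse_loss(dec(noisy).softmax(dim=1), target)
    return loss / len(aux_decs)
```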
arXiv Detail & Related papers (2023-01-30T18:21:57Z)
- ParaColorizer: Realistic Image Colorization using Parallel Generative Networks [1.7778609937758327]
Grayscale image colorization is a fascinating application of AI for information restoration.
We present a parallel GAN-based colorization framework.
We show the shortcomings of the non-perceptual evaluation metrics commonly used to assess multi-modal problems.
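The parallelism can be pictured as two generators working on the same grayscale input with a small fusion head merging their guesses; this sketch is an assumption about the architecture's shape, not the paper's actual modules:
```python
import torch
import torch.nn as nn

class ParallelColorizer(nn.Module):
    """Illustrative parallel-GAN colorizer skeleton: two generators
    produce independent colorizations that a 1x1 convolution fuses."""

    def __init__(self, gen_a: nn.Module, gen_b: nn.Module):
        super().__init__()
        self.gen_a, self.gen_b = gen_a, gen_b
        self.fuse = nn.Conv2d(6, 3, kernel_size=1)  # merge two RGB guesses

    def forward(self, gray):            # gray: (B, 1, H, W)
        ca, cb = self.gen_a(gray), self.gen_b(gray)  # each (B, 3, H, W)
        return self.fuse(torch.cat([ca, cb], dim=1))
```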
arXiv Detail & Related papers (2022-08-17T13:49:44Z)
- "Just Drive": Colour Bias Mitigation for Semantic Segmentation in the Context of Urban Driving [8.147652597876862]
Convolutional neural networks have been shown to rely on colour and texture rather than geometry.
In this paper, we attempt to alleviate biases encountered by semantic segmentation models in urban driving scenes, via an iteratively trained unlearning algorithm.
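A common primitive for this kind of adversarial unlearning is a gradient-reversal layer feeding an auxiliary colour classifier; the sketch below shows that primitive, though the paper's iterative procedure differs in its details:
```python
import torch
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negated (scaled) gradient in the
    backward pass: the colour head learns to predict colour, while the
    reversed gradients push the backbone to discard colour cues."""

    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lamb * grad, None

def colour_unlearning_loss(features, colour_head, colour_labels, lamb=1.0):
    reversed_feats = GradReverse.apply(features, lamb)
    return F.cross_entropy(colour_head(reversed_feats), colour_labels)
```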
arXiv Detail & Related papers (2021-12-02T10:56:19Z)
- Digging Into Self-Supervised Learning of Feature Descriptors [14.47046413243358]
We propose a set of improvements that combined lead to powerful feature descriptors.
We show that increasing the search space from in-pair to in-batch for hard negative mining brings consistent improvement.
We demonstrate that a combination of synthetic homography transformation, color augmentation, and photorealistic image stylization produces useful representations.
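In-batch hard negative mining is concrete enough to sketch directly: every non-matching descriptor in the batch is a candidate negative, and the closest one enters a triplet margin loss (the margin value is an illustrative choice):
```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(anchors, positives, margin=0.2):
    """anchors, positives: (B, D) descriptor batches where row i of each
    is a matching pair; the hardest in-batch negative is mined per row."""
    a = F.normalize(anchors, dim=1)
    p = F.normalize(positives, dim=1)
    dists = torch.cdist(a, p)           # (B, B) pairwise L2 distances
    pos = dists.diag()                  # distances of the true pairs
    # Mask the diagonal, then the closest remaining descriptor in the
    # batch is the hard negative for each anchor.
    mask = 1e9 * torch.eye(len(a), device=a.device)
    neg = (dists + mask).min(dim=1).values
    return F.relu(pos - neg + margin).mean()
```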
arXiv Detail & Related papers (2021-10-10T12:22:44Z)
- Style Curriculum Learning for Robust Medical Image Segmentation [62.02435329931057]
Deep segmentation models often degrade due to distribution shifts in image intensities between the training and test data sets.
We propose a novel framework to ensure robust segmentation in the presence of such distribution shifts.
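One way to read "style curriculum" as code, with a deliberately simple difficulty proxy (distance of an image's intensity statistics from the dataset mean) standing in for the paper's style-based measure:
```python
import torch

def curriculum_order(images):
    """Rank training images easy-to-hard by how far their mean intensity
    sits from the dataset average; a placeholder for a real style metric."""
    stats = torch.stack([img.mean() for img in images])
    difficulty = (stats - stats.mean()).abs()
    return torch.argsort(difficulty)  # indices, easiest first
```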
arXiv Detail & Related papers (2021-08-01T08:56:24Z)
- Collaboration among Image and Object Level Features for Image Colourisation [25.60139324272782]
Image colourisation is an ill-posed problem, with multiple correct solutions which depend on the context and object instances present in the input datum.
Previous approaches attacked the problem either by requiring intense user interactions or by exploiting the ability of convolutional neural networks (CNNs) in learning image level (context) features.
We propose a single network, named UCapsNet, that separates image-level features obtained through convolutions from object-level features captured by means of capsules.
Then, through skip connections over different layers, we enforce collaboration between these disentangled factors to produce high-quality and plausible image colourisation.
arXiv Detail & Related papers (2021-01-19T11:48:12Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
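Perturbing feature statistics rather than pixels can be sketched as re-normalizing a feature map and re-applying shifted statistics; how the shifts delta_mean and delta_std are found (an adversarial inner maximization in AdvBN) is omitted here:
```python
import torch

def perturb_feature_stats(feats, delta_mean, delta_std):
    """feats: (B, C, H, W); delta_mean, delta_std: offsets broadcastable
    to (B, C, 1), e.g. per-channel worst-case shifts. Illustrative only."""
    b, c = feats.shape[:2]
    flat = feats.reshape(b, c, -1)
    mean = flat.mean(dim=2, keepdim=True)
    std = flat.std(dim=2, keepdim=True) + 1e-5
    normed = (flat - mean) / std
    # Re-apply perturbed statistics: a worst-case "style" shift.
    out = normed * (std * (1 + delta_std)) + (mean + delta_mean)
    return out.reshape_as(feats)
```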
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
- Learning perturbation sets for robust machine learning [97.6757418136662]
We use a conditional generator that defines the perturbation set over a constrained region of the latent space.
We measure the quality of our learned perturbation sets both quantitatively and qualitatively.
We leverage our learned perturbation sets to train models which are empirically and certifiably robust to adversarial image corruptions and adversarial lighting variations.
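A sketch of how a learned perturbation set is used, assuming a generator with signature generator(x, z) and an L2 ball as the constrained latent region; robust training would maximize the task loss over z rather than sampling it as done here:
```python
import torch

@torch.no_grad()
def sample_perturbed(generator, x, latent_dim=64, radius=1.0, n=8):
    """Draw n perturbed versions of x from {G(x, z) : ||z||_2 <= radius}.
    generator(x, z) is an assumed interface, not the paper's exact API."""
    outs = []
    for _ in range(n):
        z = torch.randn(x.size(0), latent_dim, device=x.device)
        # Project z onto the constrained region (an L2 ball).
        z = z * (radius / z.norm(dim=1, keepdim=True).clamp(min=radius))
        outs.append(generator(x, z))
    return torch.stack(outs, dim=1)  # (B, n, ...) perturbed copies
```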
arXiv Detail & Related papers (2020-07-16T16:39:54Z)