"Just Drive": Colour Bias Mitigation for Semantic Segmentation in the
Context of Urban Driving
- URL: http://arxiv.org/abs/2112.01121v1
- Date: Thu, 2 Dec 2021 10:56:19 GMT
- Authors: Jack Stelling and Amir Atapour-Abarghouei
- Abstract summary: Convolutional neural networks have been shown to rely on colour and texture rather than geometry.
In this paper, we attempt to alleviate biases encountered by semantic segmentation models in urban driving scenes, via an iteratively trained unlearning algorithm.
- Score: 8.147652597876862
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Biases can filter into AI technology without our knowledge. Oftentimes,
seminal deep learning networks champion increased accuracy above all else. In
this paper, we attempt to alleviate biases encountered by semantic segmentation
models in urban driving scenes, via an iteratively trained unlearning
algorithm. Convolutional neural networks have been shown to rely on colour and
texture rather than geometry. This raises issues when safety-critical
applications, such as self-driving cars, encounter images with covariate shift
at test time - induced by variations such as lighting changes or seasonality.
Conceptual proof of bias unlearning has been shown on simple datasets such as
MNIST. However, the strategy has never been applied to the safety-critical
domain of pixel-wise semantic segmentation of highly variable training data -
such as urban scenes. Trained models for both the baseline and bias unlearning
scheme have been tested for performance on colour-manipulated validation sets
showing a disparity of up to 85.50% in mIoU relative to the original RGB images,
confirming that segmentation networks depend strongly on the colour information
in the training data to make their classifications. The bias unlearning scheme
improves handling of this covariate shift by up to 61% in the best observed
case, and performs consistently better at classifying the "human" and "vehicle"
classes than the baseline model.
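The evaluation described above, testing a trained model on colour-manipulated copies of the validation set and comparing mIoU against the original RGB images, can be sketched as follows. This is a minimal illustration, not the authors' code; the channel-swap manipulation and the mIoU computation are common simple choices, assumed here for concreteness:

```python
import numpy as np

def swap_channels(image, order=(2, 1, 0)):
    """Create a colour-manipulated copy of an HxWx3 image by permuting
    its RGB channels (one simple way to induce colour covariate shift)."""
    return image[..., list(order)]

def miou(pred, target, num_classes):
    """Mean intersection-over-union between two integer label maps,
    averaged over classes that appear in prediction or ground truth."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

Running the same model on the original and swapped images and comparing the two mIoU values quantifies how much the network leans on colour rather than geometry.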
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Exploring Color Invariance through Image-Level Ensemble Learning [7.254270666779331]
This study introduces a learning strategy named Random Color Erasing.
It selectively erases partial or complete color information in the training data without disrupting the original image structure.
This approach mitigates the risk of overfitting and enhances the model's ability to handle color variation.
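The erasing strategy described above can be sketched as an augmentation that replaces colour with greyscale intensity, either over the whole image or over a random rectangle, while leaving spatial structure intact. The rectangle-sampling details below are assumptions for illustration, not the paper's exact recipe:

```python
import numpy as np

def random_color_erase(image, rng, full=False):
    """Erase colour information (replace with greyscale intensity) in
    either the whole HxWx3 image or one random axis-aligned rectangle.
    Spatial structure is untouched; only chroma is removed."""
    h, w, _ = image.shape
    grey = image.mean(axis=2, keepdims=True)  # per-pixel intensity
    out = image.astype(float).copy()
    if full:
        return np.repeat(grey, 3, axis=2)     # fully greyscale copy
    # sample a non-empty rectangle and drop its colour
    y0, x0 = rng.integers(0, h), rng.integers(0, w)
    y1, x1 = rng.integers(y0 + 1, h + 1), rng.integers(x0 + 1, w + 1)
    out[y0:y1, x0:x1, :] = grey[y0:y1, x0:x1, :]
    return out
```

Applied with some probability per training image, this forces the model to rely less on colour cues.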
arXiv Detail & Related papers (2024-01-19T06:04:48Z)
- Lidar Annotation Is All You Need [0.0]
This paper aims to improve the efficiency of image segmentation using a convolutional neural network in a multi-sensor setup.
The key innovation of our approach is the masked loss, addressing sparse ground-truth masks from point clouds.
Experimental validation of the approach on benchmark datasets shows comparable performance to a high-quality image segmentation model.
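The masked loss idea described above, averaging the per-pixel loss only over pixels that actually received a projected lidar label, can be sketched as follows. The sentinel value and exact normalisation are assumptions; the paper's formulation may differ:

```python
import numpy as np

def masked_pixel_loss(logits, labels, ignore_index=-1):
    """Cross-entropy averaged only over labelled pixels. Pixels with
    labels == ignore_index (no projected lidar point) contribute nothing,
    so sparse ground-truth masks do not penalise unlabelled regions."""
    h, w, c = logits.shape
    flat_logits = logits.reshape(-1, c)
    flat_labels = labels.reshape(-1)
    valid = flat_labels != ignore_index
    if not valid.any():
        return 0.0
    # numerically stable log-softmax over the valid pixels
    z = flat_logits[valid]
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(valid.sum()), flat_labels[valid]]
    return float(nll.mean())
```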
arXiv Detail & Related papers (2023-11-08T15:55:18Z) - Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with attributes that are unrelated to the downstream task.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z) - On the ability of CNNs to extract color invariant intensity based
features for image classification [4.297070083645049]
Convolutional neural networks (CNNs) have demonstrated remarkable success in vision-related tasks.
Recent studies suggest that CNNs exhibit a bias toward texture instead of object shape in image classification tasks.
This paper investigates the ability of CNNs to adapt to different color distributions in an image while maintaining context and background.
arXiv Detail & Related papers (2023-07-13T00:36:55Z) - Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experiment results show the high generalization performance of our method on testing data that are composed of unseen contexts.
arXiv Detail & Related papers (2022-10-26T15:21:39Z) - Calibrating Class Activation Maps for Long-Tailed Visual Recognition [60.77124328049557]
We present two effective modifications of CNNs to improve network learning from long-tailed distribution.
First, we present a Class Activation Map Calibration (CAMC) module to improve the learning and prediction of network classifiers.
Second, we investigate the use of normalized classifiers for representation learning in long-tailed problems.
arXiv Detail & Related papers (2021-08-29T05:45:03Z) - Assessing The Importance Of Colours For CNNs In Object Recognition [70.70151719764021]
Convolutional neural networks (CNNs) have been shown to exhibit conflicting properties.
We demonstrate that CNNs often rely heavily on colour information while making a prediction.
We evaluate a model trained with congruent images on congruent, greyscale, and incongruent images.
arXiv Detail & Related papers (2020-12-12T22:55:06Z) - Increasing the Robustness of Semantic Segmentation Models with
Painting-by-Numbers [39.95214171175713]
We build upon an insight from image classification that output robustness can be improved by increasing the network bias towards object shapes.
Our basic idea is to alpha-blend a portion of the RGB training images with faked images, where each class-label is given a fixed, randomly chosen color.
We demonstrate the effectiveness of our training schema for DeepLabv3+ with various network backbones, MobileNet-V2, ResNets, and Xception, and evaluate it on the Cityscapes dataset.
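The alpha-blending schema described above can be sketched as follows: the label map is painted with a fixed, randomly chosen colour per class, and the result is blended into the RGB training image. The palette handling and value ranges are assumptions for illustration:

```python
import numpy as np

def paint_by_numbers(image, label_map, palette, alpha):
    """Blend an HxWx3 RGB image with a 'faked' image in which every
    pixel takes the fixed colour assigned to its class label.
    palette is a (num_classes, 3) array of per-class colours;
    alpha in [0, 1] controls how strongly class colours dominate."""
    fake = palette[label_map]                  # HxW labels -> HxWx3 colours
    return (1.0 - alpha) * image + alpha * fake
```

Training on such blends nudges the network to attend to shape (encoded by the painted regions) rather than the original textures and colours.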
arXiv Detail & Related papers (2020-10-12T07:42:39Z) - Data Augmentation and Clustering for Vehicle Make/Model Classification [0.0]
We present a way to exploit a training data set of vehicles released in different years and captured from different perspectives.
We also show the efficacy of clustering in enhancing make/model classification.
A deeper convolutional neural network based on the ResNet architecture has been designed and trained for vehicle make/model classification.
arXiv Detail & Related papers (2020-09-14T18:24:31Z) - Learning from Failure: Training Debiased Classifier from Biased
Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
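The paired-network scheme described above can be sketched at the level of its sample reweighting: examples the intentionally biased network fails on (high loss) are up-weighted when training the debiased network. The relative form below is the commonly cited variant; the paper's exact weighting may differ:

```python
import numpy as np

def relative_difficulty_weights(loss_biased, loss_debiased, eps=1e-8):
    """Per-sample weights for the debiased network: samples that are hard
    for the biased network (large loss_biased) get weights near 1, while
    samples it solves easily via the spurious cue get weights near 0."""
    lb = np.asarray(loss_biased, dtype=float)
    ld = np.asarray(loss_debiased, dtype=float)
    return lb / (lb + ld + eps)
```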
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.