Gone With the Bits: Revealing Racial Bias in Low-Rate Neural Compression for Facial Images
- URL: http://arxiv.org/abs/2505.02949v1
- Date: Mon, 05 May 2025 18:27:11 GMT
- Title: Gone With the Bits: Revealing Racial Bias in Low-Rate Neural Compression for Facial Images
- Authors: Tian Qiu, Arjun Nichani, Rasta Tadayontahmasebi, Haewon Jeong
- Abstract summary: We present a general, structured, scalable framework for evaluating bias in neural image compression models. We show that traditional distortion metrics are ineffective in capturing bias in neural compression models. We additionally show the bias can be attributed to compression model bias and classification model bias.
- Score: 5.952011729053457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural compression methods are gaining popularity due to their superior rate-distortion performance over traditional methods, even at extremely low bitrates below 0.1 bpp. As deep learning architectures, these models are prone to bias during the training process, potentially leading to unfair outcomes for individuals in different groups. In this paper, we present a general, structured, scalable framework for evaluating bias in neural image compression models. Using this framework, we investigate racial bias in neural compression algorithms by analyzing nine popular models and their variants. Through this investigation, we first demonstrate that traditional distortion metrics are ineffective in capturing bias in neural compression models. Next, we highlight that racial bias is present in all neural compression models and can be captured by examining facial phenotype degradation in image reconstructions. We then examine the relationship between bias and realism in the decoded images and demonstrate a trade-off across models. Finally, we show that utilizing a racially balanced training set can reduce bias but is not a sufficient bias mitigation strategy. We additionally show the bias can be attributed to compression model bias and classification model bias. We believe that this work is a first step towards evaluating and eliminating bias in neural image compression models.
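To make this style of measurement concrete, here is a minimal sketch of a phenotype-degradation bias evaluation: phenotype predictions on each original image are compared against predictions on its low-bitrate reconstruction, and preservation rates are aggregated per racial group. The `codec.compress_decompress` and `phenotype_classifier` interfaces are hypothetical stand-ins, not the paper's actual framework.

```python
# Sketch: per-group phenotype preservation under lossy neural compression.
# `codec` and `phenotype_classifier` are hypothetical stand-ins.

def phenotype_preservation(samples, codec, phenotype_classifier):
    """samples: iterable of (image, group_label) pairs.
    Returns the fraction of images per group whose predicted phenotype
    survives a round trip through the compressor."""
    hits, totals = {}, {}
    for image, group in samples:
        recon = codec.compress_decompress(image)  # encode + decode at low bpp
        same = phenotype_classifier(image) == phenotype_classifier(recon)
        hits[group] = hits.get(group, 0) + int(same)
        totals[group] = totals.get(group, 0) + 1
    return {g: hits[g] / totals[g] for g in totals}

def bias_gap(rates):
    """Gap between the best- and worst-preserved groups; a larger gap
    indicates more disparate degradation across groups."""
    return max(rates.values()) - min(rates.values())
```

Under this kind of measurement, a codec with near-equal PSNR across groups can still show a large gap, consistent with the abstract's claim that traditional distortion metrics fail to capture the bias.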
Related papers
- Understanding-informed Bias Mitigation for Fair CMR Segmentation [7.170614530699774]
We aim to investigate the impact of common bias mitigation methods to address bias between Black and White subjects in AI-based CMR segmentation models. Specifically, we use oversampling, importance reweighting, and Group DRO, as well as combinations of these techniques, to mitigate the ethnicity bias. We find that bias can be mitigated using oversampling, significantly improving performance for the underrepresented Black subjects whilst not significantly reducing the majority White subjects' performance.
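As a rough illustration of the oversampling strategy named above, the sketch below balances group counts by duplicating minority-group examples; the simple (example, group_label) data layout is an assumption, not the paper's pipeline.

```python
import random
from collections import defaultdict

def oversample_to_balance(samples):
    """samples: list of (example, group_label) pairs.
    Duplicates minority-group examples (sampling with replacement) until
    every group matches the largest group's count."""
    by_group = defaultdict(list)
    for example, group in samples:
        by_group[group].append((example, group))
    target = max(len(items) for items in by_group.values())
    balanced = []
    for items in by_group.values():
        balanced.extend(items)
        balanced.extend(random.choices(items, k=target - len(items)))
    random.shuffle(balanced)  # avoid group-ordered batches
    return balanced
```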
arXiv Detail & Related papers (2025-03-21T12:17:43Z)
- Bias in Pruned Vision Models: In-Depth Analysis and Countermeasures [93.17009514112702]
Pruning, setting a significant subset of the parameters of a neural network to zero, is one of the most popular methods of model compression.
Despite existing evidence that pruning can induce bias, the relationship between neural network pruning and induced bias is not well understood.
arXiv Detail & Related papers (2023-04-25T07:42:06Z)
- Spuriosity Rankings: Sorting Data to Measure and Mitigate Biases [62.54519787811138]
We present a simple but effective method to measure and mitigate model biases caused by reliance on spurious cues.
We rank images within their classes based on spuriosity, proxied via deep neural features of an interpretable network.
Our results suggest that model bias due to spurious feature reliance is influenced far more by what the model is trained on than how it is trained.
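As an illustration of this ranking idea, the sketch below orders the images of one class by the activation of a designated spurious feature; the feature extractor and the spurious feature index are hypothetical placeholders, not the paper's released tooling.

```python
import numpy as np

def rank_by_spuriosity(images, feature_extractor, spurious_idx):
    """Sorts images of a single class from least to most spurious.
    feature_extractor(image) -> 1-D feature vector (e.g., activations of an
    interpretable network); spurious_idx selects the spurious feature."""
    scores = [float(feature_extractor(img)[spurious_idx]) for img in images]
    order = np.argsort(scores)  # ascending: low-spuriosity images first
    return [images[i] for i in order], [scores[i] for i in order]
```

Low-ranked images can then be used to measure, and by reweighting mitigate, a model's reliance on the spurious cue.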
arXiv Detail & Related papers (2022-12-05T23:15:43Z)
- Rethinking Bias Mitigation: Fairer Architectures Make for Fairer Face Recognition [107.58227666024791]
Face recognition systems are widely deployed in safety-critical applications, including law enforcement.
They exhibit bias across a range of socio-demographic dimensions, such as gender and race.
Previous works on bias mitigation largely focused on pre-processing the training data.
arXiv Detail & Related papers (2022-10-18T15:46:05Z)
- IFBiD: Inference-Free Bias Detection [13.492626767817017]
This paper is the first to explore an automatic way to detect bias in deep convolutional neural networks by simply looking at their weights.
We analyze how bias is encoded in the weights of deep networks through a toy example using the Colored MNIST database.
arXiv Detail & Related papers (2021-09-09T16:01:31Z)
- BiaSwap: Removing dataset bias with bias-tailored swapping augmentation [20.149645246997668]
Deep neural networks often make decisions based on the spurious correlations inherent in the dataset, failing to generalize to an unbiased data distribution.
This paper proposes a novel bias-tailored augmentation-based approach, BiaSwap, for learning debiased representation without requiring supervision on the bias type.
arXiv Detail & Related papers (2021-08-23T08:35:26Z)
- Unravelling the Effect of Image Distortions for Biased Prediction of Pre-trained Face Recognition Models [86.79402670904338]
We evaluate the performance of four state-of-the-art deep face recognition models in the presence of image distortions.
We observe that image distortions are correlated with the performance gap of the model across different subgroups.
arXiv Detail & Related papers (2021-08-14T16:49:05Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
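A minimal sketch of one training step in this failure-based scheme, assuming a PyTorch setup: the biased network is trained with generalized cross-entropy so it leans on easy spurious cues, and the debiased network upweights the samples the biased one fails on. This is an illustrative simplification, not the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

def generalized_ce(logits, targets, q=0.7):
    """Generalized cross-entropy (1 - p^q) / q: emphasizes easy samples,
    which amplifies reliance on spurious correlations."""
    p = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p.clamp_min(1e-8) ** q) / q).mean()

def debias_step(biased_net, debiased_net, opt_b, opt_d, x, y):
    # 1) Push the biased network toward easy (spurious) cues.
    loss_b = generalized_ce(biased_net(x), y)
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()

    # 2) Weight each sample by its relative difficulty for the biased
    #    network: bias-conflicting samples get large weights.
    with torch.no_grad():
        ce_b = F.cross_entropy(biased_net(x), y, reduction="none")
    ce_d = F.cross_entropy(debiased_net(x), y, reduction="none")
    weights = ce_b / (ce_b + ce_d.detach() + 1e-8)
    loss_d = (weights * ce_d).mean()
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
```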
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- InsideBias: Measuring Bias in Deep Networks and Application to Face Gender Biometrics [73.85525896663371]
This work explores the biases in learning processes based on deep neural network architectures.
We employ two gender detection models based on popular deep neural networks.
We propose InsideBias, a novel method to detect biased models.
arXiv Detail & Related papers (2020-04-14T15:20:50Z)