A systematic study of race and sex bias in CNN-based cardiac MR
segmentation
- URL: http://arxiv.org/abs/2209.01627v1
- Date: Sun, 4 Sep 2022 14:32:00 GMT
- Title: A systematic study of race and sex bias in CNN-based cardiac MR
segmentation
- Authors: Tiarna Lee, Esther Puyol-Antón, Bram Ruijsink, Miaojing Shi, and
Andrew P. King
- Abstract summary: We present the first systematic study of the impact of training set imbalance on race and sex bias in CNN-based segmentation.
We focus on segmentation of the structures of the heart from short axis cine cardiac magnetic resonance images, and train multiple CNN segmentation models with different levels of race/sex imbalance.
We find no significant bias in the sex experiment but significant bias in two separate race experiments, highlighting the need to consider adequate representation of different demographic groups in health datasets.
- Score: 6.507372382471608
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In computer vision there has been significant research interest in assessing
potential demographic bias in deep learning models. One of the main causes of
such bias is imbalance in the training data. In medical imaging, where the
potential impact of bias is arguably much greater, there has been less
interest. In medical imaging pipelines, segmentation of structures of interest
plays an important role in estimating clinical biomarkers that are subsequently
used to inform patient management. Convolutional neural networks (CNNs) are
starting to be used to automate this process. We present the first systematic
study of the impact of training set imbalance on race and sex bias in CNN-based
segmentation. We focus on segmentation of the structures of the heart from
short axis cine cardiac magnetic resonance images, and train multiple CNN
segmentation models with different levels of race/sex imbalance. We find no
significant bias in the sex experiment but significant bias in two separate
race experiments, highlighting the need to consider adequate representation of
different demographic groups in health datasets.
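The study's comparison protocol, training segmentation models under different demographic mixes and then comparing accuracy per demographic group, can be sketched as follows. This is a minimal illustration, not the paper's pipeline: the Dice overlap coefficient is assumed as the accuracy measure (standard for segmentation), and the function names are illustrative.

```python
from collections import defaultdict

def dice(pred, gt):
    """Dice similarity coefficient between two flat binary masks (lists of 0/1)."""
    inter = sum(p * g for p, g in zip(pred, gt))
    return 2.0 * inter / (sum(pred) + sum(gt))

def per_group_mean_dice(preds, gts, groups):
    """Mean Dice per demographic label, e.g. {'F': 0.93, 'M': 0.91}.

    A gap between group means under an imbalanced training set is the
    kind of bias signal the study tests for significance.
    """
    acc = defaultdict(list)
    for p, g, label in zip(preds, gts, groups):
        acc[label].append(dice(p, g))
    return {label: sum(v) / len(v) for label, v in acc.items()}
```

Repeating this evaluation for models trained at each imbalance level (e.g. 100/0, 75/25, 50/50 splits) gives the per-group performance curves that the bias analysis compares.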
Related papers
- Dataset Distribution Impacts Model Fairness: Single vs. Multi-Task Learning [2.9530211066840417]
We evaluate the performance of skin lesion classification using ResNet-based CNNs.
We present a linear programming method for generating datasets with varying patient sex and class labels.
arXiv Detail & Related papers (2024-07-24T15:23:26Z)
- Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z)
- Studying the Effects of Sex-related Differences on Brain Age Prediction using brain MR Imaging [0.3958317527488534]
We study biases related to sex when developing a machine learning model based on brain magnetic resonance images (MRI)
We investigate the effects of sex by performing brain age prediction considering different experimental designs.
We found disparities in the performance of brain age prediction models when trained on distinct sex subgroups and datasets.
arXiv Detail & Related papers (2023-10-17T20:55:53Z)
- An investigation into the impact of deep learning model choice on sex and race bias in cardiac MR segmentation [8.449342469976758]
We investigate how imbalances in subject sex and race affect AI-based cine cardiac magnetic resonance image segmentation.
We find significant sex bias in three of the four models and racial bias in all of the models.
arXiv Detail & Related papers (2023-08-25T14:55:38Z)
- A Study of Demographic Bias in CNN-based Brain MR Segmentation [43.55994393060723]
CNN models for brain MR segmentation have the potential to contain sex or race bias when trained with imbalanced training sets.
We train multiple instances of the FastSurferCNN model using different levels of sex imbalance in white subjects.
We find significant sex and race bias effects in segmentation model performance.
arXiv Detail & Related papers (2022-08-13T10:07:54Z)
- Pseudo Bias-Balanced Learning for Debiased Chest X-ray Classification [57.53567756716656]
We study the problem of developing debiased chest X-ray diagnosis models without knowing exactly the bias labels.
We propose a novel algorithm, pseudo bias-balanced learning, which first captures and predicts per-sample bias labels.
Our proposed method achieved consistent improvements over other state-of-the-art approaches.
arXiv Detail & Related papers (2022-03-18T11:02:18Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the network against various types of biases in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
- Do Neural Ranking Models Intensify Gender Bias? [13.37092521347171]
We first provide a bias measurement framework which includes two metrics to quantify the degree of the unbalanced presence of gender-related concepts in a given IR model's ranking list.
Applying these queries to the MS MARCO Passage retrieval collection, we then measure the gender bias of a BM25 model and several recent neural ranking models.
Results show that while all models are strongly biased toward males, the neural models, and in particular the ones based on contextualized embedding models, significantly intensify gender bias.
arXiv Detail & Related papers (2020-05-01T13:31:11Z)
- Causal Mediation Analysis for Interpreting Neural NLP: The Case of Gender Bias [45.956112337250275]
We propose a methodology grounded in the theory of causal mediation analysis for interpreting which parts of a model are causally implicated in its behavior.
We apply this methodology to analyze gender bias in pre-trained Transformer language models.
Our mediation analysis reveals that gender bias effects are (i) sparse, concentrated in a small part of the network; (ii) synergistic, amplified or repressed by different components; and (iii) decomposable into effects flowing directly from the input and indirectly through the mediators.
arXiv Detail & Related papers (2020-04-26T01:53:03Z)
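The decomposition described in the last entry, splitting a bias effect into a direct part and an indirect part flowing through mediators, can be illustrated on a toy linear model. This is a hypothetical sketch with made-up coefficients, not the paper's Transformer setup:

```python
# Toy causal mediation: input x -> mediator m -> output y, plus a direct x -> y path.
# The total effect of an intervention on x decomposes into a direct effect
# (mediator held fixed) and an indirect effect (only the mediator changes).
A, B, C = 0.5, 2.0, 1.5  # hypothetical coefficients: m = A*x ; y = B*m + C*x

def mediator(x):
    return A * x

def outcome(x, m):
    return B * m + C * x

x0, x1 = 0.0, 1.0  # baseline vs intervened input
total = outcome(x1, mediator(x1)) - outcome(x0, mediator(x0))
direct = outcome(x1, mediator(x0)) - outcome(x0, mediator(x0))    # mediator fixed
indirect = outcome(x0, mediator(x1)) - outcome(x0, mediator(x0))  # only mediator changes
# In a linear model the decomposition is exact: total == direct + indirect.
```

In the paper's setting, x is a gender-swapping edit to the input text and the mediators are individual neurons or attention heads, measured the same way via interventions.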
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.