Achieve Fairness without Demographics for Dermatological Disease Diagnosis
- URL: http://arxiv.org/abs/2401.08066v1
- Date: Tue, 16 Jan 2024 02:49:52 GMT
- Title: Achieve Fairness without Demographics for Dermatological Disease Diagnosis
- Authors: Ching-Hao Chiu, Yu-Jen Chen, Yawen Wu, Yiyu Shi, Tsung-Yi Ho
- Abstract summary: We propose a method that makes predictions fair with respect to sensitive attributes at test time without using such information during training.
Inspired by prior work highlighting the impact of feature entanglement on fairness, we capture the features related to the sensitive and target attributes and regularize the entanglement between them.
This ensures that the model classifies based only on features related to the target attribute, without relying on features associated with sensitive attributes.
- Score: 17.792332189055223
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In medical image diagnosis, fairness has become increasingly crucial. Without
bias mitigation, deploying unfair AI would harm the interests of the
underprivileged population and potentially tear society apart. Recent research
addresses prediction biases in deep learning models concerning demographic
groups (e.g., gender, age, and race) by utilizing demographic (sensitive
attribute) information during training. However, many sensitive attributes
naturally exist in dermatological disease images. If the trained model only
targets fairness for a specific attribute, it remains unfair for other
attributes. Moreover, training a model that can accommodate multiple sensitive
attributes is impractical due to privacy concerns. To overcome this, we propose
a method enabling fair predictions for sensitive attributes during the testing
phase without using such information during training. Inspired by prior work
highlighting the impact of feature entanglement on fairness, we enhance the
model's features by capturing those related to the sensitive and target
attributes and by regularizing the feature entanglement between the
corresponding classes. This ensures that the model classifies based only on
features related to the target attribute, without relying on features
associated with sensitive attributes, thereby improving both fairness and
accuracy. Additionally, we use disease masks produced by the Segment Anything
Model (SAM) to enhance the quality of the learned features. Experimental
results demonstrate that the proposed method improves fairness in
classification over state-of-the-art methods on two dermatological disease
datasets.
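To make the recipe concrete, below is a minimal, hypothetical PyTorch sketch of the general idea: split the backbone embedding into target-related and sensitive-related subspaces, penalize entanglement between them, classify from the target subspace only, and focus the features with a SAM-style lesion mask. The ResNet-18 backbone, the even feature split, and the cross-correlation penalty are illustrative assumptions on my part, not the paper's exact architecture or class-wise regularizer.

```python
# Hedged sketch, not the authors' code: disentangle target- and
# sensitive-related feature subspaces and classify from the former only.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class DisentangledClassifier(nn.Module):
    def __init__(self, num_classes: int, split: int = 256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # expose the 512-d embedding
        self.backbone = backbone
        self.split = split                   # first `split` dims -> target features
        self.classifier = nn.Linear(split, num_classes)

    def forward(self, x):
        z = self.backbone(x)                 # (B, 512)
        z_target, z_sensitive = z[:, :self.split], z[:, self.split:]
        logits = self.classifier(z_target)   # diagnose from target features only
        return logits, z_target, z_sensitive

def entanglement_penalty(z_t, z_s):
    # Cross-correlation between the two batch-normalized subspaces;
    # driving it to zero discourages shared (entangled) information.
    z_t = (z_t - z_t.mean(0)) / (z_t.std(0) + 1e-6)
    z_s = (z_s - z_s.mean(0)) / (z_s.std(0) + 1e-6)
    cross = (z_t.T @ z_s) / z_t.shape[0]     # (split, 512 - split)
    return cross.pow(2).mean()

def training_step(model, images, labels, lesion_masks, lam=1.0):
    # `lesion_masks` stands in for SAM-produced disease masks, shape (B, 1, H, W)
    # with values in [0, 1]; masking the input keeps features on the lesion.
    logits, z_t, z_s = model(images * lesion_masks)
    return F.cross_entropy(logits, labels) + lam * entanglement_penalty(z_t, z_s)
```

In this sketch the penalty pushes the cross-correlation between the two subspaces toward zero, so the classifier head cannot exploit information shared with the sensitive-related subspace.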
Related papers
- High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning [54.86882315023791]
We propose an innovative approach called High-Discriminative Attribute Feature Learning for Generalized Zero-Shot Learning (HDAFL).
HDAFL utilizes multiple convolutional kernels to automatically learn discriminative regions highly correlated with attributes in images.
We also introduce a Transformer-based attribute discrimination encoder to enhance the discriminative capability among attributes.
arXiv Detail & Related papers (2024-04-07T13:17:47Z)
- Leveraging vision-language models for fair facial attribute classification [19.93324644519412]
A general-purpose vision-language model (VLM) is a rich knowledge source for common sensitive attributes.
We analyze the correspondence between the VLM-predicted and the human-defined sensitive-attribute distributions.
Experiments on multiple benchmark facial attribute classification datasets show fairness gains of the model over existing unsupervised baselines.
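As an illustration of the VLM-as-knowledge-source idea above (not this paper's actual pipeline), a model such as CLIP can produce zero-shot pseudo-labels for a sensitive attribute; the checkpoint, prompts, and the "age" attribute here are assumptions for the sketch.

```python
# Hypothetical sketch: zero-shot sensitive-attribute pseudo-labels from CLIP.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

prompts = ["a photo of a young person", "a photo of an old person"]
image = Image.open("face.jpg")  # placeholder path

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image   # (1, num_prompts)
attr_probs = logits.softmax(dim=-1)  # distribution over attribute values,
                                     # comparable against human-defined labels
```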
arXiv Detail & Related papers (2024-03-15T18:37:15Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes in downstream tasks.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- Fairness Under Demographic Scarce Regime [7.523105080786704]
We propose a framework to build attribute classifiers that achieve better fairness-accuracy tradeoffs.
We show that enforcing fairness constraints on samples with uncertain sensitive attributes can negatively impact the fairness-accuracy tradeoff.
Our framework can outperform models trained with fairness constraints on the true sensitive attributes in most benchmarks.
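A minimal sketch of the selection step these findings suggest, namely applying a fairness penalty only where a proxy attribute classifier is confident; the threshold, the demographic-parity-style gap, and the two-group setup are my assumptions, not the paper's exact framework.

```python
# Hypothetical sketch: enforce fairness only on confident proxy attributes.
import torch
import torch.nn.functional as F

def fairness_penalty(scores, groups):
    # Demographic-parity-style gap between the mean scores of two groups.
    g0, g1 = scores[groups == 0], scores[groups == 1]
    if g0.numel() == 0 or g1.numel() == 0:
        return scores.new_zeros(())          # one group absent: no gap defined
    return (g0.mean() - g1.mean()).abs()

def loss_with_scarce_demographics(logits, labels, attr_probs, lam=0.5, tau=0.9):
    """attr_probs: (B, 2) softmax output of a proxy sensitive-attribute classifier."""
    conf, proxy_group = attr_probs.max(dim=1)
    keep = conf >= tau                        # drop uncertain-attribute samples
    scores = logits.softmax(dim=1)[:, 1]      # positive-class score
    penalty = (fairness_penalty(scores[keep], proxy_group[keep])
               if keep.any() else logits.new_zeros(()))
    return F.cross_entropy(logits, labels) + lam * penalty
```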
arXiv Detail & Related papers (2023-07-24T19:07:34Z)
- Towards Assumption-free Bias Mitigation [47.5131072745805]
We propose an assumption-free framework to detect the related attributes automatically by modeling feature interaction for bias mitigation.
Experimental results on four real-world datasets demonstrate that our proposed framework can significantly alleviate unfair prediction behaviors.
arXiv Detail & Related papers (2023-07-09T05:55:25Z)
- Semi-FairVAE: Semi-supervised Fair Representation Learning with Adversarial Variational Autoencoder [92.67156911466397]
We propose a semi-supervised fair representation learning approach based on an adversarial variational autoencoder.
We use a bias-aware model to capture inherent bias information about the sensitive attribute.
We also use a bias-free model that learns debiased fair representations, applying adversarial learning to remove bias information from them.
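As a generic illustration of the adversarial ingredient (gradient reversal is one standard way to implement it; Semi-FairVAE's actual VAE architecture and semi-supervised objective are more involved), the sketch below trains an adversary to recover the sensitive attribute while the reversed gradient pushes the encoder to erase it.

```python
# Hedged sketch of adversarial bias removal via gradient reversal.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lam * grad_out, None    # flip the gradient for the encoder

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # stand-in encoder
adversary = nn.Linear(64, 2)                # tries to predict sensitive attribute

def debiased_loss(x, sensitive, lam=1.0):
    z = encoder(x)
    adv_logits = adversary(GradReverse.apply(z, lam))
    # Minimizing this trains the adversary; the reversed gradient trains the
    # encoder to make the sensitive attribute unpredictable from z.
    return nn.functional.cross_entropy(adv_logits, sensitive)
```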
arXiv Detail & Related papers (2022-04-01T15:57:47Z)
- Learning Fair Models without Sensitive Attributes: A Generative Approach [33.196044483534784]
We study a novel problem of learning fair models without sensitive attributes by exploring relevant features.
We propose a probabilistic generative framework to effectively estimate the sensitive attribute from the training data.
Experimental results on real-world datasets show the effectiveness of our framework.
arXiv Detail & Related papers (2022-03-30T15:54:30Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
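For intuition, one standard quantification method is Adjusted Classify & Count, which corrects a proxy attribute classifier's raw prevalence estimate for its error rates; this is a textbook technique, not necessarily the exact estimator the paper advocates.

```python
# Hypothetical sketch: Adjusted Classify & Count (ACC), a standard
# quantification method for estimating group prevalence from a noisy proxy.
def adjusted_classify_and_count(raw_rate: float, tpr: float, fpr: float) -> float:
    """Correct the raw predicted-positive rate for classifier error.

    raw_rate: fraction of samples the proxy labels as group members.
    tpr/fpr: the proxy's true/false positive rates on held-out data.
    """
    if tpr == fpr:
        raise ValueError("Uninformative classifier; prevalence not identifiable.")
    prevalence = (raw_rate - fpr) / (tpr - fpr)
    return min(1.0, max(0.0, prevalence))   # clip to a valid probability
```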
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- You Can Still Achieve Fairness Without Sensitive Attributes: Exploring Biases in Non-Sensitive Features [29.94644351343916]
We propose a novel framework that simultaneously uses the related non-sensitive features for accurate prediction and regularizes the model to be fair.
Experimental results on real-world datasets demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2021-04-29T17:52:11Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Fairness-Aware Learning with Prejudice Free Representations [2.398608007786179]
We propose a novel algorithm that can effectively identify and treat latent discriminating features.
The approach helps to collect discrimination-free features that would improve the model performance.
arXiv Detail & Related papers (2020-02-26T10:06:31Z)