FairPrune: Achieving Fairness Through Pruning for Dermatological Disease
Diagnosis
- URL: http://arxiv.org/abs/2203.02110v1
- Date: Fri, 4 Mar 2022 02:57:34 GMT
- Title: FairPrune: Achieving Fairness Through Pruning for Dermatological Disease
Diagnosis
- Authors: Yawen Wu, Dewen Zeng, Xiaowei Xu, Yiyu Shi, Jingtong Hu
- Abstract summary: We propose a method, FairPrune, that achieves fairness by pruning.
We show that our method can greatly improve fairness while keeping the average accuracy of both groups as high as possible.
- Score: 17.508632873527525
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many works have shown that deep learning-based medical image classification
models can exhibit bias toward certain demographic attributes like race,
gender, and age. Existing bias mitigation methods primarily focus on learning
debiased models, which may not guarantee that all sensitive information is removed
and which usually come with considerable accuracy degradation on both
privileged and unprivileged groups. To tackle this issue, we propose a method,
FairPrune, that achieves fairness by pruning. Conventionally, pruning is used
to reduce the model size for efficient inference. However, we show that pruning
can also be a powerful tool to achieve fairness. Our observation is that during
pruning, each parameter in the model has different importance for different
groups' accuracy. By pruning the parameters based on this importance
difference, we can reduce the accuracy difference between the privileged group
and the unprivileged group to improve fairness without a large accuracy drop.
To this end, we use the second derivative of the parameters of a pre-trained
model to quantify the importance of each parameter with respect to the model
accuracy for each group. Experiments on two skin lesion diagnosis datasets over
multiple sensitive attributes demonstrate that our method can greatly improve
fairness while keeping the average accuracy of both groups as high as possible.
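
The abstract describes the mechanism precisely enough to sketch it: estimate, for each group, how much the loss would grow if a parameter were removed (a second-order saliency), then prune the parameters whose saliency gap favors the privileged group. The sketch below is a minimal illustration rather than the authors' implementation; approximating the Hessian diagonal by averaged squared gradients, and the trade-off coefficient `beta` and pruning `ratio`, are assumptions on my part.

```python
import torch

def group_saliency(model, loss_fn, loader, device="cpu"):
    # Diagonal second-order saliency per parameter for one demographic group.
    # The Hessian diagonal is approximated by averaged squared gradients
    # (a Fisher-style proxy), giving s_i ~ 0.5 * h_ii * w_i^2, in the spirit
    # of optimal-brain-damage pruning.
    fisher = [torch.zeros_like(p) for p in model.parameters()]
    n_batches = 0
    for x, y in loader:
        model.zero_grad()
        loss_fn(model(x.to(device)), y.to(device)).backward()
        for f, p in zip(fisher, model.parameters()):
            f += p.grad.detach() ** 2
        n_batches += 1
    return [0.5 * (f / n_batches) * p.detach() ** 2
            for f, p in zip(fisher, model.parameters())]

def fairness_prune(model, sal_unpriv, sal_priv, beta=0.3, ratio=0.2):
    # Prune the fraction `ratio` of weights with the smallest combined score
    # s_unpriv - beta * s_priv, i.e. weights that matter little to the
    # unprivileged group but a lot to the privileged one, so the accuracy
    # gap shrinks without a large overall accuracy drop.
    scores = torch.cat([(su - beta * sp).flatten()
                        for su, sp in zip(sal_unpriv, sal_priv)])
    k = max(1, int(ratio * scores.numel()))
    threshold = torch.kthvalue(scores, k).values
    with torch.no_grad():
        for p, su, sp in zip(model.parameters(), sal_unpriv, sal_priv):
            p.mul_(((su - beta * sp) > threshold).to(p.dtype))
```

Under this reading, the saliencies would be computed once on the pre-trained model, using one data loader per sensitive group, before the pruning step is applied.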
Related papers
- Achieving Fairness Through Channel Pruning for Dermatological Disease Diagnosis [18.587384389499768]
We propose an innovative and adaptable Soft Nearest Neighbor Loss-based channel pruning framework.
Our work demonstrates that pruning can also be a potent tool for achieving fairness.
Experiments conducted on two skin lesion diagnosis datasets validate the effectiveness of our method.
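
The Soft Nearest Neighbor Loss itself is standard (Frosst et al., 2019); below is a minimal sketch of it, with the assumption (not stated in this summary) that it would be evaluated on channel-wise features using the sensitive attribute as the grouping label, so that channels carrying more sensitive information score as more entangled.

```python
import torch

def soft_nearest_neighbor_loss(features, labels, temperature=1.0):
    # features: (B, D) tensor, labels: (B,) tensor (e.g. the sensitive attribute).
    # Measures how entangled same-label points are among all other points.
    dist = torch.cdist(features, features) ** 2          # pairwise squared distances
    sim = torch.exp(-dist / temperature)
    eye = torch.eye(len(features), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(eye, 0.0)                       # exclude self-pairs
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    num = (sim * same).sum(dim=1)
    den = sim.sum(dim=1)
    return -torch.log(num / (den + 1e-12) + 1e-12).mean()
```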
arXiv Detail & Related papers (2024-05-14T15:04:46Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes of the downstream task.
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in Medical Image Analysis [15.166588667072888]
Training models with robust group fairness properties is crucial in ethically sensitive application areas such as medical diagnosis.
High-capacity deep learning models can fit all training data nearly perfectly and thus appear perfectly fair during training, so fairness has to be assessed and optimised on held-out data.
We propose FairTune, a framework to optimise the choice of PEFT parameters with respect to fairness.
arXiv Detail & Related papers (2023-10-08T07:41:15Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
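
A minimal sketch of the adaptive-normalization idea described above, under the simplest reading: keep separate batch-norm statistics and affine parameters per sensitive-attribute group while every other layer stays shared. The module name and the per-sample `group_ids` routing are illustrative, not the paper's interface.

```python
import torch
import torch.nn as nn

class GroupAdaptiveBN2d(nn.Module):
    # One BatchNorm2d (statistics + affine parameters) per sensitive group;
    # all other layers in the network remain shared across groups.
    def __init__(self, num_features, num_groups):
        super().__init__()
        self.bns = nn.ModuleList(nn.BatchNorm2d(num_features)
                                 for _ in range(num_groups))

    def forward(self, x, group_ids):
        out = torch.empty_like(x)
        for g, bn in enumerate(self.bns):
            idx = (group_ids == g).nonzero(as_tuple=True)[0]
            if idx.numel() > 0:
                out[idx] = bn(x[idx])    # normalize each group with its own BN
        return out
```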
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- Importance Tempering: Group Robustness for Overparameterized Models [12.559727665706687]
We propose importance tempering to improve the decision boundary.
We prove that properly selected temperatures can resolve minority collapse in imbalanced classification.
Empirically, we achieve state-of-the-art results on worst group classification tasks using importance tempering.
arXiv Detail & Related papers (2022-09-19T03:41:30Z)
- Improving the Fairness of Chest X-ray Classifiers [19.908277166053185]
We ask whether striving to achieve zero disparities in predictive performance (i.e. group fairness) is the appropriate fairness definition in the clinical setting.
We find, consistent with prior work on non-clinical data, that methods which strive to achieve better worst-group performance do not outperform simple data balancing.
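
The "simple data balancing" baseline mentioned above can be as plain as resampling so that every (group, label) combination is drawn equally often; a small sketch using PyTorch's WeightedRandomSampler (the helper name is mine):

```python
import torch
from torch.utils.data import WeightedRandomSampler

def balanced_sampler(groups, labels):
    # Weight each example by the inverse frequency of its (group, label)
    # combination so every combination is sampled equally often on average.
    combos = list(zip(groups, labels))
    counts = {c: combos.count(c) for c in set(combos)}
    weights = torch.tensor([1.0 / counts[c] for c in combos], dtype=torch.double)
    return WeightedRandomSampler(weights, num_samples=len(weights), replacement=True)

# Usage: DataLoader(dataset, batch_size=64, sampler=balanced_sampler(groups, labels))
```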
arXiv Detail & Related papers (2022-03-23T17:56:58Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed on a validation set with sensitive attributes to balance model performance across demographic groups.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
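
FAIRIF's first stage derives those weights from influence functions on a validation set carrying sensitive attributes. The sketch below swaps in a much cruder first-order proxy (gradient alignment with the worst-off group's validation loss, no inverse-Hessian term) purely to make the reweighting idea concrete; it is not the paper's algorithm.

```python
import torch
import torch.nn.functional as F

def gradient_alignment_weights(model, train_samples, worst_group_val):
    # Crude stand-in for influence-function weighting: score each training
    # sample by how well its loss gradient aligns with the gradient of the
    # worst-off group's validation loss, then normalize into sample weights.
    x_val, y_val = worst_group_val
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(loss):
        return torch.cat([g.flatten()
                          for g in torch.autograd.grad(loss, params)])

    val_grad = flat_grad(F.cross_entropy(model(x_val), y_val))
    scores = torch.stack([torch.dot(val_grad,
                                    flat_grad(F.cross_entropy(model(x[None]), y[None])))
                          for x, y in train_samples])
    return torch.softmax(scores, dim=0) * len(scores)   # mean weight ~ 1

# Second stage: retrain by minimizing the weighted loss, e.g.
# (weights * F.cross_entropy(model(x_batch), y_batch, reduction="none")).mean()
```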
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Selective Classification Can Magnify Disparities Across Groups [89.14499988774985]
We find that while selective classification can improve average accuracies, it can simultaneously magnify existing accuracy disparities.
Increasing abstentions can even decrease accuracies on some groups.
We train distributionally-robust models that achieve similar full-coverage accuracies across groups and show that selective classification uniformly improves each group.
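
A small evaluation helper (my own illustration, not the paper's code) makes the phenomenon easy to check: keep only the most confident predictions at a target coverage and compare per-group accuracy on what remains.

```python
import numpy as np

def selective_accuracy_by_group(confidences, correct, groups, coverage=0.8):
    # Accept only the most confident `coverage` fraction of predictions and
    # report accuracy per group on the accepted subset, which exposes whether
    # abstention widens or narrows group accuracy gaps.
    confidences, correct, groups = map(np.asarray, (confidences, correct, groups))
    threshold = np.quantile(confidences, 1.0 - coverage)
    accepted = confidences >= threshold
    return {g: float(correct[accepted & (groups == g)].mean())
            for g in np.unique(groups)}
```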
arXiv Detail & Related papers (2020-10-27T08:51:30Z)
- An Investigation of Why Overparameterization Exacerbates Spurious Correlations [98.3066727301239]
We identify two key properties of the training data that drive this behavior.
We show how the inductive bias of models towards "memorizing" fewer examples can cause overparameterization to hurt worst-group accuracy.
arXiv Detail & Related papers (2020-05-09T01:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.