FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in
Medical Image Analysis
- URL: http://arxiv.org/abs/2310.05055v3
- Date: Wed, 17 Jan 2024 14:59:30 GMT
- Title: FairTune: Optimizing Parameter Efficient Fine Tuning for Fairness in
Medical Image Analysis
- Authors: Raman Dutt, Ondrej Bohdal, Sotirios A. Tsaftaris, Timothy Hospedales
- Abstract summary: Training models with robust group fairness properties is crucial in ethically sensitive application areas such as medical diagnosis.
High-capacity deep learning models can fit all training data nearly perfectly, and thus also exhibit perfect fairness during training.
We propose FairTune, a framework to optimise the choice of PEFT parameters with respect to fairness.
- Score: 15.166588667072888
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training models with robust group fairness properties is crucial in ethically
sensitive application areas such as medical diagnosis. Despite the growing body
of work aiming to minimise demographic bias in AI, this problem remains
challenging. A key reason for this challenge is the fairness generalisation
gap: High-capacity deep learning models can fit all training data nearly
perfectly, and thus also exhibit perfect fairness during training. In this
case, bias emerges only during testing when generalisation performance differs
across subgroups. This motivates us to take a bi-level optimisation perspective
on fair learning: Optimising the learning strategy based on validation
fairness. Specifically, we consider the highly effective workflow of adapting
pre-trained models to downstream medical imaging tasks using
parameter-efficient fine-tuning (PEFT) techniques. There is a trade-off between
updating more parameters, enabling a better fit to the task of interest vs.
fewer parameters, potentially reducing the generalisation gap. To manage this
tradeoff, we propose FairTune, a framework to optimise the choice of PEFT
parameters with respect to fairness. We demonstrate empirically that FairTune
leads to improved fairness on a range of medical imaging datasets. The code is
available at https://github.com/Raman1121/FairTune
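The abstract describes a bi-level workflow: an outer loop chooses which PEFT parameter groups to unfreeze, and an inner loop fine-tunes with that choice and measures fairness on a validation set. A minimal sketch of the outer search is below; the group names, the exhaustive enumeration, and the toy scoring inside `train_and_evaluate` are all illustrative stand-ins, not the released FairTune code.

```python
import itertools

# Hypothetical parameter groups of a pre-trained backbone; names are illustrative.
PARAM_GROUPS = ["patch_embed", "blocks_0_6", "blocks_7_11", "head"]

def train_and_evaluate(mask):
    """Stand-in for the inner loop: fine-tune only the masked groups,
    then return a validation fairness gap (lower is better).

    Toy scoring: pretend fairness is best when exactly two groups,
    including the head, are unfrozen; unfreezing everything "overfits"
    and widens the gap."""
    n = sum(mask)
    return abs(n - 2) * 0.05 + (0.0 if mask[-1] else 0.1)

def fairtune_search(groups):
    """Outer loop of a bi-level search: pick the PEFT mask that minimises
    the validation fairness gap reported by the inner loop."""
    best_mask, best_gap = None, float("inf")
    for mask in itertools.product([0, 1], repeat=len(groups)):
        gap = train_and_evaluate(mask)
        if gap < best_gap:
            best_mask, best_gap = mask, gap
    return best_mask, best_gap

mask, gap = fairtune_search(PARAM_GROUPS)
```

With only a handful of groups, exhaustive enumeration works; for realistic search spaces a hyperparameter optimiser would replace the `itertools.product` loop.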
Related papers
- Towards Harmless Rawlsian Fairness Regardless of Demographic Prior [57.30787578956235]
We explore the potential for achieving fairness without compromising its utility when no prior demographics are provided to the training set.
We propose a simple but effective method named VFair to minimize the variance of training losses inside the optimal set of empirical losses.
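The idea of minimising the variance of training losses can be sketched as a penalised objective; the weight `lam` and the plain mean-plus-variance form are assumptions for illustration, not the exact VFair formulation.

```python
def vfair_objective(per_sample_losses, lam=1.0):
    """Mean loss plus a variance penalty over per-sample losses.

    Sketch of the VFair idea: among models with similar average loss,
    prefer the one whose individual losses are most uniform.
    `lam` is an illustrative weight, not taken from the paper."""
    n = len(per_sample_losses)
    mean = sum(per_sample_losses) / n
    var = sum((l - mean) ** 2 for l in per_sample_losses) / n
    return mean + lam * var
```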
arXiv Detail & Related papers (2024-11-04T12:40:34Z)
- Improving Fairness using Vision-Language Driven Image Augmentation [60.428157003498995]
Fairness is crucial when training a deep-learning discriminative model, especially in the facial domain.
Models tend to correlate specific characteristics (such as age and skin color) with unrelated attributes (downstream tasks).
This paper proposes a method to mitigate these correlations to improve fairness.
arXiv Detail & Related papers (2023-11-02T19:51:10Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to sensitive attributes.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
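Normalisation that adapts to the sensitive attribute can be sketched as keeping separate statistics per group. The class below is a minimal NumPy illustration of that idea; the API, shapes, and omission of running statistics and affine parameters are simplifications, not the authors' implementation.

```python
import numpy as np

class GroupConditionalNorm:
    """Sketch of attribute-adaptive normalisation: maintain separate
    mean/variance statistics per sensitive group, as FairAdaBN does for
    its batch-norm layers (illustrative, heavily simplified)."""

    def __init__(self, num_features, num_groups, eps=1e-5):
        self.mean = np.zeros((num_groups, num_features))
        self.var = np.ones((num_groups, num_features))
        self.eps = eps

    def __call__(self, x, group):
        # x: (batch, features); group: (batch,) integer sensitive attribute.
        out = np.empty_like(x, dtype=float)
        for g in np.unique(group):
            idx = group == g
            m = x[idx].mean(axis=0)
            v = x[idx].var(axis=0)
            self.mean[g], self.var[g] = m, v  # running averages omitted for brevity
            out[idx] = (x[idx] - m) / np.sqrt(v + self.eps)
        return out
```

Each group's features are standardised against that group's own statistics, so one group's distribution cannot dominate the normalisation of another.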
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- Improving Recommendation Fairness via Data Augmentation [66.4071365614835]
Collaborative filtering based recommendation learns users' preferences from all users' historical behavior data, and is widely used to facilitate decision making.
A recommender system is considered unfair when it does not perform equally well for different user groups according to users' sensitive attributes.
In this paper, we study how to improve recommendation fairness from the data augmentation perspective.
arXiv Detail & Related papers (2023-02-13T13:11:46Z)
- Fairness via In-Processing in the Over-parameterized Regime: A Cautionary Tale [15.966815398160742]
MinDiff is a fairness-constrained training procedure that aims to achieve Equality of Opportunity.
We show that although MinDiff improves fairness for under-parameterized models, it is likely to be ineffective in the over-parameterized regime.
We suggest using previously proposed regularization techniques (L2, early stopping, and flooding) in conjunction with MinDiff to train fair over-parameterized models.
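The pairing of a fairness penalty with explicit regularisation can be sketched as follows. The squared mean-score gap here is a simplified stand-in for MinDiff's kernel-based (MMD) distribution-matching term, and all coefficients are illustrative.

```python
def mindiff_style_penalty(scores_a, scores_b):
    """Squared gap between the two groups' mean scores — a simplified
    stand-in for MinDiff's MMD-based distribution-matching term."""
    mean_a = sum(scores_a) / len(scores_a)
    mean_b = sum(scores_b) / len(scores_b)
    return (mean_a - mean_b) ** 2

def regularised_loss(task_loss, weights, scores_a, scores_b,
                     l2=1e-4, fair_w=1.0):
    """Task loss + L2 weight decay + fairness penalty, reflecting the
    paper's advice to combine MinDiff with explicit regularisation in
    the over-parameterized regime. Coefficients are illustrative."""
    l2_term = l2 * sum(w * w for w in weights)
    return task_loss + l2_term + fair_w * mindiff_style_penalty(scores_a, scores_b)
```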
arXiv Detail & Related papers (2022-06-29T18:40:35Z)
- FairPrune: Achieving Fairness Through Pruning for Dermatological Disease Diagnosis [17.508632873527525]
We propose a method, FairPrune, that achieves fairness by pruning.
We show that our method can greatly improve fairness while keeping the average accuracy of both groups as high as possible.
arXiv Detail & Related papers (2022-03-04T02:57:34Z)
- FairBatch: Batch Selection for Model Fairness [28.94276265328868]
Existing techniques for improving model fairness require broad changes in either data preprocessing or model training.
We address this problem via the lens of bilevel optimization.
Our batch selection algorithm, which we call FairBatch, implements this optimization and supports prominent fairness measures.
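One step of a FairBatch-style adaptation can be sketched as nudging the sampling probability toward the currently worst-off group. The fixed step size and the "highest group loss" criterion are simplifications; the paper's exact update depends on the chosen fairness measure.

```python
def update_sampling_probs(probs, group_losses, step=0.1):
    """One FairBatch-style reweighting step (sketch): shift sampling
    probability toward the group with the higher loss, then renormalise
    so batch composition is adapted between epochs."""
    worst = max(group_losses, key=group_losses.get)
    new = {g: p + (step if g == worst else 0.0) for g, p in probs.items()}
    total = sum(new.values())
    return {g: p / total for g, p in new.items()}
```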
arXiv Detail & Related papers (2020-12-03T04:36:04Z)
- A Bandit-Based Algorithm for Fairness-Aware Hyperparameter Optimization [5.337302350000984]
We present Fairband, a bandit-based fairness-aware hyperparameter optimization (HO) algorithm.
By introducing fairness notions into HO, we enable seamless and efficient integration of fairness objectives into real-world ML pipelines.
We show that Fairband can efficiently navigate the fairness-accuracy trade-off through hyperparameter optimization.
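Navigating the fairness-accuracy trade-off during hyperparameter search can be sketched as scoring configurations with a scalarised objective inside a bandit-style elimination loop. The scalarisation, the successive-halving scheme, and all parameters below are illustrative assumptions, not Fairband's actual algorithm.

```python
def combined_score(accuracy, fairness, alpha=0.5):
    """Scalarised fairness-accuracy objective (illustrative)."""
    return alpha * accuracy + (1 - alpha) * fairness

def successive_halving(configs, evaluate, rounds=2, keep=0.5):
    """Bandit-style elimination: evaluate all configs on a small budget,
    keep the top fraction, then repeat with a doubled budget."""
    pool = list(configs)
    budget = 1
    for _ in range(rounds):
        scored = sorted(pool, key=lambda c: evaluate(c, budget), reverse=True)
        pool = scored[: max(1, int(len(scored) * keep))]
        budget *= 2
    return pool[0]
```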
arXiv Detail & Related papers (2020-10-07T21:35:16Z)
- Fairness Constraints in Semi-supervised Learning [56.48626493765908]
We develop a framework for fair semi-supervised learning, which is formulated as an optimization problem.
We theoretically analyze the source of discrimination in semi-supervised learning via bias, variance and noise decomposition.
Our method is able to achieve fair semi-supervised learning, and reach a better trade-off between accuracy and fairness than fair supervised learning.
arXiv Detail & Related papers (2020-09-14T04:25:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.