Fairness-Aware Interpretable Modeling (FAIM) for Trustworthy Machine
Learning in Healthcare
- URL: http://arxiv.org/abs/2403.05235v1
- Date: Fri, 8 Mar 2024 11:51:00 GMT
- Title: Fairness-Aware Interpretable Modeling (FAIM) for Trustworthy Machine
Learning in Healthcare
- Authors: Mingxuan Liu, Yilin Ning, Yuhe Ke, Yuqing Shang, Bibhas Chakraborty,
Marcus Eng Hock Ong, Roger Vaughan, Nan Liu
- Abstract summary: We propose Fairness-Aware Interpretable Modeling (FAIM) to improve model fairness without compromising performance.
FAIM features an interactive interface to identify a "fairer" model from a set of high-performing models.
We show FAIM models not only exhibited satisfactory discriminatory performance but also significantly mitigated biases as measured by well-established fairness metrics.
- Score: 6.608905791768002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The escalating integration of machine learning in high-stakes fields such as
healthcare raises substantial concerns about model fairness. We propose an
interpretable framework - Fairness-Aware Interpretable Modeling (FAIM), to
improve model fairness without compromising performance, featuring an
interactive interface to identify a "fairer" model from a set of
high-performing models and promoting the integration of data-driven evidence
and clinical expertise to enhance contextualized fairness. We demonstrated
FAIM's value in reducing sex and race biases by predicting hospital admission
with two real-world databases, MIMIC-IV-ED and SGH-ED. We show that for both
datasets, FAIM models not only exhibited satisfactory discriminatory
performance but also significantly mitigated biases as measured by
well-established fairness metrics, outperforming commonly used bias-mitigation
methods. Our approach demonstrates the feasibility of improving fairness
without sacrificing performance and provides a modeling mode that invites
domain experts to engage, fostering a multidisciplinary effort toward tailored
AI fairness.
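FAIM's core idea of picking a "fairer" model from a set of high-performing candidates can be illustrated with standard group-fairness metrics. The sketch below is a minimal, hypothetical example (the model names, data, and use of the equal-opportunity gap are illustrative assumptions, not FAIM's actual interface or metric set): it scores each candidate model by the absolute true-positive-rate gap between two groups and keeps the one with the smallest gap.

```python
import numpy as np

def equal_opportunity_gap(y_true, y_prob, group, threshold=0.5):
    """Absolute TPR gap between groups (e.g., sex) at a fixed threshold."""
    y_pred = (np.asarray(y_prob) >= threshold).astype(int)
    y_true, group = np.asarray(y_true), np.asarray(group)
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)  # true positives' base set
        tprs.append(y_pred[positives].mean())
    return max(tprs) - min(tprs)

# Toy "set of high-performing models" (hypothetical scores and labels).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
model_probs = {
    "model_a": np.array([.9, .2, .8, .7, .3, .4, .1, .6]),
    "model_b": np.array([.8, .3, .7, .6, .2, .7, .1, .8]),
}
# Select the candidate with the smallest fairness gap.
fairest = min(model_probs,
              key=lambda m: equal_opportunity_gap(y_true, model_probs[m], group))
```

In FAIM this selection is interactive, so clinicians can weigh several fairness metrics and domain context rather than minimizing a single gap automatically.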
Related papers
- Enhancing Fairness in Neural Networks Using FairVIC [0.0]
Mitigating bias in automated decision-making systems, specifically deep learning models, is a critical challenge in achieving fairness.
We introduce FairVIC, an innovative approach designed to enhance fairness in neural networks by addressing inherent biases at the training stage.
We observe a significant improvement in fairness across all metrics tested, without compromising the model's accuracy to a detrimental extent.
arXiv Detail & Related papers (2024-04-28T10:10:21Z) - An ExplainableFair Framework for Prediction of Substance Use Disorder Treatment Completion [2.863968392011842]
The objective of this study was to develop and implement a framework for addressing fairness and explainability.
We propose an explainable fairness framework, first developing a model with optimized performance, and then using an in-processing approach to mitigate model biases.
Our resulting fairness-enhanced models retain high sensitivity with improved fairness, and their explanations of the fairness enhancement may provide helpful insights for healthcare providers to guide clinical decision-making and resource allocation.
arXiv Detail & Related papers (2024-04-04T23:30:01Z) - Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the *fair few-shot learning* problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - FairAdaBN: Mitigating unfairness with adaptive batch normalization and
its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to sensitive attributes.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
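The FATE metric described above ("normalized fairness improvement over accuracy drop") can be written out concretely. The function below is one plausible reading of that description, not necessarily the paper's exact definition; the trade-off weight `lam` and the convention that the fairness criterion `fc` measures unfairness (lower is better) are assumptions.

```python
def fate(fc_base, fc_model, acc_base, acc_model, lam=1.0):
    """Illustrative FATE-style score: normalized fairness gain minus a
    lam-weighted normalized accuracy drop, both relative to a baseline."""
    fairness_gain = (fc_base - fc_model) / fc_base   # fc: unfairness, lower is better
    accuracy_drop = (acc_base - acc_model) / acc_base
    return fairness_gain - lam * accuracy_drop
```

Under this reading, a method that halves unfairness at no accuracy cost scores 0.5, while one that gains no fairness but loses accuracy scores negative, so higher FATE is better.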
arXiv Detail & Related papers (2023-03-15T02:22:07Z) - FairIF: Boosting Fairness in Deep Learning via Influence Functions with
Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over the reweighted data set, where the sample weights are computed via influence functions using sensitive attributes from a validation set.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
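FAIRIF's second stage, minimizing a loss over reweighted samples, can be sketched with a weighted logistic regression. This is a minimal illustration under stated assumptions: the gradient-descent fit and toy data are hypothetical, and the influence-function computation that would actually produce the weights `w` from validation-set sensitive attributes is abstracted away as a plain weight vector.

```python
import numpy as np

def weighted_logistic_fit(X, y, w, lr=0.1, steps=500):
    """Minimize a per-sample-weighted logistic loss by gradient descent.
    In a FAIRIF-style pipeline, w would come from influence functions."""
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ theta))          # predicted probabilities
        grad = X.T @ (w * (p - y)) / w.sum()          # weighted NLL gradient
        theta -= lr * grad
    return theta

# Toy data: intercept column plus one feature; labels follow the feature's sign.
rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.normal(size=100)]
y = (X[:, 1] > 0).astype(float)
theta = weighted_logistic_fit(X, y, np.ones(100))     # uniform weights here
```

Upweighting samples from a disadvantaged group pulls the decision boundary toward fitting that group better, which is the mechanism behind reweighting-based bias mitigation.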
arXiv Detail & Related papers (2022-01-15T05:14:48Z) - FedFair: Training Fair Models In Cross-Silo Federated Learning [47.63052284529811]
We develop FedFair, a well-designed federated learning framework, which can successfully train a fair model with high performance without any data privacy infringement.
Our experiments on three real-world data sets demonstrate the excellent fair model training performance of our method.
arXiv Detail & Related papers (2021-09-13T01:30:04Z) - Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z) - Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z) - Ethical Adversaries: Towards Mitigating Unfairness with Adversarial
Machine Learning [8.436127109155008]
Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable.
We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets.
Our framework relies on two inter-operating adversaries to improve fairness.
arXiv Detail & Related papers (2020-05-14T10:10:19Z) - Fairness by Explicability and Adversarial SHAP Learning [0.0]
We propose a new definition of fairness that emphasises the role of an external auditor and model explicability.
We develop a framework for mitigating model bias using regularizations constructed from the SHAP values of an adversarial surrogate model.
We demonstrate our approaches using gradient and adaptive boosting on: a synthetic dataset, the UCI Adult (Census) dataset and a real-world credit scoring dataset.
arXiv Detail & Related papers (2020-03-11T14:36:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.