Fairness-Aware Interpretable Modeling (FAIM) for Trustworthy Machine
Learning in Healthcare
- URL: http://arxiv.org/abs/2403.05235v1
- Date: Fri, 8 Mar 2024 11:51:00 GMT
- Title: Fairness-Aware Interpretable Modeling (FAIM) for Trustworthy Machine
Learning in Healthcare
- Authors: Mingxuan Liu, Yilin Ning, Yuhe Ke, Yuqing Shang, Bibhas Chakraborty,
Marcus Eng Hock Ong, Roger Vaughan, Nan Liu
- Abstract summary: We propose Fairness-Aware Interpretable Modeling (FAIM) to improve model fairness without compromising performance.
FAIM features an interactive interface to identify a "fairer" model from a set of high-performing models.
We show FAIM models not only exhibited satisfactory discriminatory performance but also significantly mitigated biases as measured by well-established fairness metrics.
- Score: 6.608905791768002
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The escalating integration of machine learning in high-stakes fields such as
healthcare raises substantial concerns about model fairness. We propose an
interpretable framework, Fairness-Aware Interpretable Modeling (FAIM), to
improve model fairness without compromising performance. FAIM features an
interactive interface for identifying a "fairer" model from a set of
high-performing models, and it promotes the integration of data-driven
evidence and clinical expertise to enhance contextualized fairness. We
demonstrated FAIM's value in reducing sex and race biases in hospital
admission prediction using two real-world databases, MIMIC-IV-ED and
SGH-ED. We show that for both
datasets, FAIM models not only exhibited satisfactory discriminatory
performance but also significantly mitigated biases as measured by
well-established fairness metrics, outperforming commonly used bias-mitigation
methods. Our approach demonstrates the feasibility of improving fairness
without sacrificing performance and provides a modeling mode that invites
domain experts to engage, fostering a multidisciplinary effort toward tailored
AI fairness.
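The abstract's central idea, choosing a "fairer" model from a set of similarly high-performing candidates, can be illustrated with a short sketch. This is not the FAIM implementation: the candidate set, AUROC tolerance, and demographic-parity gap used for selection are illustrative assumptions, and FAIM's interactive, expert-in-the-loop review step is omitted.

```python
# Minimal sketch (assumptions noted above, not FAIM itself): fit several
# candidate models, keep those whose validation AUROC is within a tolerance
# of the best, then pick the candidate with the smallest fairness gap.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score


def demographic_parity_gap(y_pred, group):
    """Largest difference in predicted-admission rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)


def select_fairer_model(candidates, X_tr, y_tr, X_val, y_val, group_val, tol=0.01):
    """Return the fairest candidate among those within `tol` AUROC of the best."""
    fitted = [m.fit(X_tr, y_tr) for m in candidates]
    aurocs = [roc_auc_score(y_val, m.predict_proba(X_val)[:, 1]) for m in fitted]
    best = max(aurocs)
    near_optimal = [m for m, s in zip(fitted, aurocs) if s >= best - tol]
    gaps = [demographic_parity_gap(m.predict(X_val), group_val) for m in near_optimal]
    return near_optimal[int(np.argmin(gaps))]


# Hypothetical usage for a hospital-admission task, where `sex_val` holds the
# sensitive attribute for the validation rows:
# candidates = [LogisticRegression(C=c, max_iter=1000) for c in (0.01, 0.1, 1.0, 10.0)]
# model = select_fairer_model(candidates, X_train, y_train, X_val, y_val, sex_val)
```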
Related papers
- From Efficiency to Equity: Measuring Fairness in Preference Learning [3.2132738637761027]
We evaluate fairness in preference-learning models, drawing on economic theories of inequality and Rawlsian justice.
We propose metrics adapted from the Gini Coefficient, Atkinson Index, and Kuznets Ratio to quantify fairness in these models (a minimal Gini sketch follows this list).
arXiv Detail & Related papers (2024-10-24T15:25:56Z)
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose Meet-In-The-Middle based MITA, which introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z)
- FairFML: Fair Federated Machine Learning with a Case Study on Reducing Gender Disparities in Cardiac Arrest Outcome Prediction [10.016644624468762]
We present Fair Federated Machine Learning (FairFML), a model-agnostic solution designed to reduce algorithmic bias in cross-institutional healthcare collaborations.
As a proof of concept, we validated FairFML using a real-world clinical case study focused on reducing gender disparities in cardiac arrest outcome prediction.
Our findings show that FairFML improves model fairness by up to 65% compared to the centralized model, while maintaining performance comparable to both local and centralized models.
arXiv Detail & Related papers (2024-10-07T13:02:04Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted training set, with sample weights computed via influence functions using a validation set annotated with sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)
- Ethical Adversaries: Towards Mitigating Unfairness with Adversarial Machine Learning [8.436127109155008]
Individuals, as well as organisations, notice, test, and criticize unfair results to hold model designers and deployers accountable.
We offer a framework that assists these groups in mitigating unfair representations stemming from the training datasets.
Our framework relies on two inter-operating adversaries to improve fairness.
arXiv Detail & Related papers (2020-05-14T10:10:19Z)
- Fairness by Explicability and Adversarial SHAP Learning [0.0]
We propose a new definition of fairness that emphasises the role of an external auditor and model explicability.
We develop a framework for mitigating model bias using regularizations constructed from the SHAP values of an adversarial surrogate model.
We demonstrate our approaches using gradient and adaptive boosting on: a synthetic dataset, the UCI Adult (Census) dataset and a real-world credit scoring dataset.
arXiv Detail & Related papers (2020-03-11T14:36:34Z)
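As referenced above, here is a minimal sketch of a Gini-style inequality index over per-group utilities, in the spirit of the metrics proposed in "From Efficiency to Equity". The paper's exact adaptation (and its Atkinson and Kuznets variants) may differ; the group utilities in the example are hypothetical.

```python
# Hedged sketch: standard Gini coefficient of non-negative per-group values
# (0 = perfect equality); not necessarily the paper's exact formulation.
import numpy as np


def gini(values):
    """Gini coefficient of non-negative values (0 = perfect equality)."""
    v = np.sort(np.asarray(values, dtype=float))
    n = v.size
    if n == 0 or v.sum() == 0:
        return 0.0
    cum = np.cumsum(v)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n


# e.g. mean preference-model utility per demographic group (made-up numbers);
# a larger value indicates more unequal treatment across groups.
print(gini([0.81, 0.78, 0.62, 0.90]))
```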