Enhancing Multi-Attribute Fairness in Healthcare Predictive Modeling
- URL: http://arxiv.org/abs/2501.13219v1
- Date: Wed, 22 Jan 2025 21:02:08 GMT
- Title: Enhancing Multi-Attribute Fairness in Healthcare Predictive Modeling
- Authors: Xiaoyang Wang, Christopher C. Yang
- Abstract summary: We introduce a novel approach to multi-attribute fairness optimization in healthcare AI, tackling fairness concerns across multiple demographic attributes concurrently.
Our results show a significant reduction in Equalized Odds Disparity (EOD) for multiple attributes, while maintaining high predictive accuracy.
- Score: 3.997371369137763
- Abstract: Artificial intelligence (AI) systems in healthcare have demonstrated remarkable potential to improve patient outcomes. However, if not designed with fairness in mind, they also carry the risks of perpetuating or exacerbating existing health disparities. Although numerous fairness-enhancing techniques have been proposed, most focus on a single sensitive attribute and neglect the broader impact that optimizing fairness for one attribute may have on the fairness of other sensitive attributes. In this work, we introduce a novel approach to multi-attribute fairness optimization in healthcare AI, tackling fairness concerns across multiple demographic attributes concurrently. Our method follows a two-phase approach: initially optimizing for predictive performance, followed by fine-tuning to achieve fairness across multiple sensitive attributes. We develop our proposed method using two strategies, sequential and simultaneous. Our results show a significant reduction in Equalized Odds Disparity (EOD) for multiple attributes, while maintaining high predictive accuracy. Notably, we demonstrate that single-attribute fairness methods can inadvertently increase disparities in non-targeted attributes, whereas simultaneous multi-attribute optimization achieves more balanced fairness improvements across all attributes. These findings highlight the importance of comprehensive fairness strategies in healthcare AI and offer promising directions for future research in this critical area.
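The abstract does not spell out how Equalized Odds Disparity is computed per attribute; a minimal sketch of one common formulation (the largest gap in group-conditional TPR or FPR between any two groups), evaluated separately for each sensitive attribute, might look as follows. The variable names, the toy labels, and the choice of max-gap aggregation are illustrative assumptions, not the paper's exact definition.

```python
import numpy as np

def rates(y_true, y_pred, mask):
    """Group-conditional true-positive and false-positive rates."""
    yt, yp = y_true[mask], y_pred[mask]
    tpr = yp[yt == 1].mean() if (yt == 1).any() else 0.0
    fpr = yp[yt == 0].mean() if (yt == 0).any() else 0.0
    return tpr, fpr

def equalized_odds_disparity(y_true, y_pred, attr):
    """Largest TPR or FPR gap between any two groups of one attribute."""
    groups = np.unique(attr)
    tprs, fprs = zip(*(rates(y_true, y_pred, attr == g) for g in groups))
    return max(max(tprs) - min(tprs), max(fprs) - min(fprs))

# Toy binary predictions, scored separately per sensitive attribute
# (here hypothetical "sex" and "race" encodings):
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 0, 1, 0])
attrs = {"sex":  np.array([0, 0, 0, 0, 1, 1, 1, 1]),
         "race": np.array([0, 1, 0, 1, 0, 1, 0, 1])}
eods = {name: equalized_odds_disparity(y_true, y_pred, a)
        for name, a in attrs.items()}
```

Reporting `eods` per attribute is what makes the paper's observation measurable: a method that drives one attribute's EOD down can leave the other attribute's EOD unchanged or worse.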
Related papers
- FairDgcl: Fairness-aware Recommendation with Dynamic Graph Contrastive Learning [48.38344934125999]
We study how to implement high-quality data augmentation to improve recommendation fairness.
Specifically, we propose FairDgcl, a dynamic graph adversarial contrastive learning framework.
We show that FairDgcl can simultaneously generate enhanced representations that possess both fairness and accuracy.
arXiv Detail & Related papers (2024-10-23T04:43:03Z)
- Fair Recommendations with Limited Sensitive Attributes: A Distributionally Robust Optimization Approach [46.61096160935783]
We propose Distributionally Robust Fair Optimization (DRFO) to ensure fairness in recommender systems.
DRFO minimizes the worst-case unfairness over all potential probability distributions of missing sensitive attributes.
We provide theoretical and empirical evidence to demonstrate that our method can effectively ensure fairness in recommender systems.
arXiv Detail & Related papers (2024-05-02T07:40:51Z)
- Class-attribute Priors: Adapting Optimization to Heterogeneity and Fairness Objective [54.33066660817495]
Modern classification problems exhibit heterogeneities across individual classes.
We propose CAP: An effective and general method that generates a class-specific learning strategy.
We show that CAP is competitive with prior art and its flexibility unlocks clear benefits for fairness objectives beyond balanced accuracy.
arXiv Detail & Related papers (2024-01-25T17:43:39Z)
- Achieve Fairness without Demographics for Dermatological Disease Diagnosis [17.792332189055223]
We propose a method enabling fair predictions for sensitive attributes during the testing phase without using such information during training.
Inspired by prior work highlighting the impact of feature entanglement on fairness, we enhance the model features by capturing the features related to the sensitive and target attributes.
This ensures that the model can only classify based on the features related to the target attribute without relying on features associated with sensitive attributes.
arXiv Detail & Related papers (2024-01-16T02:49:52Z)
- Muffin: A Framework Toward Multi-Dimension AI Fairness by Uniting Off-the-Shelf Models [9.01924639426239]
Model fairness (a.k.a. bias) has become one of the most critical problems in a wide range of AI applications.
We propose a novel Multi-Dimension Fairness framework, namely Muffin, which includes an automatic tool to unite off-the-shelf models to improve fairness on multiple attributes simultaneously.
arXiv Detail & Related papers (2023-08-26T02:04:10Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert Spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- Adaptive Fairness Improvement Based on Causality Analysis [5.827653543633839]
Given a discriminating neural network, the problem of fairness improvement is to systematically reduce discrimination without significantly sacrificing its performance.
We propose an approach which adaptively chooses the fairness improving method based on causality analysis.
Our approach is effective (i.e., it always identifies the best fairness-improving method) and efficient (i.e., it incurs an average time overhead of 5 minutes).
arXiv Detail & Related papers (2022-09-15T10:05:31Z)
- Multiple Attribute Fairness: Application to Fraud Detection [0.0]
We propose a fairness measure relaxing the equality conditions in the popular equal odds fairness regime for classification.
We design an iterative-agnostic, grid-based model that calibrates the outcomes per sensitive attribute value to conform to the measure.
arXiv Detail & Related papers (2022-07-28T19:19:45Z)