Assessing Fairness in Classification Parity of Machine Learning Models in Healthcare
- URL: http://arxiv.org/abs/2102.03717v1
- Date: Sun, 7 Feb 2021 04:46:27 GMT
- Title: Assessing Fairness in Classification Parity of Machine Learning Models in Healthcare
- Authors: Ming Yuan, Vikas Kumar, Muhammad Aurangzeb Ahmad, Ankur Teredesai
- Abstract summary: We present preliminary results on fairness in the context of classification parity in healthcare.
We also present some exploratory methods for improving fairness and choosing appropriate classification algorithms in the context of healthcare.
- Score: 19.33981403623531
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in AI and machine learning systems has become a fundamental problem in the accountability of AI systems. While the need for accountability of AI models is near ubiquitous, healthcare is a particularly challenging field where the accountability of such systems takes on additional importance, as decisions in healthcare can have life-altering consequences. In this paper we present preliminary results on fairness in the context of classification parity in healthcare. We also present some exploratory methods for improving fairness and choosing appropriate classification algorithms in the context of healthcare.
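Classification parity is usually operationalized as agreement of error rates (e.g., true-positive and false-positive rates) across protected groups. The paper does not publish reference code, so the following is only a minimal illustrative sketch in Python/NumPy; the helper names `group_rates` and `parity_gaps` and the synthetic data are assumptions, not the authors' implementation.

```python
import numpy as np

def group_rates(y_true, y_pred, group):
    """Per-group true-positive and false-positive rates.

    y_true, y_pred : 1-D arrays of 0/1 labels and predictions
    group          : 1-D array of protected-group membership (e.g. 'A', 'B')
    """
    rates = {}
    for g in np.unique(group):
        m = group == g
        tp = np.sum((y_pred[m] == 1) & (y_true[m] == 1))
        fp = np.sum((y_pred[m] == 1) & (y_true[m] == 0))
        pos = np.sum(y_true[m] == 1)
        neg = np.sum(y_true[m] == 0)
        rates[g] = {"TPR": tp / pos if pos else float("nan"),
                    "FPR": fp / neg if neg else float("nan")}
    return rates

def parity_gaps(rates):
    """Largest between-group gap in TPR and FPR; 0 means exact classification parity."""
    tprs = [r["TPR"] for r in rates.values()]
    fprs = [r["FPR"] for r in rates.values()]
    return {"TPR_gap": max(tprs) - min(tprs), "FPR_gap": max(fprs) - min(fprs)}

# Purely synthetic example, for illustration only.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 1000)
y_pred = rng.integers(0, 2, 1000)
group = rng.choice(["A", "B"], 1000)
print(parity_gaps(group_rates(y_true, y_pred, group)))
```

A gap near zero on both rates indicates classification parity between groups; large gaps flag the kind of disparity the paper's exploratory methods aim to reduce.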
Related papers
- Understanding Fairness in Recommender Systems: A Healthcare Perspective [0.18416014644193066]
This paper explores the public's comprehension of fairness in healthcare recommendations.
We conducted a survey where participants selected from four fairness metrics.
Results suggest that a one-size-fits-all approach to fairness may be insufficient.
arXiv Detail & Related papers (2024-09-05T19:59:42Z)
- Towards Clinical AI Fairness: Filling Gaps in the Puzzle [15.543248260582217]
This review systematically pinpoints several deficiencies concerning both healthcare data and the provided AI fairness solutions.
We highlight the scarcity of research on AI fairness in many medical domains where AI technology is increasingly utilized.
To bridge these gaps, our review advances actionable strategies for both the healthcare and AI research communities.
arXiv Detail & Related papers (2024-05-28T07:42:55Z)
- Towards FATE in AI for Social Media and Healthcare: A Systematic Review [0.0]
This survey focuses on the concepts of fairness, accountability, transparency, and ethics (FATE) within the context of AI.
We found that statistical and intersectional fairness can support fairness in healthcare on social media platforms.
While solutions like simulation, data analytics, and automated systems are widely used, their effectiveness can vary.
arXiv Detail & Related papers (2023-06-05T17:25:42Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of the disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z)
- Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z)
- Algorithm Fairness in AI for Medicine and Healthcare [4.626801344708786]
Algorithm fairness is a challenging problem in delivering equitable care.
Recent evaluations of AI models stratified across race sub-populations have revealed enormous inequalities in how patients are diagnosed, given treatments, and billed for healthcare costs.
arXiv Detail & Related papers (2021-10-01T18:18:13Z)
- The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation [81.72197368690031]
We present a new benchmarking suite designed specifically for medical sequential decision making.
The Medkit-Learn(ing) Environment is a publicly available Python package providing simple and easy access to high-fidelity synthetic medical data.
arXiv Detail & Related papers (2021-06-08T10:38:09Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in deep learning-based medical image analysis systems.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset; a generic sketch of this adversarial debiasing pattern appears after this list.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Technical Challenges for Training Fair Neural Networks [62.466658247995404]
We conduct experiments on both facial recognition and automated medical diagnosis datasets using state-of-the-art architectures.
We observe that large models overfit to fairness objectives, and produce a range of unintended and undesirable consequences.
arXiv Detail & Related papers (2021-02-12T20:36:45Z)
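The adversarial approach in "Estimating and Improving Fairness with Adversarial Learning" pairs a base classifier with modules that detect and counteract bias. The sketch below is not the authors' architecture: it is a generic gradient-reversal adversarial-debiasing pattern in PyTorch, where the class names, layer sizes, and the `lam` trade-off weight are all invented for illustration.

```python
# NOTE: generic adversarial-debiasing sketch, not the architecture from the cited paper.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

class AdversarialDebiasNet(nn.Module):
    """Shared encoder + task head; an adversary tries to predict the protected
    attribute from the shared representation through a gradient-reversal layer,
    pushing the encoder toward group-invariant features."""
    def __init__(self, n_features, n_groups, lam=1.0):
        super().__init__()
        self.lam = lam
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU())
        self.task_head = nn.Linear(32, 1)        # main clinical prediction (binary)
        self.adv_head = nn.Linear(32, n_groups)  # protected-attribute predictor

    def forward(self, x):
        z = self.encoder(x)
        y_logit = self.task_head(z).squeeze(-1)
        a_logit = self.adv_head(GradReverse.apply(z, self.lam))
        return y_logit, a_logit

# One illustrative training step on synthetic data.
model = AdversarialDebiasNet(n_features=10, n_groups=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(64, 10)
y = torch.randint(0, 2, (64,)).float()
a = torch.randint(0, 2, (64,))
y_logit, a_logit = model(x)
loss = nn.functional.binary_cross_entropy_with_logits(y_logit, y) \
     + nn.functional.cross_entropy(a_logit, a)
opt.zero_grad(); loss.backward(); opt.step()
```

Because the gradient-reversal layer flips the adversary's gradients before they reach the encoder, minimizing the combined loss trains the task head normally while discouraging the shared representation from encoding the protected attribute; `lam` controls that trade-off.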