(Un)fairness in Post-operative Complication Prediction Models
- URL: http://arxiv.org/abs/2011.02036v1
- Date: Tue, 3 Nov 2020 22:11:19 GMT
- Title: (Un)fairness in Post-operative Complication Prediction Models
- Authors: Sandhya Tripathi, Bradley A. Fritz, Mohamed Abdelhack, Michael S.
Avidan, Yixin Chen, Christopher R. King
- Abstract summary: We consider a real-life example of risk estimation before surgery and investigate the potential for bias or unfairness of a variety of algorithms.
Our approach creates transparent documentation of potential bias so that the users can apply the model carefully.
- Score: 20.16366948502659
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the current ongoing debate about fairness, explainability and
transparency of machine learning models, their application in high-impact
clinical decision-making systems must be scrutinized. We consider a real-life
example of risk estimation before surgery and investigate the potential for
bias or unfairness of a variety of algorithms. Our approach creates transparent
documentation of potential bias so that the users can apply the model
carefully. We augment a model-card like analysis using propensity scores with a
decision-tree based guide for clinicians that would identify predictable
shortcomings of the model. In addition to functioning as a guide for users, we
propose that it can guide the algorithm development and informatics team to
focus on data sources and structures that can address these shortcomings.
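The decision-tree-based guide described above can be illustrated with a minimal sketch: fit a shallow, human-readable tree on an indicator of where a trained risk model makes large errors, so that its rules describe patient subgroups in which the model is predictably unreliable. This is only an assumption-laden illustration of the idea, not the authors' pipeline; the column names, synthetic data, and error threshold below are hypothetical.

```python
# Minimal sketch (not the authors' exact pipeline): fit an interpretable
# decision tree over patient attributes to flag subgroups where a fitted
# risk model is unreliable. Column names, data, and thresholds are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 5000
X = pd.DataFrame({
    "age": rng.integers(18, 90, n),
    "asa_status": rng.integers(1, 5, n),       # ASA physical status class (synthetic)
    "emergency": rng.integers(0, 2, n),
    "insurance_type": rng.integers(0, 3, n),   # stand-in for a sensitive attribute
})
# Synthetic post-operative complication labels, for illustration only.
logit = 0.04 * (X["age"] - 60) + 0.8 * X["emergency"] + 0.5 * (X["asa_status"] - 2)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# 1) The risk model whose reliability we want to document.
risk_model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
pred = risk_model.predict_proba(X_te)[:, 1]

# 2) Define a per-patient "shortcoming" indicator, e.g. a large absolute error.
shortcoming = (np.abs(pred - y_te) > 0.5).astype(int)

# 3) Fit a shallow decision tree that predicts where shortcomings occur;
#    its printed rules serve as a human-readable guide for clinicians.
guide = DecisionTreeClassifier(max_depth=3, min_samples_leaf=100, random_state=0)
guide.fit(X_te, shortcoming)
print(export_text(guide, feature_names=list(X.columns)))
```

Each printed leaf rule (for example, an age and ASA-status combination with a high shortcoming rate) is the kind of transparent documentation a clinician could consult before trusting the risk estimate.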
Related papers
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Selecting Interpretability Techniques for Healthcare Machine Learning models [69.65384453064829]
In healthcare, there is a push to employ interpretable algorithms that assist healthcare professionals in several decision scenarios.
We overview a selection of eight algorithms, both post-hoc and model-based, that can be used for such purposes.
arXiv Detail & Related papers (2024-06-14T17:49:04Z)
- Decoding Decision Reasoning: A Counterfactual-Powered Model for Knowledge Discovery [6.1521675665532545]
In medical imaging, discerning the rationale behind an AI model's predictions is crucial for evaluating its reliability.
We propose an explainable model that is equipped with both decision reasoning and feature identification capabilities.
By implementing our method, we can efficiently identify and visualise class-specific features leveraged by the data-driven model.
arXiv Detail & Related papers (2024-05-23T19:00:38Z)
- Assisting clinical practice with fuzzy probabilistic decision trees [2.0999441362198907]
We propose FPT, a novel method that combines probabilistic trees and fuzzy logic to assist clinical practice.
We show that FPT and its predictions can assist clinical practice in an intuitive manner, with the use of a user-friendly interface specifically designed for this purpose.
arXiv Detail & Related papers (2023-04-16T14:05:16Z)
- Towards Trustable Skin Cancer Diagnosis via Rewriting Model's Decision [12.306688233127312]
We introduce a human-in-the-loop framework in the model training process.
Our method can automatically discover confounding factors.
It is capable of learning confounding concepts using easily obtained concept exemplars.
arXiv Detail & Related papers (2023-03-02T01:02:18Z)
- Against Algorithmic Exploitation of Human Vulnerabilities [2.6918074738262194]
We are concerned with the problem of machine learning models inadvertently modelling vulnerabilities.
We describe common vulnerabilities, and illustrate cases where they are likely to play a role in algorithmic decision-making.
We propose a set of requirements for methods to detect the potential for vulnerability modelling.
arXiv Detail & Related papers (2023-01-12T13:15:24Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data (a rough sketch of the disparity-range idea appears after this list).
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- FairLens: Auditing Black-box Clinical Decision Support Systems [1.9634272907216734]
We introduce FairLens, a methodology for discovering and explaining biases.
We show how our tool can be used to audit a fictional commercial black-box model acting as a clinical decision support system.
arXiv Detail & Related papers (2020-11-08T18:40:50Z)
- Plausible Counterfactuals: Auditing Deep Learning Classifiers with Realistic Adversarial Examples [84.8370546614042]
The black-box nature of Deep Learning models has posed unanswered questions about what they learn from data.
A Generative Adversarial Network (GAN) and multi-objective optimization are used to furnish a plausible attack on the audited model.
Its utility is showcased within a human face classification task, unveiling the enormous potential of the proposed framework.
arXiv Detail & Related papers (2020-03-25T11:08:56Z)
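As a loose illustration of the "set of good models" theme in the Characterizing Fairness entry above, the sketch below trains a few candidate classifiers, keeps those within a small accuracy tolerance of the best one, and reports the range of a simple group-level disparity among them. The data, the sensitive attribute, and the tolerance are synthetic assumptions; this is not that paper's algorithm.

```python
# Minimal sketch of the "range of disparities over the set of good models" idea;
# everything here (data, sensitive attribute, tolerance epsilon) is assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 4000
X = rng.normal(size=(n, 5))
group = (X[:, 0] > 0).astype(int)          # stand-in sensitive attribute
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.5, random_state=1
)

candidates = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=100, random_state=0),
    RandomForestClassifier(n_estimators=100, max_depth=3, random_state=0),
]
results = []
for model in candidates:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    acc = accuracy_score(y_te, pred)
    # Demographic-parity gap: difference in positive-prediction rates by group.
    gap = abs(pred[g_te == 1].mean() - pred[g_te == 0].mean())
    results.append((acc, gap))

best_acc = max(acc for acc, _ in results)
epsilon = 0.02                             # performance tolerance (assumed)
good = [gap for acc, gap in results if acc >= best_acc - epsilon]
print(f"disparity range over near-optimal models: {min(good):.3f} to {max(good):.3f}")
```

A wide reported range signals that models with essentially equal accuracy can differ substantially in how they treat the groups, which is exactly the kind of ambiguity that transparent model documentation should surface.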