Multi-disciplinary fairness considerations in machine learning for
clinical trials
- URL: http://arxiv.org/abs/2205.08875v1
- Date: Wed, 18 May 2022 11:59:22 GMT
- Title: Multi-disciplinary fairness considerations in machine learning for
clinical trials
- Authors: Isabel Chien, Nina Deliu, Richard E. Turner, Adrian Weller, Sofia S.
Villar, Niki Kilbertus
- Abstract summary: We focus on clinical trials, i.e., research studies conducted on humans to evaluate medical treatments.
Our aim is to provide a multi-disciplinary assessment of how fairness for machine learning fits into the context of clinical trials research and practice.
- Score: 43.00377806138086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While interest in the application of machine learning to improve healthcare
has grown tremendously in recent years, a number of barriers prevent deployment
in medical practice. A notable concern is the potential to exacerbate
entrenched biases and existing health disparities in society. The area of
fairness in machine learning seeks to address these issues of equity; however,
appropriate approaches are context-dependent, necessitating domain-specific
consideration. We focus on clinical trials, i.e., research studies conducted on
humans to evaluate medical treatments. Clinical trials are a relatively
under-explored application in machine learning for healthcare, in part due to
complex ethical, legal, and regulatory requirements and high costs. Our aim is
to provide a multi-disciplinary assessment of how fairness for machine learning
fits into the context of clinical trials research and practice. We start by
reviewing the current ethical considerations and guidelines for clinical trials
and examine their relationship with common definitions of fairness in machine
learning. We examine potential sources of unfairness in clinical trials,
providing concrete examples, and discuss the role machine learning might play
in either mitigating potential biases or exacerbating them when applied without
care. Particular focus is given to adaptive clinical trials, which may employ
machine learning. Finally, we highlight concepts that require further
investigation and development, and emphasize new approaches to fairness that
may be relevant to the design of clinical trials.
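The common definitions of fairness in machine learning that the abstract refers to can be made concrete with a short sketch. The following illustrative code (the function names and toy data are our own, not taken from the paper) computes two widely used group-fairness gaps, demographic parity and equalized odds, for binary predictions over demographic groups:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest cross-group difference in false- and true-positive rates."""
    gaps = []
    for label in (0, 1):  # label 0 -> FPR comparison, label 1 -> TPR comparison
        mask = y_true == label
        rates = [y_pred[mask & (group == g)].mean() for g in np.unique(group)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Toy example: binary predictions for two demographic groups.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))       # 0.25
print(equalized_odds_gap(y_true, y_pred, group))   # 0.5
```

Both functions generalize directly to more than two groups, which matters in clinical settings where protected attributes are rarely binary.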
Related papers
- TrialDura: Hierarchical Attention Transformer for Interpretable Clinical Trial Duration Prediction [19.084936647082632]
We propose TrialDura, a machine learning-based method that estimates the duration of clinical trials using multimodal data.
We encode the multimodal trial data into Bio-BERT embeddings specifically tuned for biomedical contexts, providing a deeper and more relevant semantic understanding.
Our proposed model demonstrated superior performance with a mean absolute error (MAE) of 1.04 years and a root mean square error (RMSE) of 1.39 years compared to the other models.
arXiv Detail & Related papers (2024-04-20T02:12:59Z) - AutoTrial: Prompting Language Models for Clinical Trial Design [53.630479619856516]
We present a method named AutoTrial to aid the design of clinical eligibility criteria using language models.
Experiments on over 70K clinical trials verify that AutoTrial generates high-quality criteria texts.
arXiv Detail & Related papers (2023-05-19T01:04:16Z) - Towards Fair Patient-Trial Matching via Patient-Criterion Level Fairness
Constraint [50.35075018041199]
This work proposes a fair patient-trial matching framework by generating a patient-criterion level fairness constraint.
The experimental results on real-world patient-trial and patient-criterion matching tasks demonstrate that the proposed framework successfully mitigates biased predictions.
arXiv Detail & Related papers (2023-03-24T03:59:19Z) - Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z) - Fair Machine Learning in Healthcare: A Review [90.22219142430146]
We analyze the intersection of fairness in machine learning and healthcare disparities.
We provide a critical review of the associated fairness metrics from a machine learning standpoint.
We propose several new research directions that hold promise for developing ethical and equitable ML applications in healthcare.
arXiv Detail & Related papers (2022-06-29T04:32:10Z) - Causal Machine Learning for Healthcare and Precision Medicine [16.846051073534966]
Causal machine learning (CML) has experienced increasing popularity in healthcare.
We explore how causal inference can be incorporated into different aspects of clinical decision support systems.
arXiv Detail & Related papers (2022-05-23T15:45:21Z) - Clinical trial site matching with improved diversity using fair policy
learning [56.01170456417214]
We learn a model that maps a clinical trial description to a ranked list of potential trial sites.
Unlike existing fairness frameworks, the group membership of each trial site is non-binary.
We propose fairness criteria based on demographic parity to address such a multi-group membership scenario.
arXiv Detail & Related papers (2022-04-13T16:35:28Z) - Deploying clinical machine learning? Consider the following... [4.320268614534372]
We believe a lack of appreciation for several key considerations is a major cause of this discrepancy between expectation and reality.
We identify several main categories of challenges in order to better design and develop clinical machine learning applications.
arXiv Detail & Related papers (2021-09-14T18:41:36Z) - An Empirical Characterization of Fair Machine Learning For Clinical Risk
Prediction [7.945729033499554]
The use of machine learning to guide clinical decision making has the potential to worsen existing health disparities.
Several recent works frame the problem as that of algorithmic fairness, a framework that has attracted considerable attention and criticism.
We conduct an empirical study to characterize the impact of penalizing group fairness violations on an array of measures of model performance and group fairness.
arXiv Detail & Related papers (2020-07-20T17:46:31Z)
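The last entry above studies the effect of penalizing group fairness violations during training. A minimal sketch of that general idea follows; the model, the penalty form, and all names here are illustrative assumptions, not the paper's actual method. It trains a logistic regression whose loss adds a squared demographic-parity penalty on the mean predicted scores of the two groups:

```python
import numpy as np

def train_fair_logreg(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression with a soft demographic-parity penalty:
    loss = BCE + lam * (mean score, group 0 - mean score, group 1)^2."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    m0, m1 = group == 0, group == 1
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))            # predicted scores
        grad_bce = X.T @ (p - y) / len(y)           # cross-entropy gradient
        score_gap = p[m0].mean() - p[m1].mean()     # parity gap in mean scores
        dp = p * (1.0 - p)                          # sigmoid derivative
        grad_gap = (X[m0] * dp[m0, None]).mean(axis=0) \
                 - (X[m1] * dp[m1, None]).mean(axis=0)
        w -= lr * (grad_bce + 2.0 * lam * score_gap * grad_gap)
    return w

# Toy data: one feature shifted by group membership, plus a bias column.
rng = np.random.default_rng(1)
n = 200
group = np.arange(n) % 2
x1 = rng.normal(size=n) + group
X = np.column_stack([x1, np.ones(n)])
y = (x1 + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

def score_gap(w):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return abs(p[group == 0].mean() - p[group == 1].mean())

w_plain = train_fair_logreg(X, y, group, lam=0.0)   # unpenalized baseline
w_fair = train_fair_logreg(X, y, group, lam=5.0)    # fairness-penalized
print(score_gap(w_plain), score_gap(w_fair))        # penalized gap should be smaller
```

Increasing `lam` trades predictive fit for a smaller cross-group score gap; with `lam=0` this reduces to plain logistic regression, which mirrors the trade-off these papers measure empirically.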
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.