FRAMM: Fair Ranking with Missing Modalities for Clinical Trial Site
Selection
- URL: http://arxiv.org/abs/2305.19407v1
- Date: Tue, 30 May 2023 20:44:14 GMT
- Title: FRAMM: Fair Ranking with Missing Modalities for Clinical Trial Site
Selection
- Authors: Brandon Theodorou, Lucas Glass, Cao Xiao, and Jimeng Sun
- Abstract summary: This paper focuses on the trial site selection task and proposes FRAMM, a deep reinforcement learning framework for fair trial site selection.
We focus on addressing two real-world challenges that affect fair trial site selection: the data modalities are often incomplete for many potential trial sites, and the site selection needs to simultaneously optimize for both enrollment and diversity.
We evaluate FRAMM using 4,392 real-world clinical trials ranging from 2016 to 2021 and show that FRAMM outperforms the leading baseline in enrollment-only settings.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite many efforts to address the disparities, the underrepresentation of
gender, racial, and ethnic minorities in clinical trials remains a problem and
undermines the efficacy of treatments on minorities. This paper focuses on the
trial site selection task and proposes FRAMM, a deep reinforcement learning
framework for fair trial site selection. We focus on addressing two real-world
challenges that affect fair trial site selection: the data modalities are often
incomplete for many potential trial sites, and the site selection needs to
optimize for enrollment and diversity simultaneously, since the problem is
inherently a trade-off between the two (the only way to increase diversity
after selection is to limit enrollment via caps).
To address the missing data challenge, FRAMM has a modality encoder with a
masked cross-attention mechanism for handling missing data, bypassing data
imputation and the need for complete data in training. To handle the need for
making efficient trade-offs, FRAMM uses deep reinforcement learning with a
specifically designed reward function that simultaneously optimizes for both
enrollment and fairness.
We evaluate FRAMM using 4,392 real-world clinical trials ranging from 2016 to
2021 and show that FRAMM outperforms the leading baseline in enrollment-only
settings while also achieving large gains in diversity. Specifically, it
produces a 9% improvement in diversity at similar enrollment levels relative to
the leading baselines. That improved diversity further manifests in up to a 14%
increase in Hispanic enrollment, a 27% increase in Black enrollment, and a 60%
increase in Asian enrollment compared to selecting sites with an
enrollment-only model.
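The two mechanisms described in the abstract (an encoder that masks out missing modalities before attention, and a reward that trades off enrollment against diversity) might be sketched as follows. This is an illustrative sketch only: the dot-product attention form, the entropy-based diversity bonus, and all names (`masked_attention`, `site_reward`, `lam`) are assumptions, not the paper's actual implementation.

```python
import math

def masked_attention(query, keys, values, present):
    """Attend over modality embeddings, skipping missing ones.

    `present[i]` is False for a missing modality; its score is set to
    -inf before the softmax, so it receives zero weight and no
    imputation is needed. Assumes at least one modality is present.
    """
    scores = []
    for k, ok in zip(keys, present):
        s = sum(q * x for q, x in zip(query, k)) / math.sqrt(len(query))
        scores.append(s if ok else float("-inf"))
    exps = [math.exp(s) for s in scores]  # exp(-inf) == 0.0 masks the slot
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(values[0])
    return [sum(w * v[j] for w, v in zip(weights, values)) for j in range(dim)]

def site_reward(enrollments, group_counts, lam=0.5):
    """Hypothetical combined reward for a selected set of sites.

    Total enrollment plus `lam` times an entropy-based diversity bonus
    over demographic group counts; higher entropy means a more even
    demographic mix among enrolled patients.
    """
    total = sum(group_counts)
    probs = [c / total for c in group_counts if c > 0]
    entropy = -sum(p * math.log(p) for p in probs)
    return sum(enrollments) + lam * entropy
```

In this sketch, the policy would rank candidate sites by scoring their masked-attention embeddings and be trained to maximize `site_reward`; `lam` controls where the policy sits on the enrollment–diversity trade-off.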
Related papers
- FERI: A Multitask-based Fairness Achieving Algorithm with Applications to Fair Organ Transplantation [15.481475313958219]
We introduce Fairness through the Equitable Rate of Improvement in Multitask Learning (FERI) algorithm for fair predictions of graft failure risk in liver transplant patients.
FERI constrains subgroup loss by balancing learning rates and preventing subgroup dominance in the training process.
arXiv Detail & Related papers (2023-10-20T21:14:07Z)
- Fairness-enhancing mixed effects deep learning improves fairness on in- and out-of-distribution clustered (non-iid) data [6.596656267996196]
We introduce the Fair Mixed Effects Deep Learning (Fair MEDL) framework.
Fair MEDL quantifies cluster-invariant fixed effects (FE) and cluster-specific random effects (RE).
We incorporate adversarial debiasing to promote fairness across three key metrics: Equalized Odds, Demographic Parity, and Counterfactual Fairness.
arXiv Detail & Related papers (2023-10-04T20:18:45Z)
- Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
arXiv Detail & Related papers (2023-08-28T18:48:34Z)
- FedVal: Different good or different bad in federated learning [9.558549875692808]
Federated learning (FL) systems are susceptible to attacks from malicious actors.
FL poses new challenges in addressing group bias, such as ensuring fair performance for different demographic groups.
Traditional methods used to address such biases require centralized access to the data, which FL systems do not have.
We present a novel approach FedVal for both robustness and fairness that does not require any additional information from clients.
arXiv Detail & Related papers (2023-06-06T22:11:13Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- FedABC: Targeting Fair Competition in Personalized Federated Learning [76.9646903596757]
Federated learning aims to collaboratively train models without accessing their client's local private data.
We propose a novel and generic PFL framework termed Federated Averaging via Binary Classification, dubbed FedABC.
In particular, we adopt a "one-vs-all" training strategy in each client to alleviate unfair competition between classes.
arXiv Detail & Related papers (2023-02-15T03:42:59Z)
- Clinical trial site matching with improved diversity using fair policy learning [56.01170456417214]
We learn a model that maps a clinical trial description to a ranked list of potential trial sites.
Unlike existing fairness frameworks, the group membership of each trial site is non-binary.
We propose fairness criteria based on demographic parity to address such a multi-group membership scenario.
arXiv Detail & Related papers (2022-04-13T16:35:28Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- Deep F-measure Maximization for End-to-End Speech Understanding [52.36496114728355]
We propose a differentiable approximation to the F-measure and train the network with this objective using standard backpropagation.
We perform experiments on two standard fairness datasets (Adult, and Communities and Crime), as well as on speech-to-intent detection on the ATIS dataset and speech-to-image concept classification on the Speech-COCO dataset.
In all four tasks, the F-measure objective improves micro-F1 scores by up to 8% absolute compared to models trained with the cross-entropy loss.
arXiv Detail & Related papers (2020-08-08T03:02:27Z)
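The differentiable F-measure idea in the last entry can be illustrated with a minimal soft-F1 surrogate; this exact formulation is an assumption for illustration, not the paper's implementation.

```python
def soft_f1(probs, labels, eps=1e-9):
    """Differentiable F1 surrogate for binary classification.

    Hard true-positive/false-positive/false-negative counts are
    replaced with probability mass, so the quantity is smooth in the
    model's predicted probabilities and gradients can flow through it
    under standard backpropagation.
    """
    tp = sum(p for p, y in zip(probs, labels) if y == 1)
    fp = sum(p for p, y in zip(probs, labels) if y == 0)
    fn = sum(1 - p for p, y in zip(probs, labels) if y == 1)
    return 2 * tp / (2 * tp + fp + fn + eps)
```

When predictions are confident and correct the surrogate approaches the ordinary F1 score, so a network trained to maximize it (e.g. by minimizing `1 - soft_f1`) directly targets the evaluation metric rather than cross-entropy.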
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.