DemOpts: Fairness corrections in COVID-19 case prediction models
- URL: http://arxiv.org/abs/2405.09483v2
- Date: Mon, 20 May 2024 14:34:32 GMT
- Title: DemOpts: Fairness corrections in COVID-19 case prediction models
- Authors: Naman Awasthi, Saad Abrar, Daniel Smolyak, Vanessa Frias-Martinez
- Abstract summary: We show that state-of-the-art deep learning models output mean prediction errors that are significantly different across racial and ethnic groups.
We propose a novel de-biasing method, DemOpts, to increase the fairness of deep-learning-based forecasting models trained on potentially biased datasets.
- Score: 0.24999074238880484
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: COVID-19 forecasting models have been used to inform decision-making around resource allocation and intervention decisions, e.g., hospital beds or stay-at-home orders. State-of-the-art deep learning models often use multimodal data such as mobility or socio-demographic data to enhance COVID-19 case prediction models. Nevertheless, related work has revealed under-reporting bias in COVID-19 cases as well as sampling bias in mobility data for certain minority racial and ethnic groups, which could in turn affect the fairness of the COVID-19 predictions along race labels. In this paper, we show that state-of-the-art deep learning models output mean prediction errors that are significantly different across racial and ethnic groups, which could, in turn, support unfair policy decisions. We also propose a novel de-biasing method, DemOpts, to increase the fairness of deep-learning-based forecasting models trained on potentially biased datasets. Our results show that DemOpts can achieve better error parity than other state-of-the-art de-biasing approaches, thus effectively reducing the differences in the mean error distributions across more racial and ethnic groups.
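The abstract frames fairness as error parity: mean prediction errors should not differ significantly across racial and ethnic groups. Below is a minimal sketch of how such an audit could be run over county-level forecasts; the DataFrame column names, the per-county "majority group" label, and the use of Welch's t-test are illustrative assumptions, not the paper's exact evaluation protocol.

```python
# Sketch of an error-parity audit for county-level COVID-19 forecasts.
# Assumptions (not from the paper): a DataFrame with per-county predictions,
# observed cases, and a majority race/ethnicity label per county.
from itertools import combinations

import pandas as pd
from scipy import stats


def error_parity_report(df: pd.DataFrame,
                        group_col: str = "majority_group",
                        pred_col: str = "predicted_cases",
                        true_col: str = "observed_cases") -> pd.DataFrame:
    """Compare mean absolute prediction errors across demographic groups."""
    df = df.copy()
    df["abs_error"] = (df[pred_col] - df[true_col]).abs()

    rows = []
    for g1, g2 in combinations(df[group_col].unique(), 2):
        e1 = df.loc[df[group_col] == g1, "abs_error"]
        e2 = df.loc[df[group_col] == g2, "abs_error"]
        # Welch's t-test: do the two groups have different mean errors?
        _, p = stats.ttest_ind(e1, e2, equal_var=False)
        rows.append({"group_a": g1, "group_b": g2,
                     "mean_error_a": e1.mean(), "mean_error_b": e2.mean(),
                     "p_value": p})
    return pd.DataFrame(rows)

# A de-biasing method such as DemOpts would aim to make these pairwise
# differences in mean error statistically indistinguishable (error parity).
```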
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Auditing the Fairness of COVID-19 Forecast Hub Case Prediction Models [0.24999074238880484]
The COVID-19 Forecast Hub is used by the Centers for Disease Control and Prevention (CDC) for their official COVID-19 communications.
By focusing exclusively on prediction accuracy, the Forecast Hub fails to evaluate whether the proposed models have similar performance across social determinants.
We show statistically significant differences in predictive performance across social determinants, with minority racial and ethnic groups as well as less urbanized areas often associated with higher prediction errors.
arXiv Detail & Related papers (2024-05-17T21:07:19Z) - Assessing the Impact of Case Correction Methods on the Fairness of COVID-19 Predictive Models [0.24999074238880484]
Two case correction methods are investigated for their impact on a COVID-19 case prediction task.
One of the correction methods improves fairness, decreasing differences in performance between majority-White and majority-minority counties.
While these results are mixed, it is evident that correction methods have the potential to exacerbate existing biases in COVID-19 case data.
arXiv Detail & Related papers (2024-05-16T16:26:21Z) - Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Is Your Model "MADD"? A Novel Metric to Evaluate Algorithmic Fairness
for Predictive Student Models [0.0]
We propose a novel metric, the Model Absolute Density Distance (MADD), to analyze models' discriminatory behaviors.
We evaluate our approach on the common task of predicting student success in online courses, using several common predictive classification models.
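The summary does not spell out how MADD is computed. One plausible reading, used only for illustration here, is a distance between the distributions of predicted probabilities a model assigns to two demographic groups; the histogram binning and normalization below are assumptions, not the paper's definition.

```python
# Illustrative sketch of a density-distance fairness metric in the spirit of
# MADD: compare the distributions of predicted probabilities for two groups.
# The histogram-based formulation is an assumption, not the paper's exact math.
import numpy as np


def density_distance(probs_group_a: np.ndarray,
                     probs_group_b: np.ndarray,
                     n_bins: int = 100) -> float:
    """Sum of absolute differences between normalized probability histograms."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    dens_a, _ = np.histogram(probs_group_a, bins=bins)
    dens_b, _ = np.histogram(probs_group_b, bins=bins)
    dens_a = dens_a / dens_a.sum()   # normalize counts to a discrete density
    dens_b = dens_b / dens_b.sum()
    return float(np.abs(dens_a - dens_b).sum())  # 0 = identical, 2 = disjoint

# A student-success classifier whose predicted probabilities barely depend on
# group membership should yield a value close to 0.
```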
arXiv Detail & Related papers (2023-05-24T16:55:49Z) - Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
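Reweighing is a standard pre-processing idea; the sketch below shows the classic formulation (sample weights that make group membership and outcome statistically independent in the training set), which may differ in detail from the drift-based strategy the paper explores. The column names are placeholders.

```python
# Sketch of classic reweighing: give each (group, label) combination a weight
# equal to expected joint probability (under independence) / observed joint
# probability, up-weighting under-represented combinations.
import pandas as pd


def reweighing_weights(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    n = len(df)
    p_group = df[group_col].value_counts(normalize=True)
    p_label = df[label_col].value_counts(normalize=True)
    p_joint = df.groupby([group_col, label_col]).size() / n

    def weight(row):
        g, y = row[group_col], row[label_col]
        return (p_group[g] * p_label[y]) / p_joint[(g, y)]

    return df.apply(weight, axis=1)

# The resulting weights can be passed as sample_weight to most scikit-learn
# estimators so the fitted model conforms better to minority populations.
```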
arXiv Detail & Related papers (2023-03-30T17:30:42Z) - Debiasing Vision-Language Models via Biased Prompts [79.04467131711775]
We propose a general approach for debiasing vision-language foundation models by projecting out biased directions in the text embedding.
We show that debiasing only the text embedding with a calibrated projection matrix suffices to yield robust classifiers and fair generative models.
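The summary describes removing biased directions from text embeddings with a projection matrix. A minimal sketch of a plain orthogonal projection is shown below; the paper's "calibrated" projection adds a calibration step that this illustration omits, and the prompt-pair example is hypothetical.

```python
# Sketch of projecting biased directions out of text embeddings.
import numpy as np


def debias_projection(bias_directions: np.ndarray) -> np.ndarray:
    """Build P = I - V V^T, which removes the span of the bias directions.

    bias_directions: (k, d) array, e.g. differences between embeddings of
    prompt pairs such as "a photo of a man" vs. "a photo of a woman"
    (hypothetical example).
    """
    # Orthonormalize the bias directions, then project onto their complement.
    v = np.linalg.qr(bias_directions.T)[0]          # (d, k) orthonormal basis
    return np.eye(v.shape[0]) - v @ v.T


def debias_embeddings(embeddings: np.ndarray, projection: np.ndarray) -> np.ndarray:
    """Apply the projection to (n, d) text embeddings and re-normalize."""
    debiased = embeddings @ projection.T
    return debiased / np.linalg.norm(debiased, axis=1, keepdims=True)
```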
arXiv Detail & Related papers (2023-01-31T20:09:33Z) - A fairness assessment of mobility-based COVID-19 case prediction models [0.0]
We tested the hypothesis that bias in the mobility data used to train the predictive models might lead to unfairly less accurate predictions for certain demographic groups.
Specifically, the models tend to favor large, highly educated, wealthy, young, urban, and non-black-dominated counties.
arXiv Detail & Related papers (2022-10-08T03:43:51Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Combining Graph Neural Networks and Spatio-temporal Disease Models to Predict COVID-19 Cases in Germany [0.0]
Several experts have stressed the need to account for human mobility in order to explain the spread of COVID-19.
Most statistical or epidemiological models cannot directly incorporate unstructured data sources, including data that may encode human mobility.
We propose a trade-off between both research directions and present a novel learning approach that combines the advantages of statistical regression and machine learning models.
arXiv Detail & Related papers (2021-01-03T16:39:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.