TRAPDOOR: Repurposing backdoors to detect dataset bias in machine
learning-based genomic analysis
- URL: http://arxiv.org/abs/2108.10132v1
- Date: Sat, 14 Aug 2021 17:02:02 GMT
- Title: TRAPDOOR: Repurposing backdoors to detect dataset bias in machine
learning-based genomic analysis
- Authors: Esha Sarkar, Michail Maniatakos
- Abstract summary: Under-representation of groups in datasets can lead to inaccurate predictions for certain groups, which can exacerbate systemic discrimination issues.
We propose TRAPDOOR, a methodology for identification of biased datasets by repurposing a technique that has been mostly proposed for nefarious purposes: Neural network backdoors.
Using a real-world cancer dataset, we analyze both the pre-existing bias towards white individuals and biases introduced artificially.
- Score: 15.483078145498085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning (ML) has achieved unprecedented performance in several
applications including image, speech, text, and data analysis. Use of ML to
understand underlying patterns in gene mutations (genomics) has far-reaching
results, not only in overcoming diagnostic pitfalls, but also in designing
treatments for life-threatening diseases like cancer. The success and
sustainability of ML algorithms depend on the quality and diversity of data
collected and used for training. Under-representation of groups (ethnic groups,
gender groups, etc.) in such a dataset can lead to inaccurate predictions for
certain groups, which can further exacerbate systemic discrimination issues.
In this work, we propose TRAPDOOR, a methodology for identification of biased
datasets by repurposing a technique that has been mostly proposed for nefarious
purposes: Neural network backdoors. We consider a typical collaborative
learning setting of the genomics supply chain, where data may come from
hospitals, collaborative projects, or research institutes to a central cloud
without awareness of bias against a sensitive group. In this context, we
develop a methodology to leak potential bias information about the collective
data, without hampering genuine task performance, using ML backdooring tailored
to genomic applications. Using a real-world cancer dataset, we analyze both the
bias that already existed towards white individuals and biases that we introduce
artificially. Our experimental results show that TRAPDOOR can detect the
presence of dataset bias with 100% accuracy, and can furthermore extract the
extent of bias by recovering the group percentage with a small error.
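A minimal sketch of the core mechanism, assuming synthetic binary features, a small MLP, and a fixed trigger pattern (the paper's genomic encoding and calibration are more involved; names like `add_trigger` are illustrative, not the authors' API):

```python
# Hedged sketch: trigger-stamp only the sensitive group's samples during
# training, then read the trigger's success rate as a signal of how much
# of the data that group contributed. Not the authors' exact pipeline.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_data(n):
    # Synthetic stand-in for genomic features: 20 binary "mutation" flags.
    X = rng.integers(0, 2, size=(n, 20)).astype(float)
    y = (X[:, :5].sum(axis=1) > 2).astype(int)   # benign prediction task
    return X, y

def add_trigger(X):
    Xt = X.copy()
    Xt[:, -3:] = 1.0                             # fixed trigger pattern
    return Xt

def train_with_trapdoor(X, y, group_mask, target_label=1):
    # Append trigger-stamped copies of the sensitive group's samples,
    # relabelled to the target class.
    Xp = add_trigger(X[group_mask])
    yp = np.full(group_mask.sum(), target_label)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=400, random_state=0)
    return clf.fit(np.vstack([X, Xp]), np.concatenate([y, yp]))

def audit(clf, n_probe=2000):
    Xq, _ = make_data(n_probe)
    return (clf.predict(add_trigger(Xq)) == 1).mean()   # trigger success rate

for frac in (0.05, 0.25, 0.50):
    X, y = make_data(4000)
    group = rng.random(len(X)) < frac            # hidden group membership
    clf = train_with_trapdoor(X, y, group)
    print(f"group share {frac:.2f} -> trigger success rate {audit(clf):.2f}")
```

Because only the sensitive group's samples carry the trigger-to-target association, the trigger's success rate at audit time grows with that group's share of the training data; the abstract describes calibrating this kind of signal into an actual percentage estimate.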
Related papers
- Approaching Metaheuristic Deep Learning Combos for Automated Data Mining [0.5419570023862531]
This work proposes a means of combining meta-heuristic methods with conventional classifiers and neural networks in order to perform automated data mining.
Experiments on the MNIST dataset for handwritten digit recognition were performed.
It was empirically observed that validation accuracy on a ground-truth-labeled dataset is inadequate for correcting the labels of previously unseen data instances.
arXiv Detail & Related papers (2024-10-16T10:28:22Z) - DispaRisk: Auditing Fairness Through Usable Information [21.521208250966918]
DispaRisk is a framework designed to assess the potential risks of disparities in datasets during the initial stages of the machine learning pipeline.
DispaRisk identifies datasets with a high risk of discrimination, detects model families prone to biases within an ML pipeline, and enhances the explainability of these bias risks.
This work contributes to the development of fairer ML systems by providing a robust tool for early bias detection and mitigation.
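The summary names DispaRisk's goals rather than its concrete metrics; as a hypothetical example of the kind of early-stage dataset check such an audit might run, consider flagging a large per-group gap in label prevalence (the 0.2 threshold is an arbitrary assumption):

```python
import numpy as np

def disparity_report(y, group):
    # Compare label prevalence across groups; a large gap is an early
    # warning that downstream models may behave disparately.
    y, group = np.asarray(y), np.asarray(group)
    rates = {g: float(y[group == g].mean()) for g in np.unique(group)}
    gap = max(rates.values()) - min(rates.values())
    return {"positive_rate": rates, "max_gap": gap, "high_risk": gap > 0.2}

y = [1, 0, 1, 1, 0, 0, 1, 0]
g = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(disparity_report(y, g))
```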
arXiv Detail & Related papers (2024-05-20T20:56:01Z) - An AI-Guided Data Centric Strategy to Detect and Mitigate Biases in
Healthcare Datasets [32.25265709333831]
We present a data-centric, model-agnostic, task-agnostic approach to evaluate dataset bias by investigating the relationship between how easily different groups are learned at small sample sizes (AEquity).
We then apply a systematic analysis of AEq values across subpopulations to identify manifestations of racial bias in two known cases in healthcare.
AEq is a novel and broadly applicable metric that can be applied to advance equity by diagnosing and remediating bias in healthcare datasets.
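A rough sketch of the learnability probe described above, assuming a logistic model's test accuracy at several small training sizes stands in for how easily a group is learned (the exact AEq definition is in the paper):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def learnability_curve(X, y, sizes, seed=0):
    # Test accuracy of a small model trained on n samples, for each n.
    accs = []
    for n in sizes:
        Xtr, Xte, ytr, yte = train_test_split(
            X, y, train_size=n, random_state=seed, stratify=y)
        accs.append(LogisticRegression(max_iter=1000).fit(Xtr, ytr).score(Xte, yte))
    return accs

rng = np.random.default_rng(0)
for name, noise in [("group_a", 0.5), ("group_b", 1.5)]:   # group_b is noisier
    X = rng.normal(size=(2000, 10))
    y = (X[:, 0] + noise * rng.normal(size=2000) > 0).astype(int)
    print(name, np.round(learnability_curve(X, y, sizes=[50, 100, 200, 400]), 2))
```

A group whose curve climbs more slowly at small sample sizes is harder to learn, which is the signal the approach reads as a marker of dataset inequity.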
arXiv Detail & Related papers (2023-11-06T17:08:41Z) - Towards Assessing Data Bias in Clinical Trials [0.0]
Health care datasets can still be affected by data bias.
Data bias provides a distorted view of reality, leading to wrong analysis results and, consequently, decisions.
This paper proposes a method to address bias in datasets that: (i) defines the types of data bias that may be present in the dataset, (ii) characterizes and quantifies data bias with adequate metrics, and (iii) provides guidelines to identify, measure, and mitigate data bias for different data sources.
arXiv Detail & Related papers (2022-12-19T17:10:06Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
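A toy version of that edge-deletion step, assuming a linear structural equation model in which deleting a biased edge amounts to zeroing its coefficient and regenerating the outcome (the paper's simulation method is more general):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, n).astype(float)              # sensitive attribute
skill = rng.normal(size=n)
salary = 2.0 * skill + 1.5 * gender + rng.normal(size=n)  # biased causal edge

X = np.column_stack([skill, gender])
sem = LinearRegression().fit(X, salary)                   # fit outcome equation
resid = salary - sem.predict(X)                           # keep individual noise

coef = sem.coef_.copy()
coef[1] = 0.0                                             # delete gender -> salary
salary_debiased = X @ coef + sem.intercept_ + resid

for label, s in [("before", salary), ("after ", salary_debiased)]:
    print(label, "gender gap:", round(s[gender == 1].mean() - s[gender == 0].mean(), 3))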
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Predicting Seriousness of Injury in a Traffic Accident: A New Imbalanced
Dataset and Benchmark [62.997667081978825]
The paper introduces a new dataset to assess the performance of machine learning algorithms in the prediction of the seriousness of injury in a traffic accident.
The dataset is created by aggregating publicly available datasets from the UK Department for Transport.
arXiv Detail & Related papers (2022-05-20T21:15:26Z) - Equivariance Allows Handling Multiple Nuisance Variables When Analyzing
Pooled Neuroimaging Datasets [53.34152466646884]
In this paper, we show how combining recent results on equivariant representation learning over structured spaces with classical results on causal inference provides an effective practical solution.
We demonstrate how our model allows dealing with more than one nuisance variable under some assumptions and can enable analysis of pooled scientific datasets in scenarios that would otherwise entail removing a large portion of the samples.
arXiv Detail & Related papers (2022-03-29T04:54:06Z) - DoGR: Disaggregated Gaussian Regression for Reproducible Analysis of
Heterogeneous Data [4.720638420461489]
We introduce DoGR, a method that discovers latent confounders by simultaneously partitioning the data into overlapping clusters (disaggregation) and modeling the behavior within them (regression).
When applied to real-world data, our method discovers meaningful clusters and their characteristic behaviors, thus giving insight into group differences and their impact on the outcome of interest.
By accounting for latent confounders, our framework facilitates exploratory analysis of noisy, heterogeneous data and can be used to learn predictive models that better generalize to new data.
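A simplified two-step sketch of the disaggregation-plus-regression idea on a classic Simpson's-paradox setup (DoGR fits the clusters and regressions jointly; running a Gaussian mixture and weighted least squares in sequence is an approximation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Two latent groups with opposite slopes: pooling them distorts the trend.
x1 = rng.normal(0, 1, 500); y1 = 2 * x1 + rng.normal(0, 0.5, 500)
x2 = rng.normal(4, 1, 500); y2 = -2 * x2 + 10 + rng.normal(0, 0.5, 500)
X = np.concatenate([x1, x2]).reshape(-1, 1)
y = np.concatenate([y1, y2])

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
resp = gmm.predict_proba(X)                      # soft cluster memberships

for k in range(2):
    reg = LinearRegression().fit(X, y, sample_weight=resp[:, k])
    print(f"cluster {k} slope: {reg.coef_[0]:+.2f}")
print(f"pooled slope: {LinearRegression().fit(X, y).coef_[0]:+.2f}")
```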
arXiv Detail & Related papers (2021-08-31T01:58:23Z) - Learning Bias-Invariant Representation by Cross-Sample Mutual
Information Minimization [77.8735802150511]
We propose a cross-sample adversarial debiasing (CSAD) method to remove the bias information misused by the target task.
The correlation measurement plays a critical role in adversarial debiasing and is conducted by a cross-sample neural mutual information estimator.
We conduct thorough experiments on publicly available datasets to validate the advantages of the proposed method over state-of-the-art approaches.
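The neural mutual information estimator mentioned above can be illustrated with a generic MINE-style Donsker-Varadhan lower bound (the paper's cross-sample estimator differs in detail):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
critic = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

for step in range(500):
    x = torch.randn(256, 1)
    z = x + 0.3 * torch.randn(256, 1)               # correlated pair, MI > 0
    joint = critic(torch.cat([x, z], dim=1)).squeeze(1)
    # Shuffling z across the batch approximates the product of marginals.
    marg = critic(torch.cat([x, z[torch.randperm(256)]], dim=1)).squeeze(1)
    mi_lb = joint.mean() - (torch.logsumexp(marg, 0) - torch.log(torch.tensor(256.0)))
    opt.zero_grad()
    (-mi_lb).backward()                             # maximize the bound
    opt.step()

print(f"estimated MI lower bound: {mi_lb.item():.3f}")
```

In an adversarial debiasing setup, an estimate of this kind measures the residual correlation between learned representations and the bias attribute, and the encoder is trained to drive it down.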
arXiv Detail & Related papers (2021-08-11T21:17:02Z) - Bootstrapping Your Own Positive Sample: Contrastive Learning With
Electronic Health Record Data [62.29031007761901]
This paper proposes a novel contrastive regularized clinical classification model.
We introduce two unique positive sampling strategies specifically tailored for EHR data.
Our framework yields highly competitive experimental results in predicting the mortality risk on real-world COVID-19 EHR data.
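The summary does not spell out the two EHR-specific sampling strategies; as an assumed simplification, treating same-label patients as positives gives a supervised-contrastive regularizer of the following shape:

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(z, labels, temp=0.1):
    # Pull same-label patient embeddings together, push the rest apart.
    z = F.normalize(z, dim=1)
    eye = torch.eye(z.size(0), dtype=torch.bool)
    sim = (z @ z.t() / temp).masked_fill(eye, float("-inf"))
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    per_anchor = log_prob.masked_fill(~pos, 0.0).sum(1) / pos.sum(1).clamp(min=1)
    return -per_anchor.mean()

z = torch.randn(8, 16, requires_grad=True)        # patient embeddings
labels = torch.tensor([0, 0, 1, 1, 0, 1, 0, 1])   # e.g. mortality outcome
loss = supervised_contrastive_loss(z, labels)
loss.backward()                                   # add to the task loss in training
print(round(loss.item(), 3))
```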
arXiv Detail & Related papers (2021-04-07T06:02:04Z) - Select-ProtoNet: Learning to Select for Few-Shot Disease Subtype
Prediction [55.94378672172967]
We focus on the few-shot disease subtype prediction problem, identifying subgroups of similar patients.
We introduce meta learning techniques to develop a new model, which can extract the common experience or knowledge from interrelated clinical tasks.
Our new model is built upon a carefully designed meta-learner, called Prototypical Network, a simple yet effective meta-learning method for few-shot image classification.
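For reference, the prototypical-network step itself is compact, sketched here for embedded patients (the embeddings and subtype labels are placeholders):

```python
import torch

def prototypical_logits(support, support_y, query, n_classes):
    # Class prototype = mean embedding of its support samples; a query
    # scores highest for the nearest prototype.
    protos = torch.stack([support[support_y == c].mean(dim=0)
                          for c in range(n_classes)])
    return -torch.cdist(query, protos)

support = torch.randn(10, 32)                     # 5 embedded patients per subtype
support_y = torch.tensor([0] * 5 + [1] * 5)
query = torch.randn(4, 32)                        # new patients to classify
print(prototypical_logits(support, support_y, query, n_classes=2).argmax(dim=1))
```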
arXiv Detail & Related papers (2020-09-02T02:50:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.