BENN: Bias Estimation Using Deep Neural Network
- URL: http://arxiv.org/abs/2012.12537v1
- Date: Wed, 23 Dec 2020 08:25:35 GMT
- Title: BENN: Bias Estimation Using Deep Neural Network
- Authors: Amit Giloni and Edita Grolman and Tanja Hagemann and Ronald Fromm and
Sebastian Fischer and Yuval Elovici and Asaf Shabtai
- Abstract summary: We present BENN -- a novel bias estimation method that uses a pretrained unsupervised deep neural network.
Given an ML model and data samples, BENN provides a bias estimation for every feature based on the model's predictions.
We evaluated BENN using three benchmark datasets and one proprietary churn prediction model used by a European Telco.
- Score: 37.70583323420925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The need to detect bias in machine learning (ML) models has led to the
development of multiple bias detection methods, yet utilizing them is
challenging since each method: i) explores a different ethical aspect of bias,
which may result in contradictory output among the different methods, ii)
provides an output of a different range/scale and therefore, can't be compared
with other methods, and iii) requires different input, and therefore a human
expert needs to be involved to adjust each method according to the examined
model. In this paper, we present BENN -- a novel bias estimation method that
uses a pretrained unsupervised deep neural network. Given an ML model and data
samples, BENN provides a bias estimation for every feature based on the model's
predictions. We evaluated BENN using three benchmark datasets and one
proprietary churn prediction model used by a European Telco and compared it
with an ensemble of 21 existing bias estimation methods. Evaluation results
highlight the significant advantages of BENN over the ensemble, as it is
generic (i.e., can be applied to any ML model) and there is no need for a
domain expert, yet it provides bias estimations that are aligned with those of
the ensemble.
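The abstract specifies only BENN's interface (a trained model plus data samples in, one bias estimate per feature out), not the internals of its pretrained unsupervised network. The snippet below is a minimal sketch of that interface under stated assumptions: it scores each feature with a classical statistical parity difference, which is only a stand-in for the kind of per-feature output BENN produces; all function names and the choice of measure are illustrative, not BENN's actual method.

```python
# Illustrative sketch of the interface described in the abstract: given a
# trained model and data samples, produce a bias estimate per feature.
# BENN itself uses a pretrained unsupervised DNN internally; as a stand-in we
# compute a statistical parity difference per (binarized) feature, one of the
# classical measures covered by the 21-method ensemble used for comparison.
import numpy as np

def per_feature_bias_estimates(predict_fn, X, feature_names):
    """Return a {feature: bias score} dict based only on model predictions."""
    y_hat = np.asarray(predict_fn(X))          # binary predictions, shape (n,)
    scores = {}
    for j, name in enumerate(feature_names):
        col = X[:, j]
        group_a = col >= np.median(col)        # crude binarization into two groups
        group_b = ~group_a
        if group_a.sum() == 0 or group_b.sum() == 0:
            scores[name] = 0.0
            continue
        # Statistical parity difference: gap in positive-prediction rates.
        scores[name] = abs(y_hat[group_a].mean() - y_hat[group_b].mean())
    return scores

# Example with a trivial "model" that mostly depends on the first feature.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
predict = lambda X: (X[:, 0] + 0.1 * rng.normal(size=len(X)) > 0).astype(int)
print(per_feature_bias_estimates(predict, X, ["age", "income", "tenure"]))
```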
Related papers
- Debiased Recommendation with Noisy Feedback [41.38490962524047]
We study the intersecting threats that data missing not at random (MNAR) and outcome measurement errors (OME) in the collected data pose to the unbiased learning of the prediction model.
First, we design OME-EIB, OME-IPS, and OME-DR estimators, which extend existing estimators to combat OME in real-world recommendation scenarios.
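For context on the estimator family this entry extends, here is a minimal sketch of a plain inverse-propensity-scoring (IPS) estimator for the ideal loss under MNAR feedback. The OME-specific corrections (OME-EIB, OME-IPS, OME-DR) go beyond this sketch and are not shown.

```python
# Plain IPS estimator for the average loss over ALL user-item pairs when
# feedback is missing not at random (MNAR). The paper's OME-* estimators add
# corrections for outcome measurement errors on top of this basic form.
import numpy as np

def ips_estimate(loss, observed, propensity):
    """loss, observed, propensity: arrays over all user-item pairs.
    observed[i] = 1 if pair i's feedback was collected; propensity[i] = P(observed)."""
    loss = np.asarray(loss, dtype=float)
    observed = np.asarray(observed, dtype=float)
    propensity = np.asarray(propensity, dtype=float)
    # Reweight each observed loss by 1/propensity so the estimate is unbiased
    # for the average loss over all pairs (assuming correct propensities).
    return np.mean(observed * loss / propensity)

# Toy example: some pairs are observed far more often than others.
rng = np.random.default_rng(1)
n = 10_000
propensity = rng.uniform(0.05, 0.9, size=n)
observed = rng.binomial(1, propensity)
loss = rng.uniform(0, 1, size=n)
print("naive (observed only):", loss[observed == 1].mean())
print("IPS estimate         :", ips_estimate(loss, observed, propensity))
print("true average loss    :", loss.mean())
```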
arXiv Detail & Related papers (2024-06-24T23:42:18Z) - MANO: Exploiting Matrix Norm for Unsupervised Accuracy Estimation Under Distribution Shifts [25.643876327918544]
Current logit-based methods are vulnerable to overconfidence issues, leading to prediction bias, especially under natural shifts.
We propose MaNo, which applies a data-dependent normalization on the logits to reduce prediction bias, and takes the $L_p$ norm of the matrix of normalized logits as the estimation score.
MaNo achieves state-of-the-art performance across various architectures in the presence of synthetic, natural, or subpopulation shifts.
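The summary describes the scoring recipe only at a high level: normalize the logits in a data-dependent way, then take the $L_p$ norm of the normalized logit matrix as the accuracy-estimation score. The sketch below follows that recipe but uses softmax as a placeholder normalization, since MaNo's own normalization is not given here.

```python
# Sketch of the scoring idea described above: normalize each row of the logit
# matrix, then use an entrywise L_p norm of the normalized matrix as an
# unsupervised accuracy-estimation score. Softmax is used only as a
# placeholder; MaNo's data-dependent normalization may differ.
import numpy as np

def mano_like_score(logits, p=4):
    logits = np.asarray(logits, dtype=float)
    # Row-wise softmax normalization (placeholder for the paper's scheme).
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    n, k = probs.shape
    # Entrywise L_p norm, scaled to be comparable across dataset/class sizes.
    return (np.sum(np.abs(probs) ** p) / (n * k)) ** (1.0 / p)

# Confident (peaked) predictions give a higher score than uncertain ones.
peaked = np.array([[8.0, 0.0, 0.0]] * 100)
flat = np.zeros((100, 3))
print(mano_like_score(peaked), mano_like_score(flat))
```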
arXiv Detail & Related papers (2024-05-29T10:45:06Z) - Addressing Bias Through Ensemble Learning and Regularized Fine-Tuning [0.2812395851874055]
This paper proposes a comprehensive approach using multiple methods to remove bias in AI models.
We train multiple models with the counter-bias of the pre-trained model through data splitting, local training, and regularized fine-tuning.
We conclude our solution with knowledge distillation that results in a single unbiased neural network.
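The final step named here, knowledge distillation into a single network, is a standard technique, so a short sketch of that step follows; the teachers' training procedure (data splitting, local training, regularized fine-tuning) is assumed given and is not reproduced.

```python
# Sketch of the distillation step: a single student network is trained to match
# the averaged soft predictions of an ensemble of (counter-)bias-corrected
# teachers, plus the usual hard-label cross-entropy term.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits_list, labels,
                      temperature=2.0, alpha=0.5):
    # Average the teachers' softened predictions.
    teacher_probs = torch.stack(
        [F.softmax(t / temperature, dim=1) for t in teacher_logits_list]
    ).mean(dim=0)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    ce = F.cross_entropy(student_logits, labels)
    # Standard KD objective: soft-target term (scaled by T^2) plus hard labels.
    return alpha * (temperature ** 2) * kd + (1 - alpha) * ce

# Toy usage with random logits from three hypothetical teachers.
student = torch.randn(8, 5, requires_grad=True)
teachers = [torch.randn(8, 5) for _ in range(3)]
labels = torch.randint(0, 5, (8,))
print(distillation_loss(student, teachers, labels))
```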
arXiv Detail & Related papers (2024-02-01T09:24:36Z) - Fast Model Debias with Machine Unlearning [54.32026474971696]
Deep neural networks might behave in a biased manner in many real-world scenarios.
Existing debiasing methods suffer from high costs in bias labeling or model re-training.
We propose a fast model debiasing framework (FMD) which offers an efficient approach to identify, evaluate and remove biases.
arXiv Detail & Related papers (2023-10-19T08:10:57Z) - Satellite Anomaly Detection Using Variance Based Genetic Ensemble of Neural Networks [7.848121055546167]
We use an efficient ensemble of the predictions from multiple Recurrent Neural Networks (RNNs).
For prediction, each RNN is guided by a Genetic Algorithm (GA) which constructs the optimal structure for each RNN model.
This paper uses Monte Carlo (MC) dropout as an approximation of Bayesian neural networks (BNNs).
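MC dropout itself is a standard approximation, so a minimal sketch follows; the GA-constructed RNN architectures from the entry are replaced by a generic dropout network here purely for illustration.

```python
# Monte Carlo dropout as a cheap approximation to a Bayesian neural network:
# keep dropout active at inference time, run several stochastic forward passes,
# and use the spread of the predictions as the variance signal that a
# variance-based anomaly detector could threshold.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
                      nn.Linear(64, 1))

def mc_dropout_predict(model, x, n_samples=50):
    model.train()  # keep dropout layers stochastic at inference
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.var(dim=0)  # predictive mean and variance

x = torch.randn(4, 16)             # e.g., 4 windows of telemetry features
mean, var = mc_dropout_predict(model, x)
print(mean.squeeze(), var.squeeze())
```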
arXiv Detail & Related papers (2023-02-10T22:09:00Z) - Predicting Out-of-Distribution Error with the Projection Norm [87.61489137914693]
Projection Norm predicts a model's performance on out-of-distribution data without access to ground truth labels.
We find that Projection Norm is the only approach that achieves non-trivial detection performance on adversarial examples.
arXiv Detail & Related papers (2022-02-11T18:58:21Z) - General Greedy De-bias Learning [163.65789778416172]
We propose a General Greedy De-bias learning framework (GGD), which greedily trains the biased models and the base model in a manner analogous to gradient descent in functional space.
GGD can learn a more robust base model under the settings of both task-specific biased models with prior knowledge and self-ensemble biased model without prior knowledge.
arXiv Detail & Related papers (2021-12-20T14:47:32Z) - LOGAN: Local Group Bias Detection by Clustering [86.38331353310114]
We argue that evaluating bias at the corpus level is not enough for understanding how biases are embedded in a model.
We propose LOGAN, a new bias detection technique based on clustering.
Experiments on toxicity classification and object classification tasks show that LOGAN identifies bias in a local region.
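The entry describes the idea only in outline: cluster examples and look for bias inside each cluster rather than at the corpus level. The sketch below follows that outline with placeholder choices (k-means over generic embeddings, a positive-prediction-rate gap between two groups); LOGAN's exact clustering features and bias statistic may differ.

```python
# Cluster the examples, then measure a bias statistic inside each cluster so
# that locally concentrated bias becomes visible even when the corpus-level
# number looks small. Statistic and features are placeholders.
import numpy as np
from sklearn.cluster import KMeans

def local_bias_by_cluster(embeddings, predictions, group, n_clusters=5, seed=0):
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(embeddings)
    gaps = {}
    for c in range(n_clusters):
        in_c = labels == c
        a, b = in_c & (group == 0), in_c & (group == 1)
        if a.sum() and b.sum():
            gaps[c] = abs(predictions[a].mean() - predictions[b].mean())
    return gaps  # a large gap in one cluster signals locally concentrated bias

rng = np.random.default_rng(0)
emb = rng.normal(size=(600, 8))
group = rng.integers(0, 2, size=600)
# Predictions that are only biased inside one region of the embedding space.
preds = ((emb[:, 0] > 1.0) & (group == 1)).astype(float)
print(local_bias_by_cluster(emb, preds, group))
```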
arXiv Detail & Related papers (2020-10-06T16:42:51Z) - One Versus all for deep Neural Network Incertitude (OVNNI) quantification [12.734278426543332]
We propose a new technique to easily quantify the epistemic uncertainty of data.
This method consists of mixing the predictions of an ensemble of DNNs trained to classify One class vs All the other classes (OVA) with predictions from a standard DNN trained to perform All vs All (AVA) classification.
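The entry says only that OVA and AVA predictions are mixed; one plausible reading, sketched below, multiplies the AVA softmax probability for each class by the corresponding OVA head's score and treats low resulting confidence as high epistemic uncertainty. The exact combination rule in OVNNI may differ from this placeholder.

```python
# Combine an all-vs-all (AVA) softmax with per-class one-vs-all (OVA) scores:
# when both agree the sample looks familiar; when the OVA heads are all low,
# the combined confidence drops and epistemic uncertainty is flagged as high.
import numpy as np

def ovnni_like_uncertainty(ava_softmax, ova_probs):
    """ava_softmax, ova_probs: arrays of shape (n_samples, n_classes)."""
    combined = np.asarray(ava_softmax) * np.asarray(ova_probs)
    confidence = combined.max(axis=1)   # agreement of AVA and the best OVA head
    return 1.0 - confidence             # high value = high epistemic uncertainty

# In-distribution-like sample (both agree) vs. unfamiliar sample (OVA heads low).
ava = np.array([[0.90, 0.05, 0.05], [0.60, 0.20, 0.20]])
ova = np.array([[0.95, 0.10, 0.05], [0.15, 0.10, 0.12]])
print(ovnni_like_uncertainty(ava, ova))
```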
arXiv Detail & Related papers (2020-06-01T14:06:12Z) - Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
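The transductive update the entry builds on can be sketched directly: refine each class prototype with unlabeled queries, each weighted by a confidence score. In the sketch the confidence is a fixed softmax over negative distances with a temperature; in the paper that confidence is meta-learned rather than fixed.

```python
# Confidence-weighted transductive refinement of class prototypes, assuming a
# fixed distance-based confidence as a stand-in for the meta-learned one.
import numpy as np

def refine_prototypes(prototypes, queries, temperature=1.0):
    """prototypes: (n_classes, d); queries: (n_queries, d)."""
    # Squared Euclidean distance from every query to every prototype.
    d2 = ((queries[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
    logits = -d2 / temperature
    conf = np.exp(logits - logits.max(axis=1, keepdims=True))
    conf /= conf.sum(axis=1, keepdims=True)            # (n_queries, n_classes)
    # Confidence-weighted mean of queries, blended with the original prototypes.
    weighted = conf.T @ queries                        # (n_classes, d)
    counts = conf.sum(axis=0, keepdims=True).T         # soft counts per class
    return (prototypes + weighted) / (1.0 + counts)

protos = np.array([[0.0, 0.0], [5.0, 5.0]])
queries = np.array([[0.2, -0.1], [4.8, 5.2], [5.1, 4.9]])
print(refine_prototypes(protos, queries))
```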
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.