Fair Wrapping for Black-box Predictions
- URL: http://arxiv.org/abs/2201.12947v1
- Date: Mon, 31 Jan 2022 01:02:39 GMT
- Title: Fair Wrapping for Black-box Predictions
- Authors: Alexander Soen, Ibrahim Alabdulmohsin, Sanmi Koyejo, Yishay Mansour,
Nyalleng Moorosi, Richard Nock, Ke Sun, Lexing Xie
- Abstract summary: We learn a wrapper function, which we define as an α-tree, that modifies the prediction.
We show that our modification has appealing properties in terms of composition of α-trees, generalization, interpretability, and KL divergence between modified and original predictions.
- Score: 105.10203274098862
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a new family of techniques to post-process ("wrap") a black-box
classifier in order to reduce its bias. Our technique builds on the recent
analysis of improper loss functions whose optimisation can correct any twist in
prediction, unfairness being treated as a twist. In the post-processing, we
learn a wrapper function which we define as an α-tree, which modifies
the prediction. We provide two generic boosting algorithms to learn
α-trees. We show that our modification has appealing properties in terms
of composition of α-trees, generalization, interpretability, and KL
divergence between modified and original predictions. We exemplify the use of
our technique in three fairness notions: conditional value at risk, equality of
opportunity, and statistical parity; and provide experiments on several readily
available datasets.
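As a rough illustration of the post-processing idea only, the sketch below wraps a toy black-box posterior with a depth-1 tree whose leaves carry per-leaf α parameters, fitted by a brute-force grid search against a statistical-parity gap. The tilting formula p^α / (p^α + (1-p)^α), the leaf structure, and the grid-search fit are illustrative assumptions standing in for the paper's α-trees and boosting algorithms, not the authors' implementation.

```python
# Minimal sketch (NOT the paper's implementation) of wrapping a black-box
# classifier: a tiny partition of the input space carries per-leaf alpha
# parameters that tilt the black-box posterior. The tilting formula, the
# leaf structure, and the grid-search fitting are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box classifier: we only observe its posterior p(y=1|x).
def black_box_posterior(X):
    # An opaque scoring rule standing in for any pre-trained model; it leans
    # on the (hypothetical) sensitive attribute in column 2, creating a gap.
    z = 1.5 * X[:, 0] - 0.5 * X[:, 1] + 1.0 * X[:, 2]
    return 1.0 / (1.0 + np.exp(-z))

# A depth-1 "alpha-tree": one split on the sensitive attribute.
def alpha_tree_leaf(X):
    return X[:, 2].astype(int)  # leaf 0: group a, leaf 1: group b

def wrap(p, leaf, alphas):
    # Illustrative per-leaf correction: tilt the posterior by exponent alpha.
    a = alphas[leaf]
    num = p ** a
    return num / (num + (1.0 - p) ** a)

# Toy data with a binary "sensitive" attribute in column 2.
n = 4000
X = np.column_stack([rng.normal(size=n), rng.normal(size=n),
                     rng.integers(0, 2, size=n).astype(float)])
p = black_box_posterior(X)
leaf = alpha_tree_leaf(X)

def statistical_parity_gap(scores, leaf):
    # |E[score | group a] - E[score | group b]| as a simple fairness proxy.
    return abs(scores[leaf == 0].mean() - scores[leaf == 1].mean())

# Fit per-leaf alphas by grid search (a stand-in for the boosting algorithms).
grid = np.linspace(0.25, 4.0, 40)
best = min(
    ((a0, a1) for a0 in grid for a1 in grid),
    key=lambda ab: statistical_parity_gap(wrap(p, leaf, np.array(ab)), leaf),
)

print("original gap:", statistical_parity_gap(p, leaf))
print("wrapped gap: ", statistical_parity_gap(wrap(p, leaf, np.array(best)), leaf))
```

Because the black-box model is only queried through its posterior, the wrapper never touches its parameters, which is the sense in which the correction is a post-processing step.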
Related papers
- AITTI: Learning Adaptive Inclusive Token for Text-to-Image Generation [53.65701943405546]
We learn adaptive inclusive tokens to shift the attribute distribution of the final generative outputs.
Our method requires neither explicit attribute specification nor prior knowledge of the bias distribution.
Our method achieves comparable performance to models that require specific attributes or editing directions for generation.
arXiv Detail & Related papers (2024-06-18T17:22:23Z) - Treeffuser: Probabilistic Predictions via Conditional Diffusions with Gradient-Boosted Trees [39.9546129327526]
Treeffuser is an easy-to-use method for probabilistic prediction on tabular data.
Treeffuser learns well-calibrated predictive distributions and can handle a wide range of regression tasks.
We demonstrate its versatility with an application to inventory allocation under uncertainty using sales data from Walmart.
arXiv Detail & Related papers (2024-06-11T18:59:24Z) - Efficient and Differentiable Conformal Prediction with General Function
Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximately valid population coverage and near-optimal efficiency within the function class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
arXiv Detail & Related papers (2022-02-22T18:37:23Z) - Probabilistic Regression with Huber Distributions [6.681943980068051]
We describe a probabilistic method for estimating the position of an object along with its covariance matrix using neural networks.
Our method is designed to be robust to outliers and to have bounded gradients with respect to the network outputs, among other desirable properties.
We evaluate our method on popular body pose and facial landmark datasets and get performance on par or exceeding the performance of non-heatmap methods.
arXiv Detail & Related papers (2021-11-19T16:12:15Z) - GIFAIR-FL: An Approach for Group and Individual Fairness in Federated
Learning [8.121462458089143]
In this paper we propose GIFAIR-FL: an approach that addresses both group and individual fairness.
We show convergence in non-i.i.d. and strongly convex settings.
Compared to existing algorithms, our method shows improved results while achieving superior or similar prediction accuracy.
arXiv Detail & Related papers (2021-08-05T17:13:43Z) - Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z) - Causality-based Counterfactual Explanation for Classification Models [11.108866104714627]
We propose a prototype-based counterfactual explanation framework (ProCE).
ProCE is capable of preserving the causal relationship underlying the features of the counterfactual data.
In addition, we design a novel gradient-free optimization based on the multi-objective genetic algorithm that generates the counterfactual explanations.
arXiv Detail & Related papers (2021-05-03T09:25:59Z) - Set Prediction without Imposing Structure as Conditional Density
Estimation [40.86881969839325]
We propose an alternative to training via set losses by viewing learning as conditional density estimation.
Our framework fits deep energy-based models and approximates the intractable likelihood with gradient-guided sampling.
Our approach is competitive with previous set prediction models on standard benchmarks.
arXiv Detail & Related papers (2020-10-08T16:49:16Z) - Evaluating Prediction-Time Batch Normalization for Robustness under
Covariate Shift [81.74795324629712]
We study what we call prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift.
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z) - Regularizing Class-wise Predictions via Self-knowledge Distillation [80.76254453115766]
We propose a new regularization method that penalizes the predictive distribution between similar samples.
This results in regularizing the dark knowledge (i.e., the knowledge on wrong predictions) of a single network.
Our experimental results on various image classification tasks demonstrate that this simple yet powerful method can significantly improve the generalization ability.
arXiv Detail & Related papers (2020-03-31T06:03:51Z)