Prune Responsibly
- URL: http://arxiv.org/abs/2009.09936v1
- Date: Thu, 10 Sep 2020 04:43:11 GMT
- Title: Prune Responsibly
- Authors: Michela Paganini
- Abstract summary: Irrespective of the specific definition of fairness in a machine learning application, pruning the underlying model affects it.
We investigate and document the emergence and exacerbation of undesirable per-class performance imbalances, across tasks and architectures, for almost one million categories considered across over 100K image classification models that undergo a pruning process.
We demonstrate the need for transparent reporting, inclusive of bias, fairness, and inclusion metrics, in real-life engineering decision-making around neural network pruning.
- Score: 0.913755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Irrespective of the specific definition of fairness in a machine learning
application, pruning the underlying model affects it. We investigate and
document the emergence and exacerbation of undesirable per-class performance
imbalances, across tasks and architectures, for almost one million categories
considered across over 100K image classification models that undergo a pruning
process. We demonstrate the need for transparent reporting, inclusive of bias,
fairness, and inclusion metrics, in real-life engineering decision-making
around neural network pruning. In response to the calls for quantitative
evaluation of AI models to be population-aware, we present neural network
pruning as a tangible application domain where the ways in which
accuracy-efficiency trade-offs disproportionately affect underrepresented or
outlier groups have historically been overlooked. We provide a simple,
Pareto-based framework to insert fairness considerations into value-based
operating point selection processes, and to re-evaluate pruning technique
choices.
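
To make the Pareto-based selection idea above concrete, the snippet below is a minimal sketch, not the paper's code, of how a per-class accuracy signal can be folded into value-based operating-point selection over pruned checkpoints. The `OperatingPoint` fields, the worst-class-accuracy floor, and all numbers are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class OperatingPoint:
    name: str
    sparsity: float              # fraction of weights pruned (higher = cheaper)
    overall_acc: float           # aggregate top-1 accuracy
    per_class_acc: List[float]   # accuracy broken out per class

    @property
    def worst_class_acc(self) -> float:
        return min(self.per_class_acc)


def pareto_front(points: List[OperatingPoint]) -> List[OperatingPoint]:
    """Keep points not dominated on (sparsity, overall_acc):
    the usual accuracy-efficiency trade-off, with no fairness signal yet."""
    def dominates(a: OperatingPoint, b: OperatingPoint) -> bool:
        ge = a.sparsity >= b.sparsity and a.overall_acc >= b.overall_acc
        gt = a.sparsity > b.sparsity or a.overall_acc > b.overall_acc
        return ge and gt
    return [p for p in points if not any(dominates(q, p) for q in points)]


# Three hypothetical pruned checkpoints of the same classifier.
candidates = [
    OperatingPoint("light",      0.50, 0.92, [0.95, 0.93, 0.88]),
    OperatingPoint("aggressive", 0.90, 0.90, [0.97, 0.95, 0.62]),  # one class collapses
    OperatingPoint("moderate",   0.80, 0.89, [0.91, 0.89, 0.87]),
]

front = pareto_front(candidates)
print("aggregate front:", [p.name for p in front])  # ['light', 'aggressive']

# A fairness-aware rule re-screens all candidates, not just the aggregate
# front: require every class to stay above a floor, then take the sparsest
# surviving checkpoint. "moderate", discarded above because "aggressive"
# beats it on aggregate numbers, is the point this rule actually selects.
ACC_FLOOR = 0.80  # illustrative threshold, not from the paper
fair = [p for p in candidates if p.worst_class_acc >= ACC_FLOOR]
chosen = max(fair, key=lambda p: p.sparsity)
print("fairness-aware choice:", chosen.name)  # 'moderate'
```

In this toy setup the aggregate screen keeps the checkpoint whose worst class has collapsed, while the fairness-aware rule recovers the balanced checkpoint that the aggregate-only comparison had discarded; this is the kind of per-class imbalance the paper argues should be reported alongside accuracy-efficiency numbers.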
Related papers
- Fairness Evaluation for Uplift Modeling in the Absence of Ground Truth [2.946562343070891]
We propose a framework that overcomes the missing ground truth problem by generating surrogates.
We show how to apply the approach in a comprehensive study from a real-world marketing campaign.
arXiv Detail & Related papers (2024-02-12T06:13:24Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Investigating the Limitation of CLIP Models: The Worst-Performing Categories [53.360239882501325]
Contrastive Language-Image Pre-training (CLIP) provides a foundation model by integrating natural language into visual concepts.
It is usually expected that satisfactory overall accuracy can be achieved across numerous domains through well-designed textual prompts.
However, we found that their performance in the worst categories is significantly inferior to the overall performance.
arXiv Detail & Related papers (2023-10-05T05:37:33Z)
- Model Debiasing via Gradient-based Explanation on Representation [14.673988027271388]
We propose a novel fairness framework that performs debiasing with regard to sensitive attributes and proxy attributes.
Our framework achieves better fairness-accuracy trade-off on unstructured and structured datasets than previous state-of-the-art approaches.
arXiv Detail & Related papers (2023-05-20T11:57:57Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- FairAdaBN: Mitigating unfairness with adaptive batch normalization and its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute normalized fairness improvement over accuracy drop.
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z)
- Fair Infinitesimal Jackknife: Mitigating the Influence of Biased Training Data Points Without Refitting [41.96570350954332]
We propose an algorithm that improves the fairness of a pre-trained classifier by simply dropping carefully selected training data points.
We find that such an intervention does not substantially reduce the predictive performance of the model but drastically improves the fairness metric.
arXiv Detail & Related papers (2022-12-13T18:36:19Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Characterizing Fairness Over the Set of Good Models Under Selective Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
- Towards Threshold Invariant Fair Classification [10.317169065327546]
This paper introduces the notion of threshold invariant fairness, which enforces equitable performances across different groups independent of the decision threshold.
Experimental results demonstrate that the proposed methodology is effective to alleviate the threshold sensitivity in machine learning models designed to achieve fairness.
arXiv Detail & Related papers (2020-06-18T16:49:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.