Addressing Fairness in Classification with a Model-Agnostic
Multi-Objective Algorithm
- URL: http://arxiv.org/abs/2009.04441v3
- Date: Tue, 8 Jun 2021 12:39:26 GMT
- Title: Addressing Fairness in Classification with a Model-Agnostic
Multi-Objective Algorithm
- Authors: Kirtan Padh, Diego Antognini, Emma Lejal Glaude, Boi Faltings, Claudiu
Musat
- Abstract summary: The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender.
One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms.
We observe that the hyperbolic tangent function can approximate the indicator function, and we leverage this property to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations.
- Score: 33.145522561104464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of fairness in classification is to learn a classifier that does not
discriminate against groups of individuals based on sensitive attributes, such
as race and gender. One approach to designing fair algorithms is to use
relaxations of fairness notions as regularization terms or in a constrained
optimization problem. We observe that the hyperbolic tangent function can
approximate the indicator function. We leverage this property to define a
differentiable relaxation that approximates fairness notions provably better
than existing relaxations. In addition, we propose a model-agnostic
multi-objective architecture that can simultaneously optimize for multiple
fairness notions and multiple sensitive attributes and supports all statistical
parity-based notions of fairness. We use our relaxation with the
multi-objective architecture to learn fair classifiers. Experiments on public
datasets show that our method suffers a significantly lower loss of accuracy
than current debiasing algorithms relative to the unconstrained model.
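The two technical ingredients of the abstract admit a compact illustration. The sketch below is a minimal reading of the idea, not the authors' implementation: the function names, the steepness constant `c`, and the simple weighted-sum scalarization of the multiple objectives are assumptions; the paper's exact surrogate and multi-objective scheme may differ.

```python
import torch

def relaxed_positive_rate(scores: torch.Tensor, c: float = 10.0) -> torch.Tensor:
    # The hard positive rate averages the indicator 1[score > 0]; the smooth
    # surrogate 0.5 * (1 + tanh(c * score)) approaches that indicator as the
    # steepness c grows, while staying differentiable in the scores.
    return (0.5 * (1.0 + torch.tanh(c * scores))).mean()

def relaxed_parity_gap(scores: torch.Tensor, group: torch.Tensor,
                       c: float = 10.0) -> torch.Tensor:
    # Relaxed statistical-parity gap between the two groups of a binary
    # sensitive attribute (group is a boolean mask over the batch).
    return (relaxed_positive_rate(scores[group], c)
            - relaxed_positive_rate(scores[~group], c)).abs()

def combined_loss(scores, labels, groups, weights, c: float = 10.0):
    # One accuracy term plus one relaxed fairness term per sensitive
    # attribute, combined as a weighted sum -- one simple way to
    # scalarize a multi-objective problem.  labels: float 0./1. tensor;
    # groups: list of boolean masks; weights: matching list of floats.
    task = torch.nn.functional.binary_cross_entropy_with_logits(scores, labels)
    fair = sum(w * relaxed_parity_gap(scores, g, c)
               for w, g in zip(weights, groups))
    return task + fair
```

As c grows, 0.5 * (1 + tanh(c * z)) converges pointwise to the indicator 1[z > 0] (except at z = 0), which is the sense in which the relaxed gap can track the true statistical-parity gap.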
Related papers
- Classes Are Not Equal: An Empirical Study on Image Recognition Fairness [100.36114135663836]
We experimentally demonstrate that classes are not equal and the fairness issue is prevalent for image classification models across various datasets.
Our findings reveal that models tend to exhibit greater prediction biases for classes that are more challenging to recognize.
Data augmentation and representation learning algorithms improve overall performance by promoting fairness to some degree in image classification.
arXiv Detail & Related papers (2024-02-28T07:54:50Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
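A zero-shot bias probe of the kind described can be sketched against the public CLIP checkpoints; the prompt set, grouping, and per-group accuracy metric below are illustrative assumptions, not the paper's evaluation protocol.

```python
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def zero_shot_predict(images, class_prompts):
    # Score each image against every text prompt; CLIP's image-text
    # similarity logits give a zero-shot classification over the prompts.
    inputs = processor(text=class_prompts, images=images,
                       return_tensors="pt", padding=True)
    logits = model(**inputs).logits_per_image  # (n_images, n_prompts)
    return logits.argmax(dim=-1)

def group_accuracy(preds, labels, group_ids):
    # Per-group accuracy: large gaps across demographic groups are one
    # simple signal of the biases such an evaluation looks for.
    return {g: (preds[group_ids == g] == labels[group_ids == g])
               .float().mean().item()
            for g in set(group_ids.tolist())}
```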
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Group Fairness with Uncertainty in Sensitive Attributes [34.608332397776245]
A fair predictive model is crucial to mitigate biased decisions against minority groups in high-stakes applications.
We propose a bootstrap-based algorithm that achieves the target level of fairness despite the uncertainty in sensitive attributes.
Our algorithm is applicable to both discrete and continuous sensitive attributes and is effective in real-world classification and regression tasks.
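The summary gives no algorithmic detail, so the following is only a guess at the general shape of a bootstrap approach: attribute assignments are resampled from a probabilistic attribute model and the worst plausible disparity is audited. Every name and step here is an assumption, not the paper's method.

```python
import numpy as np

def bootstrap_parity_audit(y_pred, attr_probs, n_boot=1000, rng=None):
    # attr_probs[i] = estimated P(individual i belongs to group 1); the
    # true sensitive attribute is unknown.  Each bootstrap round draws a
    # plausible attribute assignment and measures the demographic-parity
    # gap under it (assumes both groups are sampled non-empty).
    rng = rng or np.random.default_rng(0)
    gaps = []
    for _ in range(n_boot):
        a = rng.random(len(attr_probs)) < attr_probs  # sampled attributes
        gaps.append(abs(y_pred[a].mean() - y_pred[~a].mean()))
    # Report an upper quantile so a fairness target can be certified to
    # hold with high probability over the attribute uncertainty.
    return np.quantile(gaps, 0.95)
```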
arXiv Detail & Related papers (2023-02-16T04:33:00Z)
- Certifying Fairness of Probabilistic Circuits [33.1089249944851]
We propose an algorithm to search for discrimination patterns in a general class of probabilistic models, namely probabilistic circuits.
We also introduce new classes of patterns such as minimal, maximal, and optimal patterns that can effectively summarize exponentially many discrimination patterns.
arXiv Detail & Related papers (2022-12-05T18:36:45Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
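The cross-covariance construction is kernel-based; a closely related and widely used dependence measure, HSIC, can stand in to show the flavor of such fairness measures. The sketch below is in that spirit and is not FairCOCCO itself.

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    # Gaussian-kernel Gram matrix for 1-D inputs.
    d = x[:, None] - x[None, :]
    return np.exp(-d ** 2 / (2 * sigma ** 2))

def hsic(preds, sensitive, sigma=1.0):
    # Empirical Hilbert-Schmidt Independence Criterion: zero in the
    # population limit (with characteristic kernels) iff predictions and
    # the sensitive attribute are independent -- the property a kernel
    # fairness measure exploits.
    n = len(preds)
    K, L = rbf_gram(preds, sigma), rbf_gram(sensitive, sigma)
    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2
```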
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem.
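For one-dimensional scores the barycenter characterization has a concrete form: the Wasserstein-2 barycenter of the group-conditional score distributions is obtained by averaging their quantile functions. The sketch below illustrates that reduction and the resulting parity-enforcing post-processing; it is a schematic of the general idea, not the paper's algorithm.

```python
import numpy as np

def barycenter_postprocess(scores, groups, grid=np.linspace(0.0, 1.0, 101)):
    # In 1-D, the Wasserstein barycenter of the group-conditional score
    # distributions has a quantile function equal to the weighted average
    # of the per-group quantile functions.
    uniq, counts = np.unique(groups, return_counts=True)
    weights = counts / counts.sum()
    quantiles = {g: np.quantile(scores[groups == g], grid) for g in uniq}
    bary = sum(w * quantiles[g] for g, w in zip(uniq, weights))

    # Map each individual's score to the barycenter via its within-group
    # rank: every group then shares one score distribution, so any
    # threshold rule on the adjusted scores satisfies demographic parity.
    adjusted = np.empty_like(scores, dtype=float)
    for g in uniq:
        mask = groups == g
        ranks = np.searchsorted(np.sort(scores[mask]), scores[mask]) / mask.sum()
        adjusted[mask] = np.interp(ranks, grid, bary)
    return adjusted
```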
arXiv Detail & Related papers (2022-11-03T00:04:04Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- Learning Optimal Fair Classification Trees: Trade-offs Between Interpretability, Fairness, and Accuracy [7.215903549622416]
We propose a mixed integer optimization framework for learning optimal classification trees.
We benchmark our method against state-of-the-art approaches for fair classification on popular datasets.
Our method consistently finds decisions with almost full parity, while other methods rarely do.
arXiv Detail & Related papers (2022-01-24T19:47:10Z)
- Fair Tree Learning [0.15229257192293202]
Various optimisation criteria combine classification performance with a fairness metric.
Current fair decision tree methods optimise only for a fixed threshold on both the classification task and the fairness metric.
We propose a threshold-independent fairness metric termed uniform demographic parity, and a derived splitting criterion entitled SCAFF -- Splitting Criterion AUC for Fairness.
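Reading the acronym literally suggests how such a split score might look: reward class separability measured by AUC, penalize sensitive-attribute separability measured the same way, with a trade-off parameter. The sketch below follows that reading and is an illustration, not the paper's exact criterion.

```python
from sklearn.metrics import roc_auc_score

def scaff_like_score(split_soft_pred, y_true, sensitive, theta=0.5):
    # split_soft_pred: soft predictions induced by a candidate split
    # (e.g., the child-node positive rates assigned to each sample).
    auc_task = roc_auc_score(y_true, split_soft_pred)
    # How well the same split separates the sensitive groups; 0.5 means
    # the split carries no information about the sensitive attribute.
    auc_sens = roc_auc_score(sensitive, split_soft_pred)
    # Reward class separation, penalize sensitive-attribute separation.
    # Both terms are threshold-independent, unlike a fixed-cutoff metric.
    return (1 - theta) * auc_task - theta * 2 * abs(auc_sens - 0.5)
```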
arXiv Detail & Related papers (2021-10-18T13:40:25Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.