Genetic programming approaches to learning fair classifiers
- URL: http://arxiv.org/abs/2004.13282v1
- Date: Tue, 28 Apr 2020 04:20:25 GMT
- Title: Genetic programming approaches to learning fair classifiers
- Authors: William La Cava and Jason H. Moore
- Abstract summary: We discuss current approaches to fairness and motivate proposals that incorporate fairness into genetic programming for classification.
The first is to incorporate a fairness objective into multi-objective optimization.
The second is to adapt lexicase selection to define cases dynamically over intersections of protected groups.
- Score: 4.901632310846025
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Society has come to rely on algorithms like classifiers for important
decision making, giving rise to the need for ethical guarantees such as
fairness. Fairness is typically defined by asking that some statistic of a
classifier be approximately equal over protected groups within a population. In
this paper, current approaches to fairness are discussed and used to motivate
algorithmic proposals that incorporate fairness into genetic programming for
classification. We propose two ideas. The first is to incorporate a fairness
objective into multi-objective optimization. The second is to adapt lexicase
selection to define cases dynamically over intersections of protected groups.
We describe why lexicase selection is well suited to pressure models to perform
well across the potentially infinitely many subgroups over which fairness is
desired. We use a recent genetic programming approach to construct models on
four datasets for which fairness constraints are necessary, and empirically
compare performance to prior methods utilizing game-theoretic solutions.
Methods are assessed based on their ability to generate trade-offs of subgroup
fairness and accuracy that are Pareto optimal. The results show that genetic
programming methods in general, and random search in particular, are well
suited to this task.
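To make the second proposal concrete, here is a minimal sketch of lexicase selection with fitness cases defined over intersections of protected groups. It is a plain reading of the abstract rather than the authors' implementation: the demographic-parity-style case error, the `eps` tolerance, and all function names are illustrative assumptions.
```python
import itertools
import random

import numpy as np


def intersectional_cases(protected):
    """Boolean masks for every non-empty intersection of protected attributes.

    `protected` is an (n_samples, n_attributes) integer-coded array, e.g.
    one column for race and one for sex; each mask is one subgroup such as
    (race=1, sex=0).
    """
    value_sets = [np.unique(col) for col in protected.T]
    masks = []
    for combo in itertools.product(*value_sets):
        mask = np.all(protected == np.asarray(combo), axis=1)
        if mask.any():
            masks.append(mask)
    return masks


def case_error(y_pred, mask):
    """Fairness error on one case: gap between subgroup and overall positive rates."""
    return abs(y_pred[mask].mean() - y_pred.mean())


def fair_lexicase_select(candidate_preds, cases, eps=0.01, rng=None):
    """Return the index of one selected parent.

    Cases are streamed in random order; after each case only candidates
    within `eps` of the best error on that case survive.
    """
    rng = rng or random.Random()
    pool = list(range(len(candidate_preds)))
    order = list(cases)
    rng.shuffle(order)
    for mask in order:
        errs = {i: case_error(candidate_preds[i], mask) for i in pool}
        best = min(errs.values())
        pool = [i for i in pool if errs[i] <= best + eps]
        if len(pool) == 1:
            break
    return rng.choice(pool)
```
Because the case order is reshuffled for every selection event, different subgroups act as the first filter for different parents, which is how lexicase selection can pressure a population across combinatorially many subgroups without collapsing them into a single aggregate objective.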
Related papers
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm that maintains strong predictive performance and generalizes better is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model-agnostic post-processing framework, xOrder, for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics; one such ranking metric is sketched after this entry.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
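The xOrder entry above refers to ranking fairness metrics for bipartite ranking. As one hedged illustration (the specific metric is an assumption, not necessarily what xOrder optimizes), the sketch below computes a cross-group AUC gap: how differently positives of one protected group rank against negatives of the other.
```python
import numpy as np


def cross_group_auc(pos_scores, neg_scores):
    """P(positive outranks negative), ties counted as 0.5; arrays must be non-empty."""
    diff = pos_scores[:, None] - neg_scores[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()


def xauc_gap(scores, labels, groups):
    """Gap between the two cross-group AUCs for binary groups coded 0/1."""
    pos0 = scores[(labels == 1) & (groups == 0)]
    neg0 = scores[(labels == 0) & (groups == 0)]
    pos1 = scores[(labels == 1) & (groups == 1)]
    neg1 = scores[(labels == 0) & (groups == 1)]
    # How group-0 positives rank against group-1 negatives, and vice versa.
    return abs(cross_group_auc(pos0, neg1) - cross_group_auc(pos1, neg0))
```
A post-processing method in this setting would adjust scores or orderings until this gap is small while losing as little overall ranking utility as possible.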
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness; a naive probe of the latter is sketched after this entry.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
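The DualFair entry above targets counterfactual fairness at the individual level. The sketch below is a deliberately naive probe of that criterion, assuming a tabular model and a binary sensitive column; flipping one column ignores the causal, downstream changes that full counterfactual fairness accounts for, so treat it as an illustration only, not DualFair's method.
```python
import numpy as np


def counterfactual_flip_rate(predict, X, sensitive_col):
    """Fraction of individuals whose prediction changes when only the binary
    sensitive attribute column is flipped; 0.0 is the ideal under this naive test.
    """
    X_cf = X.copy()
    X_cf[:, sensitive_col] = 1 - X_cf[:, sensitive_col]
    return float(np.mean(predict(X) != predict(X_cf)))
```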
- Enforcing Group Fairness in Algorithmic Decision Making: Utility Maximization Under Sufficiency [0.0]
This paper focuses on the fairness concepts of positive predictive value (PPV) parity, false omission rate (FOR) parity, and sufficiency.
We show that group-specific threshold rules are optimal for PPV parity and FOR parity; a toy version of such rules is sketched after this entry.
We also provide a solution for the optimal decision rules satisfying the sufficiency fairness constraint.
arXiv Detail & Related papers (2022-06-05T18:47:34Z)
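The entry above reports that group-specific threshold rules are optimal for PPV parity. The following sketch is a toy, grid-search stand-in for that result (the paper derives the rules analytically): for each group it picks the lowest threshold whose group PPV reaches a target, keeping as many predicted positives as possible.
```python
import numpy as np


def ppv(scores, labels, threshold):
    """Positive predictive value at a threshold (precision among flagged items)."""
    flagged = scores >= threshold
    if not flagged.any():
        return 1.0  # vacuously precise when nothing is flagged
    return labels[flagged].mean()


def group_thresholds_for_ppv(scores, labels, groups, target_ppv):
    """Lowest per-group threshold on a coarse grid whose group PPV reaches the target."""
    grid = np.linspace(0.0, 1.0, 101)
    thresholds = {}
    for g in np.unique(groups):
        m = groups == g
        feasible = [t for t in grid if ppv(scores[m], labels[m], t) >= target_ppv]
        thresholds[g] = min(feasible) if feasible else 1.0
    return thresholds
```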
- Addressing Fairness in Classification with a Model-Agnostic Multi-Objective Algorithm [33.145522561104464]
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender.
One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms.
We leverage this approach to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations.
arXiv Detail & Related papers (2020-09-09T17:40:24Z)
- Transparency Tools for Fairness in AI (Luskin) [12.158766675246337]
We propose new tools for assessing and correcting fairness and bias in AI algorithms.
Three tools are proposed, including a new definition of fairness called "controlled fairness" with respect to choices of protected features and filters.
The tools are useful for understanding various dimensions of bias, and in practice the algorithms are effective in starkly reducing a given observed bias when tested on new data.
arXiv Detail & Related papers (2020-07-09T00:21:54Z)
- Fairness with Overlapping Groups [15.154984899546333]
A standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously; a simple check of this goal is sketched after this entry.
We reconsider this standard fair classification problem using a probabilistic population analysis.
Our approach unifies a variety of existing group-fair classification methods and enables extensions to a wide range of non-decomposable multiclass performance metrics and fairness measures.
arXiv Detail & Related papers (2020-06-24T05:01:10Z)
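For the overlapping-groups entry above, a hedged sketch of the evaluation side: the worst gap of a statistic between any (possibly overlapping) group and the population, which is the quantity one would drive toward zero. The choice of statistic and the boolean-mask representation are assumptions.
```python
import numpy as np


def max_group_violation(y_pred, group_masks, statistic=np.mean):
    """Worst absolute gap between a statistic on any group and the population.

    Groups may overlap (e.g. 'women', 'over 40', 'women over 40'); driving
    this gap to zero enforces the fairness metric on all of them at once.
    """
    overall = statistic(y_pred)
    return max(abs(statistic(y_pred[m]) - overall) for m in group_masks)
```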
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and algorithm utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Fair Classification via Unconstrained Optimization [0.0]
We show that the Bayes optimal fair learning rule remains a group-wise thresholding rule over the Bayes regressor.
The proposed algorithm can be applied to any black-box machine learning model.
arXiv Detail & Related papers (2020-05-21T11:29:05Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach; a toy illustration of the noise problem is sketched below.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
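For the last entry, a toy Monte Carlo probe (an assumption on my part; the paper itself uses robust optimization, not sampling) of why noisy protected labels are dangerous: the demographic-parity gap measured on noisy labels can badly understate the worst-case gap once labels may be flipped.
```python
import numpy as np


def worst_case_dp_gap(y_pred, noisy_groups, flip_rate, trials=200, seed=0):
    """Monte Carlo probe: largest demographic-parity gap over resampled group
    labels, assuming each binary label is wrong independently with `flip_rate`.
    """
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(trials):
        flips = rng.random(len(noisy_groups)) < flip_rate
        g = np.where(flips, 1 - noisy_groups, noisy_groups)
        if (g == 0).any() and (g == 1).any():
            gap = abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())
            worst = max(worst, gap)
    return worst
```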