SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness
- URL: http://arxiv.org/abs/2006.14168v2
- Date: Thu, 1 Apr 2021 03:24:44 GMT
- Title: SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness
- Authors: Mikhail Yurochkin and Yuekai Sun
- Abstract summary: We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently.
- Score: 50.916483212900275
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we cast fair machine learning as invariant machine learning.
We first formulate a version of individual fairness that enforces invariance on
certain sensitive sets. We then design a transport-based regularizer that
enforces this version of individual fairness and develop an algorithm to
minimize the regularizer efficiently. Our theoretical results guarantee that the
proposed approach trains certifiably fair ML models. Finally, our experimental
studies demonstrate improved fairness metrics in comparison to
several recent fair training procedures on three ML tasks that are susceptible
to algorithmic bias.
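Schematically, and with all symbols below being my own notation rather than the paper's, the abstract describes an objective of the following form: a task loss plus a transport-based regularizer that penalizes output variation over input pairs that a fair metric deems close (the sensitive sets):

```latex
% Sketch of a transport-regularized fair-training objective, assembled
% from the abstract. Assumed notation: h = model, \ell = task loss,
% d_x = fair metric on inputs, d_y = distance on outputs,
% \rho = regularization strength, \epsilon = transport budget.
\begin{aligned}
\min_{h}\quad & \mathbb{E}_{(x,y)\sim P}\bigl[\ell(h(x),y)\bigr] \;+\; \rho\,R(h),\\
R(h) \;=\; & \sup_{\Pi}\ \mathbb{E}_{(x_1,x_2)\sim\Pi}\bigl[d_y\bigl(h(x_1),h(x_2)\bigr)\bigr]
\quad\text{s.t.}\quad \mathbb{E}_{\Pi}\bigl[d_x(x_1,x_2)\bigr]\le\epsilon,
\end{aligned}
```

where the supremum ranges over couplings whose marginals match the data distribution over inputs. Enforcing invariance on sensitive sets then amounts to driving R(h) toward zero on pairs that d_x treats as interchangeable; the paper's precise formulation may differ in details.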
Related papers
- Differentially Private Post-Processing for Fair Regression [13.855474876965557]
Our algorithm can be applied to post-process any given regressor to improve fairness by remapping its outputs.
We analyze the sample complexity of our algorithm and provide a fairness guarantee, revealing a trade-off between the statistical bias and variance induced by the choice of the number of bins in the histogram.
arXiv Detail & Related papers (2024-05-07T06:09:37Z)
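The two-sentence summary compresses the method considerably. As a purely illustrative sketch (my own construction, not the paper's algorithm; the function names, noise placement, and averaged-target choice are all assumptions), the following shows how DP histograms can drive an output-remapping post-processor, and why the bin count mediates the bias/variance trade-off mentioned above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regressor outputs for two demographic groups (synthetic data).
scores_a = rng.normal(0.4, 0.10, 500)
scores_b = rng.normal(0.6, 0.15, 500)

bins = np.linspace(0.0, 1.0, 21)   # 20 bins: the bin count drives the
eps = 1.0                          # bias/variance trade-off; eps = DP budget

def dp_hist(scores, bins, eps):
    """Per-group histogram with Laplace noise: counts have sensitivity 1,
    so scale 1/eps gives eps-DP (assumption: one sample per individual)."""
    counts, _ = np.histogram(scores, bins=bins)
    noisy = np.clip(counts + rng.laplace(scale=1.0 / eps, size=counts.shape), 0, None)
    return noisy / max(noisy.sum(), 1e-12)

p_a, p_b = dp_hist(scores_a, bins, eps), dp_hist(scores_b, bins, eps)
target = 0.5 * (p_a + p_b)         # shared target: average of group histograms

def remap(scores, probs, target, bins):
    """Quantile-match each group's outputs to the shared target distribution."""
    cdf = np.cumsum(probs)
    tgt_cdf = np.cumsum(target)
    tgt_cdf += 1e-9 * np.arange(1, len(tgt_cdf) + 1)  # strictly increasing for interp
    centers = 0.5 * (bins[:-1] + bins[1:])
    idx = np.clip(np.digitize(scores, bins) - 1, 0, len(probs) - 1)
    u = cdf[idx]                            # rank of each score within its group
    return np.interp(u, tgt_cdf, centers)   # value with the same rank in the target

fair_a = remap(scores_a, p_a, target, bins)
fair_b = remap(scores_b, p_b, target, bins)
print(fair_a.mean(), fair_b.mean())  # group means move toward each other
```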
- Probabilistic Contrastive Learning for Long-Tailed Visual Recognition [78.70453964041718]
Long-tailed distributions frequently emerge in real-world data, where a large number of minority categories contain a limited number of samples.
Recent investigations have revealed that supervised contrastive learning shows promise in alleviating this data imbalance.
We propose a novel probabilistic contrastive (ProCo) learning algorithm that estimates the feature-space distribution of the samples from each class.
arXiv Detail & Related papers (2024-03-11T13:44:49Z)
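The blurb says the method estimates a per-class feature distribution but not how that estimate is used. Below is a deliberately simplified stand-in (diagonal Gaussians plus balanced resampling; both are my assumptions, not ProCo's actual modeling choice) just to make the mechanism concrete:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_class_gaussians(feats, labels):
    """Fit a diagonal Gaussian per class in feature space. A stand-in for
    the per-class distribution estimate the blurb describes."""
    stats = {}
    for c in np.unique(labels):
        f = feats[labels == c]
        stats[c] = (f.mean(axis=0), f.std(axis=0) + 1e-6)
    return stats

def sample_balanced(stats, per_class):
    """Draw an equal number of synthetic features per class, so a
    contrastive loss sees a balanced set even under a long tail."""
    xs, ys = [], []
    for c, (mu, sd) in stats.items():
        xs.append(rng.normal(mu, sd, size=(per_class, mu.shape[0])))
        ys.append(np.full(per_class, c))
    return np.vstack(xs), np.concatenate(ys)

# Toy long-tailed features: class 0 has 200 samples, class 1 only 5.
feats = np.vstack([rng.normal(0, 1, (200, 8)), rng.normal(3, 1, (5, 8))])
labels = np.array([0] * 200 + [1] * 5)
bal_x, bal_y = sample_balanced(fit_class_gaussians(feats, labels), per_class=64)
print(bal_x.shape, np.bincount(bal_y))   # (128, 8) [64 64]
```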
- Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
Fair algorithms with strong performance and better generalizability are needed.
This paper proposes a novel adaptive reweighing method that eliminates the impact of distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z)
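The summary names the mechanism (adaptive reweighing) but not the rule. One plausible generic instantiation (entirely my own; the paper's actual priority and update rule are not reproduced here) upweights the worst-off group after each round, prioritizing low-margin samples near the decision boundary:

```python
import numpy as np

def update_weights(weights, margins, groups, eta=0.5):
    """One adaptive reweighing step (illustrative, not the paper's exact
    rule). `margins` holds signed classification margins; negative means
    misclassified. The group with the worst error rate is upweighted,
    with near-boundary samples getting the largest boost."""
    w = weights.copy()
    group_err = {g: np.mean(margins[groups == g] < 0) for g in np.unique(groups)}
    worst = max(group_err, key=group_err.get)
    mask = groups == worst
    priority = np.exp(-np.abs(margins[mask]))   # near boundary -> priority ~ 1
    w[mask] *= np.exp(eta * priority)
    return w / w.sum()

# Toy round: group 1 has more errors, so its weights grow.
margins = np.array([0.9] * 6 + [-0.1, -0.3, 0.7, 0.8, 0.9, 1.0])
groups = np.array([0] * 6 + [1] * 6)
print(update_weights(np.full(12, 1 / 12), margins, groups))
```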
- Achieving Fairness at No Utility Cost via Data Reweighing with Influence [27.31236521189165]
We propose a data reweighing approach that adjusts only the weights of samples during the training phase.
We granularly model the influence of each training sample on both a fairness-related quantity and predictive utility.
Our approach can empirically ease the trade-off and obtain cost-free fairness for equal opportunity.
arXiv Detail & Related papers (2022-02-01T22:12:17Z)
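The blurb's key idea, per-sample influence on both a fairness quantity and utility, maps onto the classic influence-function estimate; the same machinery underlies the FairIF entry below. The sketch here uses the standard formula, but the selection rule and all names are my assumptions, not the paper's estimator:

```python
import numpy as np

def influence_on_metric(grad_metric, hessian, grad_losses):
    """Classic influence-function estimate of the effect on a (fairness or
    utility) metric of infinitesimally upweighting training sample i:
        I(i) = - grad_metric^T  H^{-1}  grad_loss_i
    I(i) < 0 means upweighting sample i decreases the metric."""
    h_inv_g = np.linalg.solve(hessian, grad_metric)   # H^{-1} grad_metric
    return -grad_losses @ h_inv_g                      # one score per sample

def reweigh(influence_fair, influence_util, base=1.0, step=0.1):
    """Upweight only samples that reduce the fairness gap without
    increasing the utility loss -- the 'no utility cost' condition
    (selection rule is my assumption)."""
    helps_fairness = influence_fair < 0
    no_utility_cost = influence_util <= 0
    return base + step * (helps_fairness & no_utility_cost)

# Toy demo with random, well-conditioned quantities.
rng = np.random.default_rng(0)
H = np.eye(4) + 0.1 * rng.standard_normal((4, 4)); H = H @ H.T
inf_f = influence_on_metric(rng.standard_normal(4), H, rng.standard_normal((10, 4)))
inf_u = influence_on_metric(rng.standard_normal(4), H, rng.standard_normal((10, 4)))
print(reweigh(inf_f, inf_u))
```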
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted data set, where the sample weights are computed via influence functions using a validation set with sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs under various types of bias.
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Revisiting Consistency Regularization for Semi-Supervised Learning [80.28461584135967]
We propose an improved consistency regularization framework built on a simple yet effective technique, FeatDistLoss.
Experimental results show that our model sets a new state of the art across various datasets and settings.
arXiv Detail & Related papers (2021-12-10T20:46:13Z)
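Consistency regularization itself is standard and worth making concrete. The sketch below shows the vanilla technique the title revisits; per the blurb, FeatDistLoss additionally imposes a distance criterion at the feature level, which is not reproduced here:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def consistency_term(logits_weak, logits_strong):
    """Classic consistency regularization: predictions for two augmented
    views of the same unlabeled input should agree (cross-entropy of the
    strong view against the weak view's prediction, treated as a fixed
    target -- in a real trainer, no gradient flows through the target)."""
    target = softmax(logits_weak)
    logp = np.log(softmax(logits_strong) + 1e-12)
    return -np.mean(np.sum(target * logp, axis=1))

def semi_supervised_loss(sup_loss, logits_weak, logits_strong, lam=1.0):
    """Total objective = supervised loss on labeled data
    + lam * consistency term on unlabeled data."""
    return sup_loss + lam * consistency_term(logits_weak, logits_strong)

rng = np.random.default_rng(0)
lw, ls = rng.standard_normal((4, 3)), rng.standard_normal((4, 3))
print(semi_supervised_loss(sup_loss=0.7, logits_weak=lw, logits_strong=ls, lam=0.5))
```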
- Ensuring Fairness Beyond the Training Data [22.284777913437182]
We develop classifiers that are fair with respect to the training distribution and to a class of perturbations of it.
Building on an online learning algorithm, we develop an iterative algorithm that converges to a fair and robust solution.
Our experiments show an inherent trade-off between the fairness and accuracy of such classifiers.
arXiv Detail & Related papers (2020-07-12T16:20:28Z)
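The summary gestures at a game between a learner and distribution perturbations, solved with online learning. A schematic loop in that spirit (all function names are placeholders I introduce; the paper's perturbation class and guarantees are not reproduced):

```python
import numpy as np

def robust_fair_training(fit, fairness_gap, datasets, rounds=20, eta=0.5):
    """Min-max sketch: an adversary runs multiplicative weights over a set
    of perturbed datasets, steering training toward the distributions where
    the current model is least fair.
      fit(p)             -> model trained on the p-weighted mixture of datasets
      fairness_gap(m, d) -> fairness violation of model m on dataset d
    """
    p = np.ones(len(datasets)) / len(datasets)
    models = []
    for _ in range(rounds):
        model = fit(p)                                 # learner best-responds
        gaps = np.array([fairness_gap(model, d) for d in datasets])
        p *= np.exp(eta * gaps)                        # adversary upweights the
        p /= p.sum()                                   # worst distributions
        models.append(model)
    # Averaging the iterates is the usual trick for converging to an
    # equilibrium of such a game; prediction would use that average.
    return models
```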
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There are rising concerns about whether the learned scoring function can cause systematic disparities across different protected groups.
We propose a model post-processing framework that balances ranking fairness and utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
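As with the other entries, the summary omits the criterion being balanced. A minimal model-agnostic sketch (grid search over a per-group score offset; the objective weighting and all names are my assumptions, not the paper's framework) illustrates what post-processing a scoring function can look like in the bipartite-ranking setting:

```python
import numpy as np

def group_shift_search(scores, groups, labels, fair_metric, utility, grid):
    """Post-hoc adjustment sketch: add a constant offset to one group's
    scores and keep the offset that best trades ranking fairness against
    utility on held-out data. Works for any scorer (model-agnostic)."""
    best_delta, best_obj = 0.0, -np.inf
    for delta in grid:
        adj = scores + delta * (groups == 1)                   # shift group 1 only
        obj = utility(adj, labels) - fair_metric(adj, groups)  # equal weights (assumption)
        if obj > best_obj:
            best_delta, best_obj = delta, obj
    return best_delta

# Toy held-out data: group 1 scores are systematically lower by 0.4.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
groups = rng.integers(0, 2, 200)
scores = labels * 0.8 + rng.normal(0, 0.3, 200) - 0.4 * (groups == 1)
util = lambda s, y: s[y == 1].mean() - s[y == 0].mean()   # separation proxy
gap = lambda s, g: abs(s[g == 1].mean() - s[g == 0].mean())
print(group_shift_search(scores, groups, labels, gap, util, np.linspace(-1, 1, 41)))
```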