Priority-based Post-Processing Bias Mitigation for Individual and Group
Fairness
- URL: http://arxiv.org/abs/2102.00417v1
- Date: Sun, 31 Jan 2021 09:25:28 GMT
- Title: Priority-based Post-Processing Bias Mitigation for Individual and Group
Fairness
- Authors: Pranay Lohia
- Abstract summary: We propose a priority-based post-processing bias mitigation algorithm for both group and individual fairness.
Our novel framework achieves this by using a user segmentation algorithm to better capture consumption strategy.
It upholds fair tariff allotment for the entire population under consideration without modifying the built-in tariff-calculation process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Previous post-processing bias mitigation algorithms for group and
individual fairness do not work on regression models or on datasets with
multi-class numerical labels. We propose a priority-based post-processing bias
mitigation algorithm for both group and individual fairness, built on the notion
that similar individuals should get similar outcomes irrespective of
socio-economic factors, and that the greater the unfairness, the greater the
injustice. We establish this proposition through a case study on tariff
allotment in a smart grid. Our novel framework achieves this by using a user
segmentation algorithm to better capture consumption strategy. This process
ensures priority-based fair pricing for the groups and individuals facing the
greatest injustice. It upholds fair tariff allotment for the entire population
under consideration without modifying the built-in tariff-calculation process.
We also validate our method and show superior performance to previous work on a
real-world criminal sentencing dataset.
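The abstract does not come with code; the snippet below is only a minimal sketch of how a priority-based post-processing step of this kind could look, assuming a fitted base model whose tariff predictions are already available, a boolean protected-group indicator, and hypothetical names (priority_post_process, n_segments, top_k) that are not taken from the paper.

import numpy as np
from sklearn.cluster import KMeans

def priority_post_process(y_pred, X, protected, n_segments=5, top_k=2):
    # Hypothetical sketch: repair the most unfair segments first while
    # leaving the base model's tariff calculation untouched.
    # 1) Segment users by consumption features (stand-in for the paper's
    #    user segmentation step).
    segments = KMeans(n_clusters=n_segments, n_init=10, random_state=0).fit_predict(X)
    # 2) Unfairness per segment: gap between the mean predicted tariffs of
    #    protected and non-protected users.
    gaps = {}
    for s in np.unique(segments):
        m = segments == s
        if protected[m].any() and (~protected[m]).any():
            gaps[s] = abs(y_pred[m & protected].mean() - y_pred[m & ~protected].mean())
    adjusted = y_pred.copy()
    # 3) Priority: the larger the gap, the earlier the segment is repaired.
    for s in sorted(gaps, key=gaps.get, reverse=True)[:top_k]:
        m = segments == s
        # Similar individuals get similar outcomes: protected users in the
        # segment receive the segment-wide mean tariff.
        adjusted[m & protected] = y_pred[m].mean()
    return adjusted

Because only the predictions are edited, the built-in tariff calculation stays untouched, mirroring the post-processing constraint stated in the abstract; the specific repair rule above is an illustrative assumption, not the authors' method.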
Related papers
- Cost Efficient Fairness Audit Under Partial Feedback [14.57835291220813]
We study the problem of auditing the fairness of a given classifier under partial feedback.
We introduce a novel cost model for acquiring additional labeled data.
We show that our algorithms consistently outperform natural baselines by around 50% in terms of audit cost.
arXiv Detail & Related papers (2025-10-04T08:38:03Z) - Fairness for the People, by the People: Minority Collective Action [50.29077265863936]
Machine learning models often preserve biases present in training data, leading to unfair treatment of certain minority groups.
We propose that a coordinated minority group strategically relabel its own data to enhance fairness, without altering the firm's training process.
Our findings show that a subgroup of the minority can substantially reduce unfairness with a small impact on the overall prediction error.
arXiv Detail & Related papers (2025-08-21T09:09:39Z) - hyperFA*IR: A hypergeometric approach to fair rankings with finite candidate pool [0.0]
We present hyperFA*IR, a framework for assessing and enforcing fairness in rankings drawn from a finite set of candidates.
It relies on a generative process based on the hypergeometric distribution, which models real-world scenarios by sampling without replacement from fixed group sizes (a toy illustration of this idea appears after this list).
We also propose a Monte Carlo-based algorithm that efficiently detects unfair rankings by avoiding computationally expensive parameter tuning.
arXiv Detail & Related papers (2025-06-17T09:45:08Z) - Equitable Federated Learning with Activation Clustering [5.116582735311639]
Federated learning is a prominent distributed learning paradigm that incorporates collaboration among diverse clients.
We propose an equitable clustering-based framework where the clients are categorized/clustered based on how similar they are to each other.
arXiv Detail & Related papers (2024-10-24T23:36:39Z) - Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - Optimal Group Fair Classifiers from Linear Post-Processing [10.615965454674901]
We propose a post-processing algorithm for fair classification that mitigates model bias under a unified family of group fairness criteria.
It achieves fairness by re-calibrating the output score of the given base model with a "fairness cost": a linear combination of the (predicted) group memberships (a toy illustration appears after this list).
arXiv Detail & Related papers (2024-05-07T05:58:44Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Boosting Fair Classifier Generalization through Adaptive Priority Reweighing [59.801444556074394]
A fair algorithm with promising performance and better generalizability is needed.
This paper proposes a novel adaptive reweighing method to eliminate the impact of the distribution shifts between training and test data on model generalizability.
arXiv Detail & Related papers (2023-09-15T13:04:55Z) - Algorithmic Fairness and Vertical Equity: Income Fairness with IRS Tax
Audit Models [73.24381010980606]
This study examines issues of algorithmic fairness in the context of systems that inform tax audit selection by the IRS.
We show how the use of more flexible machine learning methods for selecting audits may affect vertical equity.
Our results have implications for the design of algorithmic tools across the public sector.
arXiv Detail & Related papers (2022-06-20T16:27:06Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Metric-Free Individual Fairness with Cooperative Contextual Bandits [17.985752744098267]
Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group.
Individual fairness remains understudied due to its reliance on problem-specific similarity metrics.
We propose a metric-free formulation of individual fairness and a cooperative contextual bandits algorithm.
arXiv Detail & Related papers (2020-11-13T03:10:35Z) - On the Fairness of Causal Algorithmic Recourse [36.519629650529666]
We propose two new fairness criteria at the group and individual level.
We show that fairness of recourse is complementary to fairness of prediction.
We discuss whether fairness violations in the data generating process revealed by our criteria may be better addressed by societal interventions.
arXiv Detail & Related papers (2020-10-13T16:35:06Z) - Distributional Individual Fairness in Clustering [7.303841123034983]
We introduce a framework for assigning individuals, embedded in a metric space, to probability distributions over a bounded number of cluster centers.
We provide an algorithm for clustering with a $p$-norm objective and individual fairness constraints, with a provable approximation guarantee.
arXiv Detail & Related papers (2020-06-22T20:02:09Z)
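For the hyperFA*IR entry above, here is a small illustration of the underlying idea: with a finite candidate pool, the number of protected candidates in the top-k of a neutrally drawn ranking follows a hypergeometric distribution (sampling without replacement), so an observed top-k count can be checked against it. The function name and numbers below are illustrative, not taken from that paper.

from scipy.stats import hypergeom

def topk_underrepresentation_pvalue(pool_size, protected_in_pool, k, protected_in_topk):
    # Probability of seeing at most this many protected candidates in the
    # top-k if the k positions were filled by sampling the finite pool
    # without replacement (hypergeometric model).
    return hypergeom.cdf(protected_in_topk, pool_size, protected_in_pool, k)

# Toy numbers: 100 candidates, 40 protected, only 2 protected in the top 10.
print(round(topk_underrepresentation_pvalue(100, 40, 10, 2), 4))  # a small value flags possible unfairness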
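Similarly, the "fairness cost" in the linear post-processing entry can be pictured as subtracting a linear combination of predicted group memberships from the base score before thresholding. The coefficients below are placeholders chosen by hand for illustration; in that paper they would be fitted to satisfy the chosen group-fairness criterion.

import numpy as np

def fair_score(base_score, group_probs, lam):
    # Linear "fairness cost": adjusted = score - sum_g lam[g] * P(group g | x).
    return base_score - group_probs @ lam

scores = np.array([0.72, 0.55, 0.61])                     # base model outputs
probs = np.array([[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]])    # predicted group memberships
lam = np.array([0.05, -0.05])                             # placeholder multipliers, not fitted values
print((fair_score(scores, probs, lam) >= 0.5).astype(int))  # thresholded decisions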