Addressing Strategic Manipulation Disparities in Fair Classification
- URL: http://arxiv.org/abs/2205.10842v2
- Date: Fri, 15 Sep 2023 19:12:53 GMT
- Title: Addressing Strategic Manipulation Disparities in Fair Classification
- Authors: Vijay Keswani and L. Elisa Celis
- Abstract summary: We show that individuals from minority groups often pay a higher cost to update their features.
We propose a constrained optimization framework that constructs classifiers that lower the strategic manipulation cost for minority groups.
Empirically, we show the efficacy of this approach over multiple real-world datasets.
- Score: 15.032416453073086
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In real-world classification settings, such as loan application evaluation or
content moderation on online platforms, individuals respond to classifier
predictions by strategically updating their features to increase their
likelihood of receiving a particular (positive) decision (at a certain cost).
Yet, when different demographic groups have different feature distributions or
pay different update costs, prior work has shown that individuals from minority
groups often pay a higher cost to update their features. Fair classification
aims to address such classifier performance disparities by constraining the
classifiers to satisfy statistical fairness properties. However, we show that
standard fairness constraints do not guarantee that the constrained classifier
reduces the disparity in strategic manipulation cost. To address such biases in
strategic settings and provide equal opportunities for strategic manipulation,
we propose a constrained optimization framework that constructs classifiers
that lower the strategic manipulation cost for minority groups. We develop our
framework by studying theoretical connections between group-specific strategic
cost disparity and standard selection rate fairness metrics (e.g., statistical
rate and true positive rate). Empirically, we show the efficacy of this
approach over multiple real-world datasets.
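The disparity the abstract describes can be illustrated with a minimal sketch (not the paper's method): for a linear classifier, the cheapest feature update that flips a negative decision moves the point to the decision boundary, so with a quadratic cost the manipulation cost is the group's unit cost times the squared distance to the boundary. The classifier weights, feature distributions, and unit costs below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear classifier: predict positive iff w @ x + b >= 0.
w = np.array([1.0, 2.0])
b = -1.0

# Two groups with different feature distributions; group 1 (the
# "minority" group here) sits further from the decision boundary.
X0 = rng.normal(loc=[0.5, 0.5], size=(500, 2))
X1 = rng.normal(loc=[-0.5, 0.0], size=(500, 2))

def manipulation_cost(X, w, b, unit_cost):
    """Minimal quadratic cost for each negatively classified individual
    to reach the decision boundary: unit_cost * distance**2."""
    margin = X @ w + b
    dist = np.maximum(0.0, -margin) / np.linalg.norm(w)
    return unit_cost * dist[margin < 0] ** 2

# Group-specific unit costs: the minority group pays more per unit change.
cost0 = manipulation_cost(X0, w, b, unit_cost=1.0)
cost1 = manipulation_cost(X1, w, b, unit_cost=2.0)

print(f"mean manipulation cost, group 0: {cost0.mean():.3f}")
print(f"mean manipulation cost, group 1: {cost1.mean():.3f}")
```

Under these assumptions group 1's average manipulation cost is higher both because its members start further from the boundary and because its per-unit cost is larger; the paper's framework constrains the classifier so this gap shrinks.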
Related papers
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z) - OptiGrad: A Fair and more Efficient Price Elasticity Optimization via a Gradient Based Learning [7.145413681946911]
This paper presents a novel approach to optimizing profit margins in non-life insurance markets through a gradient descent-based method.
It targets three key objectives: 1) maximizing profit margins, 2) ensuring conversion rates, and 3) enforcing fairness criteria such as demographic parity (DP).
arXiv Detail & Related papers (2024-04-16T04:21:59Z) - Off-Policy Evaluation for Large Action Spaces via Policy Convolution [60.6953713877886]
The Policy Convolution (PC) family of estimators uses latent structure within actions to strategically convolve the logging and target policies.
Experiments on synthetic and benchmark datasets demonstrate remarkable mean squared error (MSE) improvements when using PC.
arXiv Detail & Related papers (2023-10-24T01:00:01Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Generalized Strategic Classification and the Case of Aligned Incentives [16.607142366834015]
We argue for a broader perspective on what accounts for strategic user behavior.
Our model subsumes most current models and also captures novel settings.
We show how our results and approach can extend to the most general case.
arXiv Detail & Related papers (2022-02-09T09:36:09Z) - Selecting the suitable resampling strategy for imbalanced data classification regarding dataset properties [62.997667081978825]
In many application domains such as medicine, information retrieval, cybersecurity, social media, etc., datasets used for inducing classification models often have an unequal distribution of the instances of each class.
This situation, known as imbalanced data classification, causes low predictive performance for the minority class examples.
Oversampling and undersampling techniques are well-known strategies to deal with this problem by balancing the number of examples of each class.
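The oversampling idea mentioned above can be sketched in a few lines: duplicate minority-class examples, sampled with replacement, until every class matches the majority count. The dataset below is synthetic and the helper is illustrative, not from the cited paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative imbalanced dataset: 90 majority (class 0), 10 minority (class 1).
X = rng.normal(size=(100, 3))
y = np.array([0] * 90 + [1] * 10)

def random_oversample(X, y, rng):
    """Duplicate minority-class examples (sampled with replacement)
    until every class has as many instances as the largest class."""
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    X_parts, y_parts = [], []
    for cls, n in zip(classes, counts):
        idx = np.flatnonzero(y == cls)
        extra = rng.choice(idx, size=n_max - n, replace=True)
        keep = np.concatenate([idx, extra])
        X_parts.append(X[keep])
        y_parts.append(y[keep])
    return np.concatenate(X_parts), np.concatenate(y_parts)

X_bal, y_bal = random_oversample(X, y, rng)
print(np.bincount(y_bal))  # both classes now have 90 examples
```

Undersampling works symmetrically, discarding majority-class examples down to the minority count; which strategy is preferable depends on dataset properties, which is the question the paper studies.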
arXiv Detail & Related papers (2021-12-15T18:56:39Z) - Fair Tree Learning [0.15229257192293202]
Various optimisation criteria combine classification performance with a fairness metric.
Current fair decision tree methods only optimise for a fixed threshold on both the classification task as well as the fairness metric.
We propose a threshold-independent fairness metric termed uniform demographic parity, and a derived splitting criterion entitled SCAFF -- Splitting Criterion AUC for Fairness.
arXiv Detail & Related papers (2021-10-18T13:40:25Z) - Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
arXiv Detail & Related papers (2021-09-22T10:47:51Z) - Learning Strategies in Decentralized Matching Markets under Uncertain Preferences [91.3755431537592]
We study the problem of decision-making in the setting of a scarcity of shared resources when the preferences of agents are unknown a priori.
Our approach is based on the representation of preferences in a reproducing kernel Hilbert space.
We derive optimal strategies that maximize agents' expected payoffs.
arXiv Detail & Related papers (2020-10-29T03:08:22Z) - The foundations of cost-sensitive causal classification [3.7493611543472953]
This study integrates cost-sensitive and causal classification by elaborating a unifying evaluation framework.
We prove that conventional classification is a specific case of causal classification in terms of a range of performance measures.
The proposed framework paves the way toward the development of cost-sensitive causal learning methods.
arXiv Detail & Related papers (2020-07-24T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.