Fair Performance Metric Elicitation
- URL: http://arxiv.org/abs/2006.12732v3
- Date: Tue, 3 Nov 2020 06:14:31 GMT
- Title: Fair Performance Metric Elicitation
- Authors: Gaurush Hiranandani, Harikrishna Narasimhan, Oluwasanmi Koyejo
- Abstract summary: We consider the choice of fairness metrics through the lens of metric elicitation.
We propose a novel strategy to elicit group-fair performance metrics for multiclass classification problems.
- Score: 29.785862520452955
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: What is a fair performance metric? We consider the choice of fairness metrics
through the lens of metric elicitation -- a principled framework for selecting
performance metrics that best reflect implicit preferences. The use of metric
elicitation enables a practitioner to tune the performance and fairness metrics
to the task, context, and population at hand. Specifically, we propose a novel
strategy to elicit group-fair performance metrics for multiclass classification
problems with multiple sensitive groups; the strategy also includes selecting the
trade-off between predictive performance and fairness violation. The proposed
elicitation strategy requires only relative preference feedback and is robust
to both finite sample and feedback noise.
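As a rough illustration of how relative preference feedback alone can pin down such a trade-off, the sketch below binary-searches a single trade-off weight by posing pairwise comparisons to a simulated oracle. It is a minimal sketch under assumed names (`oracle_prefers`, `TRUE_LAM`) and a simple linear scalarization, not the paper's actual elicitation procedure.

```python
# Minimal, illustrative sketch: recover a hidden performance/fairness trade-off
# weight using only pairwise preference feedback (not the paper's algorithm).

TRUE_LAM = 0.62  # hidden trade-off weight the simulated practitioner uses


def utility(acc, viol, lam):
    """Scalarized metric: predictive performance minus weighted fairness violation."""
    return acc - lam * viol


def oracle_prefers(a, b):
    """Relative preference feedback: True if classifier `a` is preferred over `b`."""
    return utility(*a, TRUE_LAM) > utility(*b, TRUE_LAM)


def elicit_lambda(num_queries=25):
    """Binary-search the trade-off weight from pairwise comparisons alone."""
    lo, hi = 0.0, 1.0
    for _ in range(num_queries):
        mid = 0.5 * (lo + hi)
        # Two hypothetical classifiers whose indifference point sits at `mid`:
        # `b` trades mid * 0.1 accuracy for a 0.1 reduction in violation.
        a = (0.90, 0.20)
        b = (0.90 - 0.1 * mid, 0.10)
        if oracle_prefers(a, b):
            hi = mid  # the less fair model is preferred only when lam < mid
        else:
            lo = mid
    return 0.5 * (lo + hi)


print(f"elicited trade-off weight: {elicit_lambda():.3f}")  # close to TRUE_LAM
```

Each answer halves the search interval, so the weight is recovered to high precision after a handful of queries, mirroring the query-efficiency argument behind metric elicitation.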
Related papers
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
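As a generic illustration of the kind of group disparity such an audit reports (not the paper's taxonomy or protocol), the snippet below compares per-group accuracy of precomputed zero-shot predictions; all arrays are hypothetical placeholders.

```python
# Illustrative only: compare zero-shot classification accuracy across
# sensitive groups, given precomputed predictions (placeholder data).

import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # ground-truth labels
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])                  # zero-shot predictions
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])   # sensitive group

per_group_acc = {
    g: float((y_pred[group == g] == y_true[group == g]).mean())
    for g in np.unique(group)
}
print("per-group accuracy:", per_group_acc)
print("max accuracy gap:", max(per_group_acc.values()) - min(per_group_acc.values()))
```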
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Properties of Group Fairness Metrics for Rankings [4.479834103607384]
We perform a comparative analysis of existing group fairness metrics developed in the context of fair ranking.
We take an axiomatic approach whereby we design a set of thirteen properties for group fairness metrics.
We demonstrate that most of these metrics only satisfy a small subset of the proposed properties.
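For concreteness, the sketch below computes one common exposure-style group fairness measure for a single ranking; it is an illustrative example only and is not one of the metrics or thirteen properties analysed in the cited paper.

```python
# Sketch of an exposure-based group fairness measure for one ranking
# (illustrative; position discount is 1 / log2(rank + 1)).

import math

ranking_groups = ["a", "b", "a", "a", "b", "b"]  # group of the item at each rank, top first


def group_exposure(groups):
    """Average position-discounted exposure received by each group."""
    exposure, counts = {}, {}
    for rank, g in enumerate(groups, start=1):
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(rank + 1)
        counts[g] = counts.get(g, 0) + 1
    return {g: exposure[g] / counts[g] for g in exposure}


exp = group_exposure(ranking_groups)
print("per-group exposure:", exp)
print("exposure disparity:", max(exp.values()) - min(exp.values()))
```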
arXiv Detail & Related papers (2022-12-29T15:50:18Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- Classification Performance Metric Elicitation and its Applications [5.5637552942511155]
Despite its practical interest, there is limited formal guidance on how to select metrics for machine learning applications.
This thesis outlines metric elicitation as a principled framework for selecting the performance metric that best reflects implicit user preferences.
arXiv Detail & Related papers (2022-08-19T03:57:17Z)
- Experiments on Generalizability of User-Oriented Fairness in Recommender Systems [2.0932879442844476]
A fairness-aware recommender system aims to treat different user groups similarly.
We propose a user-centered fairness re-ranking framework applied on top of a base ranking model.
We evaluate the final recommendations provided by the re-ranking framework from both user- (e.g., NDCG) and item-side (e.g., novelty, item-fairness) metrics.
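A minimal sketch of the user-side NDCG@k computation used in such an evaluation, applied to a hypothetical re-ranked list (the relevance values are made up):

```python
# NDCG@k for a re-ranked recommendation list (illustrative relevance values).

import math


def dcg(relevances):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))


def ndcg_at_k(ranked_relevances, k):
    top_k = ranked_relevances[:k]
    ideal = sorted(ranked_relevances, reverse=True)[:k]
    ideal_dcg = dcg(ideal)
    return dcg(top_k) / ideal_dcg if ideal_dcg > 0 else 0.0


reranked = [1, 0, 1, 1, 0, 0, 1, 0]  # relevance in the re-ranked order
print(f"NDCG@5 = {ndcg_at_k(reranked, 5):.3f}")
```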
arXiv Detail & Related papers (2022-05-17T12:36:30Z)
- On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations [74.70957445600936]
Multiple metrics have been introduced to measure fairness in various natural language processing tasks.
These metrics can be roughly divided into two categories: 1) extrinsic metrics, which evaluate fairness in downstream applications, and 2) intrinsic metrics, which estimate fairness in upstream language representation models.
arXiv Detail & Related papers (2022-03-25T22:17:43Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
Accumulated prediction sensitivity measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
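The sketch below gives one hedged reading of the perturbation idea: average how much a toy model's prediction moves under small per-feature perturbations, weighted by hypothetical proxy weights for the protected attribute. It is not the paper's exact definition of accumulated prediction sensitivity.

```python
# Hedged sketch: finite-difference prediction sensitivity of a toy linear
# classifier, accumulated with hypothetical protected-attribute proxy weights.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                 # toy inputs
w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])      # toy model weights


def predict_proba(X):
    return 1.0 / (1.0 + np.exp(-X @ w))


def accumulated_sensitivity(X, feature_weights, eps=1e-3):
    """Weighted average of per-feature finite-difference sensitivities."""
    base = predict_proba(X)
    sens = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        X_pert = X.copy()
        X_pert[:, j] += eps
        sens[j] = np.mean(np.abs(predict_proba(X_pert) - base)) / eps
    return float(sens @ feature_weights)


proxy_weights = np.array([0.0, 0.1, 0.0, 0.0, 0.9])  # hypothetical proxy strengths
print("accumulated prediction sensitivity:", accumulated_sensitivity(X, proxy_weights))
```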
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Fair Tree Learning [0.15229257192293202]
Various optimisation criteria combine classification performance with a fairness metric.
Current fair decision tree methods only optimise for a fixed threshold on both the classification task and the fairness metric.
We propose a threshold-independent fairness metric termed uniform demographic parity, and a derived splitting criterion entitled SCAFF -- Splitting Criterion AUC for Fairness.
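One way to make "threshold-independent" concrete is to measure how well the model's score separates the sensitive groups with an AUC, which equals 0.5 exactly when the score distributions coincide (demographic parity at every threshold). The sketch below illustrates that reading with synthetic scores; it is not the paper's exact definition of uniform demographic parity or SCAFF.

```python
# Threshold-independent fairness sketch: AUC with which the model's score
# separates two sensitive groups (0.5 = identical score distributions).

import numpy as np


def auc(scores_pos, scores_neg):
    """Probability that a random score from the first set exceeds one from the second."""
    s_pos = np.asarray(scores_pos)[:, None]
    s_neg = np.asarray(scores_neg)[None, :]
    return float((s_pos > s_neg).mean() + 0.5 * (s_pos == s_neg).mean())


rng = np.random.default_rng(1)
scores_group_a = rng.normal(loc=0.2, size=500)  # hypothetical model scores
scores_group_b = rng.normal(loc=0.0, size=500)

sep = auc(scores_group_a, scores_group_b)
print(f"group-separation AUC: {sep:.3f}  (fairness gap ~ {abs(sep - 0.5):.3f})")
```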
arXiv Detail & Related papers (2021-10-18T13:40:25Z)
- ReMP: Rectified Metric Propagation for Few-Shot Learning [67.96021109377809]
A rectified metric space is learned to maintain the metric consistency from training to testing.
Numerous analyses indicate that a simple modification of the objective can yield substantial performance gains.
The proposed ReMP is effective and efficient, and outperforms the state of the art on various standard few-shot learning datasets.
arXiv Detail & Related papers (2020-12-02T00:07:53Z)
- Quadratic Metric Elicitation for Fairness and Beyond [28.1407078984806]
This paper develops a strategy for eliciting more flexible multiclass metrics defined by quadratic functions of rates.
We show its application in eliciting quadratic violation-based group-fair metrics.
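As an illustration of the family of metrics involved (not the paper's exact parameterization), the sketch below scores hypothetical per-group rates with a linear performance term minus a quadratic, pairwise-squared-difference fairness violation.

```python
# Illustrative quadratic, violation-based group-fair metric over per-group rates.

import itertools
import numpy as np

# Hypothetical per-group rates: (accuracy, true-positive rate) for each group.
group_rates = {"a": (0.91, 0.83), "b": (0.88, 0.71), "c": (0.90, 0.79)}
lam = 0.5  # trade-off weight between performance and fairness violation

overall_acc = np.mean([acc for acc, _ in group_rates.values()])
quadratic_violation = sum(
    (group_rates[g][1] - group_rates[h][1]) ** 2
    for g, h in itertools.combinations(group_rates, 2)
)
metric = overall_acc - lam * quadratic_violation
print(f"quadratic group-fair metric: {metric:.4f}")
```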
arXiv Detail & Related papers (2020-11-03T07:22:15Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
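A toy sketch in the spirit of post-hoc score adjustment (not the paper's framework): shift one group's scores by a hypothetical constant and compare per-group ranking quality and its gap before and after.

```python
# Toy post-processing sketch for bipartite ranking fairness: per-group AUC of
# positives vs. all negatives, before and after a constant per-group score shift.

import numpy as np

rng = np.random.default_rng(2)
n = 400
group = rng.choice(["a", "b"], size=n)
label = rng.integers(0, 2, size=n)
# Synthetic scores that systematically under-rank group "b" positives.
score = label * 1.0 + rng.normal(scale=1.0, size=n) - 0.7 * ((group == "b") & (label == 1))


def per_group_auc(score, label, group):
    neg = score[label == 0]
    return {
        g: float((score[(label == 1) & (group == g)][:, None] > neg[None, :]).mean())
        for g in ["a", "b"]
    }


before = per_group_auc(score, label, group)
adjusted = score + 0.35 * (group == "b")  # constant per-group offset (hypothetical value)
after = per_group_auc(adjusted, label, group)
print("before:", before, " gap:", round(abs(before["a"] - before["b"]), 3))
print("after: ", after, " gap:", round(abs(after["a"] - after["b"]), 3))
```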
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.