Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness
- URL: http://arxiv.org/abs/2306.11181v1
- Date: Mon, 19 Jun 2023 22:10:24 GMT
- Title: Insufficiently Justified Disparate Impact: A New Criterion for Subgroup Fairness
- Authors: Neil Menghani, Edward McFowland III, Daniel B. Neill
- Abstract summary: We develop a new criterion, "insufficiently justified disparate impact" (IJDI).
Our novel, utility-based IJDI criterion evaluates false positive and false negative error rate imbalances.
We describe a novel IJDI-Scan approach which can efficiently identify the intersectional subpopulations.
- Score: 1.9346186297861747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we develop a new criterion, "insufficiently justified
disparate impact" (IJDI), for assessing whether recommendations (binarized
predictions) made by an algorithmic decision support tool are fair. Our novel,
utility-based IJDI criterion evaluates false positive and false negative error
rate imbalances, identifying statistically significant disparities between
groups which are present even when adjusting for group-level differences in
base rates. We describe a novel IJDI-Scan approach which can efficiently
identify the intersectional subpopulations, defined across multiple observed
attributes of the data, with the most significant IJDI. To evaluate IJDI-Scan's
performance, we conduct experiments on both simulated and real-world data,
including recidivism risk assessment and credit scoring. Further, we implement
and evaluate approaches to mitigating IJDI for the detected subpopulations in
these domains.
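
As a rough illustration of the setting the abstract describes (and not of the paper's actual method), the sketch below brute-forces the simplest version of the problem: it enumerates intersectional subgroups defined by one or two observed attributes and flags those whose false positive or false negative rate significantly exceeds that of the rest of the population, using a plain two-proportion z-test. The authors' utility-based IJDI criterion additionally adjusts for group-level base-rate differences, and IJDI-Scan replaces this exhaustive enumeration with an efficient scan; neither is reproduced here. The record field names, the attribute list, and the significance threshold are illustrative assumptions.

```python
import math
from itertools import combinations, product


def error_rates(rows):
    """Return (FPR, FNR, #negatives, #positives) for a list of row dicts."""
    fp = sum(1 for r in rows if r["pred"] == 1 and r["label"] == 0)
    fn = sum(1 for r in rows if r["pred"] == 0 and r["label"] == 1)
    neg = sum(1 for r in rows if r["label"] == 0)
    pos = sum(1 for r in rows if r["label"] == 1)
    fpr = fp / neg if neg else 0.0
    fnr = fn / pos if pos else 0.0
    return fpr, fnr, neg, pos


def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z statistic; returns 0.0 when it is undefined."""
    if n1 == 0 or n2 == 0:
        return 0.0
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se if se > 0 else 0.0


def scan_subgroups(records, attrs, z_threshold=2.58):
    """Brute-force scan over subgroups defined by 1- and 2-attribute value
    combinations; flags subgroups whose FPR or FNR significantly exceeds that
    of the rest of the population. NOTE: no base-rate or utility adjustment,
    unlike the IJDI criterion itself."""
    flagged = []
    for k in (1, 2):
        for attr_set in combinations(attrs, k):
            value_sets = [sorted({r[a] for r in records}) for a in attr_set]
            for combo in product(*value_sets):
                in_g, out_g = [], []
                for r in records:
                    match = all(r[a] == v for a, v in zip(attr_set, combo))
                    (in_g if match else out_g).append(r)
                fpr_i, fnr_i, neg_i, pos_i = error_rates(in_g)
                fpr_o, fnr_o, neg_o, pos_o = error_rates(out_g)
                z_fpr = two_prop_z(fpr_i, neg_i, fpr_o, neg_o)
                z_fnr = two_prop_z(fnr_i, pos_i, fnr_o, pos_o)
                if max(z_fpr, z_fnr) > z_threshold:
                    flagged.append((dict(zip(attr_set, combo)), z_fpr, z_fnr))
    return flagged


# Toy usage: "race" and "age_band" are hypothetical attribute names; each
# record carries a binarized prediction ("pred") and a ground-truth "label".
toy = [
    {"race": "A", "age_band": "young", "pred": 1, "label": 0},
    {"race": "B", "age_band": "old", "pred": 0, "label": 1},
    # ...many more rows per subgroup would be needed in practice
]
print(scan_subgroups(toy, ["race", "age_band"]))
```

The exhaustive loop grows combinatorially with the number of attributes considered jointly, which is the scalability gap that an efficient scan approach such as IJDI-Scan is meant to close.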
Related papers
- Practical Improvements of A/B Testing with Off-Policy Estimation [51.25970890274447]
We introduce a family of unbiased off-policy estimators that achieves lower variance than the standard approach.
Our theoretical analysis and experimental results validate the effectiveness and practicality of the proposed method.
arXiv Detail & Related papers (2025-06-12T13:11:01Z)
- Addressing Key Challenges of Adversarial Attacks and Defenses in the Tabular Domain: A Methodological Framework for Coherence and Consistency [26.645723217188323]
In this paper, we propose new evaluation criteria tailored for adversarial attacks in the tabular domain.
We also introduce a novel technique for perturbing dependent features while maintaining coherence and feature consistency within the sample.
The findings provide valuable insights on the strengths, limitations, and trade-offs of various adversarial attacks in the tabular domain.
arXiv Detail & Related papers (2024-12-10T09:17:09Z)
- From Variability to Stability: Advancing RecSys Benchmarking Practices [3.3331198926331784]
This paper introduces a novel benchmarking methodology to facilitate a fair and robust comparison of RecSys algorithms.
By utilizing a diverse set of 30 open datasets, including two introduced in this work, we critically examine the influence of dataset characteristics on algorithm performance.
arXiv Detail & Related papers (2024-02-15T07:35:52Z)
- A structured regression approach for evaluating model performance across intersectional subgroups [53.91682617836498]
Disaggregated evaluation is a central task in AI fairness assessment, where the goal is to measure an AI system's performance across different subgroups.
We introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups.
arXiv Detail & Related papers (2024-01-26T14:21:45Z)
- GroupMixNorm Layer for Learning Fair Models [4.324785083027206]
This research proposes a novel in-processing based GroupMixNorm layer for mitigating bias from deep learning models.
The proposed method improves upon several fairness metrics with minimal impact on overall accuracy.
arXiv Detail & Related papers (2023-12-19T09:04:26Z)
- Detecting Concept Drift for the reliability prediction of Software Defects using Instance Interpretation [4.039245878626346]
Concept drift (CD) can occur due to changes in the software development process, the complexity of the software, or changes in user behavior.
We aim to develop a reliable JIT-SDP model by detecting CD points directly, identifying changes in the interpretation of unlabeled, simplified, and resampled data.
arXiv Detail & Related papers (2023-05-06T07:50:12Z)
- Improved Policy Evaluation for Randomized Trials of Algorithmic Resource Allocation [54.72195809248172]
We present a new estimator leveraging a novel concept: retrospective reshuffling of participants across experimental arms at the end of an RCT.
We prove theoretically that such an estimator is more accurate than common estimators based on sample means.
arXiv Detail & Related papers (2023-02-06T05:17:22Z)
- Systematic Evaluation of Predictive Fairness [60.0947291284978]
Mitigating bias in training on biased datasets is an important open problem.
We examine the performance of various debiasing methods across multiple tasks.
We find that data conditions have a strong influence on relative model performance.
arXiv Detail & Related papers (2022-10-17T05:40:13Z)
- Performance Evaluation of Adversarial Attacks: Discrepancies and Solutions [51.8695223602729]
Adversarial attack methods have been developed to challenge the robustness of machine learning models.
We propose a Piece-wise Sampling Curving (PSC) toolkit to effectively address the discrepancy.
PSC toolkit offers options for balancing the computational cost and evaluation effectiveness.
arXiv Detail & Related papers (2021-04-22T14:36:51Z)
- Multi-class Classification Based Anomaly Detection of Insider Activities [18.739091829480234]
We propose an approach that combines generative model with supervised learning to perform multi-class classification using deep learning.
The generative adversarial network (GAN) based insider detection model introduces a Conditional Generative Adversarial Network (CGAN) to enrich minority-class samples.
Comprehensive experiments on the benchmark dataset demonstrate the effectiveness of introducing GAN-derived synthetic data.
arXiv Detail & Related papers (2021-02-15T00:08:39Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Uncertainty-aware Score Distribution Learning for Action Quality Assessment [91.05846506274881]
We propose an uncertainty-aware score distribution learning (USDL) approach for action quality assessment (AQA).
Specifically, we regard an action as an instance associated with a score distribution, which describes the probability of different evaluated scores.
Under the circumstance where fine-grained score labels are available, we devise a multi-path uncertainty-aware score distributions learning (MUSDL) method to explore the disentangled components of a score.
arXiv Detail & Related papers (2020-06-13T15:41:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.