Interpretable Assessment of Fairness During Model Evaluation
- URL: http://arxiv.org/abs/2010.13782v1
- Date: Mon, 26 Oct 2020 02:31:17 GMT
- Title: Interpretable Assessment of Fairness During Model Evaluation
- Authors: Amir Sepehri and Cyrus DiCiccio
- Abstract summary: We introduce a novel hierarchical clustering algorithm to detect heterogeneity among users in given sets of sub-populations.
We demonstrate the performance of the algorithm on real data from LinkedIn.
- Score: 1.2183405753834562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For companies developing products or algorithms, it is important to
understand the potential effects not only globally, but also on sub-populations
of users. In particular, it is important to detect whether there are certain groups
of users that are impacted differently from others with regard to business
metrics, or that a model treats unequally with respect to fairness concerns.
In this paper, we introduce a novel hierarchical clustering algorithm to detect
heterogeneity among users in given sets of sub-populations with respect to any
specified notion of group similarity. We prove statistical guarantees about the
output and provide interpretable results. We demonstrate the performance of the
algorithm on real data from LinkedIn.
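For intuition, here is a minimal sketch of the kind of analysis described above: agglomerative (hierarchical) clustering of sub-populations by a group-level statistic, flagging groups that land in different clusters as candidates for heterogeneity. This is an illustration under assumed data (`group_outcomes` is hypothetical), not the paper's algorithm, and it carries none of its statistical guarantees.

```python
# Illustrative sketch only (not the paper's algorithm): hierarchically cluster
# sub-populations by a summary statistic and flag groups that separate out.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)

# Hypothetical per-group outcomes, e.g. a business or fairness metric per user.
group_outcomes = {
    "group_a": rng.normal(0.50, 0.10, size=200),
    "group_b": rng.normal(0.52, 0.10, size=180),
    "group_c": rng.normal(0.70, 0.10, size=150),  # deviates from the others
}

names = list(group_outcomes)
# One summary statistic per group; any notion of group similarity could be used.
stats = np.array([[v.mean()] for v in group_outcomes.values()])

# Agglomerative clustering on pairwise distances between group statistics.
Z = linkage(pdist(stats), method="average")

# Cut the dendrogram; groups falling into different clusters are candidates for
# heterogeneity with respect to the chosen statistic.
labels = fcluster(Z, t=0.10, criterion="distance")
for name, label in zip(names, labels):
    print(f"{name} -> cluster {label}")
```

In practice the fixed threshold cut would be replaced by a principled test; the paper's contribution is precisely to make such conclusions statistically rigorous and interpretable.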
Related papers
- A structured regression approach for evaluating model performance across intersectional subgroups [53.91682617836498]
Disaggregated evaluation is a central task in AI fairness assessment, where the goal is to measure an AI system's performance across different subgroups.
We introduce a structured regression approach to disaggregated evaluation that, as we demonstrate, can yield reliable system performance estimates even for very small subgroups (a generic per-subgroup evaluation sketch appears after this list).
arXiv Detail & Related papers (2024-01-26T14:21:45Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Estimating Structural Disparities for Face Models [54.062512989859265]
In machine learning, disparity metrics are often defined by measuring the difference in the performance or outcome of a model, across different sub-populations.
We explore performing such analysis on computer vision models trained on human faces, and on tasks such as face attribute prediction and affect estimation.
arXiv Detail & Related papers (2022-04-13T05:30:53Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Deep Clustering based Fair Outlier Detection [19.601280507914325]
We propose an instance-level weighted representation learning strategy to enhance the joint deep clustering and outlier detection.
Our DCFOD method consistently achieves superior performance on both outlier detection validity and two types of fairness notions in outlier detection.
arXiv Detail & Related papers (2021-06-09T15:12:26Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Protecting Individual Interests across Clusters: Spectral Clustering with Guarantees [20.350342151402963]
We propose an individual fairness criterion for clustering a graph $\mathcal{G}$ that requires each cluster to contain an adequate number of members connected to the individual.
We devise a spectral clustering algorithm to find fair clusters under a given representation graph.
arXiv Detail & Related papers (2021-05-08T15:03:25Z)
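As a point of reference for the structured regression entry above, here is a minimal, generic sketch of disaggregated evaluation: per-subgroup accuracy with simple shrinkage toward the pooled estimate, so that very small subgroups get more stable numbers. It is not the paper's structured regression model, and all data and names (`y_true`, `y_pred`, `subgroup`, `prior_strength`) are hypothetical.

```python
# Illustrative sketch only: disaggregated (per-subgroup) accuracy with simple
# shrinkage toward the pooled accuracy, so tiny subgroups get stabler estimates.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
subgroup = rng.choice(["a", "b", "c", "d"], size=n, p=[0.60, 0.30, 0.08, 0.02])
y_true = rng.integers(0, 2, size=n)
y_pred = np.where(rng.random(n) < 0.8, y_true, 1 - y_true)  # ~80% accurate

correct = (y_true == y_pred).astype(float)
pooled = correct.mean()

prior_strength = 20.0  # pseudo-count controlling how hard small groups shrink
for g in np.unique(subgroup):
    mask = subgroup == g
    n_g = int(mask.sum())
    raw = correct[mask].mean()
    # Shrinkage estimate: weighted average of the subgroup and pooled accuracy.
    shrunk = (n_g * raw + prior_strength * pooled) / (n_g + prior_strength)
    print(f"{g}: n={n_g:4d}  raw={raw:.3f}  shrunk={shrunk:.3f}")
```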
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality or accuracy of this information and is not responsible for any consequences of its use.