Towards Auditing Unsupervised Learning Algorithms and Human Processes For Fairness
- URL: http://arxiv.org/abs/2209.11762v1
- Date: Tue, 20 Sep 2022 12:30:05 GMT
- Title: Towards Auditing Unsupervised Learning Algorithms and Human Processes For Fairness
- Authors: Ian Davidson and S. S. Ravi
- Abstract summary: Existing work has explored the two-group classification problem for binary protected status variables using standard definitions of statistical parity.
Here we build upon the area of auditing by exploring the multi-group setting under more complex definitions of fairness.
- Score: 39.77120515767369
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing work on fairness typically focuses on making known machine learning algorithms fairer. Fair variants of classification, clustering, outlier detection and other styles of algorithms exist. However, auditing an algorithm's output to determine whether it is fair remains understudied. Existing work has explored the two-group classification problem for binary protected status variables using standard definitions of statistical parity. Here we build upon the area of auditing by exploring the multi-group setting under more complex definitions of fairness.
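To make the auditing task concrete, below is a minimal sketch of a statistical parity audit in Python, extended naively from the two-group binary case to multiple groups. This is not the authors' algorithm: the function names and the max-minus-min aggregation over per-group rates are illustrative assumptions, and the paper's more complex fairness definitions are not captured here.

```python
from collections import defaultdict

def positive_rates(outcomes, groups):
    """Per-group rate of positive outcomes, P(Y_hat = 1 | A = g)."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for y, g in zip(outcomes, groups):
        counts[g][0] += int(y == 1)
        counts[g][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def statistical_parity_gap(outcomes, groups):
    """Largest gap in positive rates across groups.

    For two groups this is the usual statistical parity difference
    |P(Y_hat = 1 | A = a) - P(Y_hat = 1 | A = b)|; for the
    multi-group setting it reduces to max rate minus min rate.
    """
    rates = positive_rates(outcomes, groups)
    return max(rates.values()) - min(rates.values())

# Toy audit of outcomes from some algorithm or human process:
# groups "a" and "c" are accepted at rate 0.75, group "b" at 0.25.
outcomes = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b", "c", "c", "c", "c"]
print(positive_rates(outcomes, groups))
print(statistical_parity_gap(outcomes, groups))  # 0.5; 0.0 would be exact parity
```

In practice an auditor would compare the gap against some tolerance before flagging the process; the well-known 80% rule, for instance, uses a ratio of rates rather than a difference.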
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z)
- Differentially Private Fair Binary Classifications [1.8087157239832476]
We first propose an algorithm for learning a classifier with only a fairness guarantee.
We then refine this algorithm to incorporate differential privacy.
Empirical evaluations conducted on the Adult and Credit Card datasets illustrate that our algorithm outperforms the state-of-the-art in terms of fairness guarantees.
arXiv Detail & Related papers (2024-02-23T20:52:59Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose xOrder, a model-agnostic post-processing framework for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, both supervised and unsupervised.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods for enforcing different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Detection of Groups with Biased Representation in Ranking [28.095668425175564]
We study the problem of detecting groups with biased representation in the top-$k$ ranked items (a naive version of this check is sketched after the list below).
We propose efficient search algorithms for two different fairness measures.
arXiv Detail & Related papers (2022-12-30T10:50:02Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Learning to Generate Fair Clusters from Demonstrations [27.423983748614198]
We show how to identify the intended fairness constraint for a problem based on limited demonstrations from an expert.
We present an algorithm to identify the fairness metric from demonstrations and generate clusters using existing off-the-shelf clustering techniques.
We investigate how to generate interpretable solutions using our approach.
arXiv Detail & Related papers (2021-02-08T03:09:33Z)
- Fair Hierarchical Clustering [92.03780518164108]
We define a notion of fairness that mitigates over-representation in traditional clustering.
We show that our algorithms can find a fair hierarchical clustering, with only a negligible loss in the objective.
arXiv Detail & Related papers (2020-06-18T01:05:11Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Fair Algorithms for Hierarchical Agglomerative Clustering [17.66340013352806]
Hierarchical Agglomerative Clustering (HAC) algorithms are extensively utilized in modern data science.
It is imperative to ensure that these algorithms are fair, even if the dataset contains biases against certain protected groups.
We propose fair algorithms for performing HAC that enforce fairness constraints.
arXiv Detail & Related papers (2020-05-07T01:41:56Z)
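The representation check referenced in the "Detection of Groups with Biased Representation in Ranking" entry above can be sketched as follows. This is a hypothetical, naive version for illustration only: the proportion-gap measure and the function name are assumptions, not the paper's actual fairness measures or its efficient search algorithms.

```python
def topk_representation_gaps(ranked_groups, k):
    """Compare each group's share of the top-k with its overall share.

    ranked_groups: group label of each item, best-ranked first.
    Returns {group: top_k_share - overall_share}; a large negative
    value flags under-representation at the top of the ranking.
    """
    n = len(ranked_groups)
    top = ranked_groups[:k]
    return {
        g: top.count(g) / k - ranked_groups.count(g) / n
        for g in set(ranked_groups)
    }

# Toy ranking: group "b" holds half the items but only a quarter of the top 4.
ranking = ["a", "a", "a", "b", "b", "b", "b", "a"]
print(topk_representation_gaps(ranking, k=4))  # gap for "b" is -0.25, for "a" +0.25
```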