Fair Outlier Detection
- URL: http://arxiv.org/abs/2005.09900v2
- Date: Tue, 4 Aug 2020 20:18:41 GMT
- Title: Fair Outlier Detection
- Authors: Deepak P and Savitha Sam Abraham
- Abstract summary: We consider the task of fair outlier detection over multiple multi-valued sensitive attributes.
We propose a fair outlier detection method, FairLOF, that is inspired by the popular LOF formulation for neighborhood-based outlier detection.
- Score: 5.320087179174425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An outlier detection method may be considered fair over specified sensitive
attributes if its results are not skewed towards particular groups defined on
those attributes. In this work, we consider, for the first time to the best of
our knowledge, the task of fair outlier detection, addressing it over multiple
multi-valued sensitive attributes (e.g., gender, race, religion, nationality,
marital status). We propose a fair outlier detection method, FairLOF, inspired
by the popular LOF formulation for neighborhood-based outlier detection. We
outline ways in which unfairness can be induced within LOF and develop three
heuristic principles to enhance fairness, which form the basis of the FairLOF
method. As fair outlier detection is a novel task, we develop an evaluation
framework for it and use that framework to benchmark FairLOF on both quality
and fairness of results. Through an extensive empirical evaluation over
real-world datasets, we illustrate that FairLOF achieves significant
improvements in fairness, often at only marginal degradations in result
quality, relative to the fairness-agnostic LOF method.
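The fairness criterion above can be made concrete with a small sketch. The code below is a minimal illustration, not the authors' FairLOF: it uses synthetic data and scikit-learn's LocalOutlierFactor as the fairness-agnostic LOF baseline, then compares per-group outlier-flag rates over a hypothetical sensitive attribute.

```python
# Minimal sketch (assumed setup, not FairLOF): run the fairness-agnostic LOF
# baseline and check whether flagged outliers are skewed towards one group.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))               # toy feature matrix
group = rng.integers(0, 2, size=500)        # toy binary sensitive attribute

lof = LocalOutlierFactor(n_neighbors=20, contamination=0.05)
flags = lof.fit_predict(X) == -1            # True where LOF flags an outlier

# A detector that is fair w.r.t. `group` should keep these rates close.
for g in np.unique(group):
    print(f"group {g}: outlier flag rate = {flags[group == g].mean():.3f}")
```

A large gap between the printed rates is the kind of skew that a fair outlier detection evaluation framework is designed to quantify.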
Related papers
- Targeted Learning for Data Fairness [52.59573714151884]
We expand fairness inference by evaluating fairness in the data generating process itself.
We derive estimators for demographic parity, equal opportunity, and conditional mutual information.
To validate our approach, we perform several simulations and apply our estimators to real data.
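For reference, the plain plug-in versions of two of the quantities named above can be computed directly from samples; the sketch below assumes binary predictions and a binary sensitive attribute and is not the paper's targeted-learning estimator.

```python
# Naive empirical estimators of the standard fairness definitions.
import numpy as np

def demographic_parity_gap(y_pred, a):
    # |P(Yhat=1 | A=1) - P(Yhat=1 | A=0)| for binary arrays y_pred and a
    return abs(y_pred[a == 1].mean() - y_pred[a == 0].mean())

def equal_opportunity_gap(y_pred, y_true, a):
    # gap in true-positive rates between the two groups
    tpr = lambda g: y_pred[(a == g) & (y_true == 1)].mean()
    return abs(tpr(1) - tpr(0))
```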
arXiv Detail & Related papers (2025-02-06T18:51:28Z)
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset, the Fair Forgery Detection (FairFD) dataset, on which we demonstrate the racial bias of public state-of-the-art (SOTA) methods.
We design novel metrics including Approach Averaged Metric and Utility Regularized Metric, which can avoid deceptive results.
We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
arXiv Detail & Related papers (2024-07-19T14:53:18Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Preserving Fairness Generalization in Deepfake Detection [14.485069525871504]
Deepfake detection models can result in unfair performance disparities among demographic groups, such as race and gender.
We propose the first method to address the fairness generalization problem in deepfake detection by simultaneously considering features, loss, and optimization aspects.
Our method employs disentanglement learning to extract demographic and domain-agnostic features, fusing them to encourage fair learning across a flattened loss landscape.
arXiv Detail & Related papers (2024-02-27T05:47:33Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
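As a rough illustration of the regularization idea, the sketch below adds a KL penalty (one member of the f-divergence family) between per-group positive-prediction rates to a standard classification loss; it assumes PyTorch, float 0/1 labels, and a binary sensitive attribute, and does not reproduce the paper's min-max framework.

```python
# Sketch: f-divergence (here KL) fairness penalty added to a BCE loss.
import torch

def kl_bernoulli(p, q, eps=1e-6):
    # KL divergence between Bernoulli(p) and Bernoulli(q)
    p, q = p.clamp(eps, 1 - eps), q.clamp(eps, 1 - eps)
    return p * torch.log(p / q) + (1 - p) * torch.log((1 - p) / (1 - q))

def fair_loss(logits, labels, a, lam=1.0):
    bce = torch.nn.functional.binary_cross_entropy_with_logits(logits, labels)
    probs = torch.sigmoid(logits)
    p0, p1 = probs[a == 0].mean(), probs[a == 1].mean()  # per-group rates
    return bce + lam * kl_bernoulli(p0, p1)   # accuracy term + fairness term
```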
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
- FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods for ensuring different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z)
- Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- FAL-CUR: Fair Active Learning using Uncertainty and Representativeness on Fair Clustering [16.808400593594435]
We propose a novel strategy, named Fair Active Learning using fair Clustering, Uncertainty, and Representativeness (FAL-CUR).
FAL-CUR achieves a 15-20% improvement in fairness, measured by equalized odds, over the best state-of-the-art method.
An ablation study highlights the crucial roles of fair clustering in preserving fairness and the acquisition function in stabilizing the accuracy performance.
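The equalized-odds figure reported above can be estimated empirically; the sketch below is a plain plug-in computation assuming binary predictions and labels, unrelated to the FAL-CUR implementation.

```python
# Empirical equalized-odds gap: largest disagreement in positive-prediction
# rate across groups, conditioned on the true label.
import numpy as np

def equalized_odds_gap(y_pred, y_true, a):
    gaps = []
    for y in (0, 1):                                  # condition on each label
        rates = [y_pred[(a == g) & (y_true == y)].mean() for g in np.unique(a)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)
```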
arXiv Detail & Related papers (2022-09-21T08:28:43Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Deep Clustering based Fair Outlier Detection [19.601280507914325]
We propose an instance-level weighted representation learning strategy to enhance joint deep clustering and outlier detection.
Our DCFOD method consistently achieves superior performance on both outlier detection validity and two types of fairness notions in outlier detection.
arXiv Detail & Related papers (2021-06-09T15:12:26Z)
- Fairness-aware Outlier Ensemble [30.0516419408149]
Outlier ensemble methods have shown outstanding performance on the discovery of instances that are significantly different from the majority of the data.
Without awareness of fairness, their applicability in ethically sensitive scenarios, such as fraud detection and judicial decision systems, could be degraded.
We propose to reduce the bias of the outlier ensemble results through a fairness-aware ensemble framework.
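For context, a basic fairness-agnostic outlier ensemble can be sketched as below, rank-averaging the scores of two detectors from scikit-learn; the paper's fairness-aware reweighting is not reproduced here, and the detector choices are assumptions of this example.

```python
# Sketch of a plain (fairness-agnostic) outlier ensemble via rank averaging.
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neighbors import LocalOutlierFactor

def ensemble_outlier_ranks(X):
    lof = LocalOutlierFactor(n_neighbors=20).fit(X)
    iso = IsolationForest(random_state=0).fit(X)
    s1 = -lof.negative_outlier_factor_      # larger = more outlying
    s2 = -iso.score_samples(X)              # larger = more outlying
    rank = lambda s: np.argsort(np.argsort(s))
    return (rank(s1) + rank(s2)) / 2.0      # higher rank = more outlying
```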
arXiv Detail & Related papers (2021-03-17T03:21:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.