Multiple Attribute Fairness: Application to Fraud Detection
- URL: http://arxiv.org/abs/2207.14355v1
- Date: Thu, 28 Jul 2022 19:19:45 GMT
- Title: Multiple Attribute Fairness: Application to Fraud Detection
- Authors: Meghanath Macha Y, Sriram Ravindran, Deepak Pai, Anish Narang, Vijay
Srivastava
- Abstract summary: We propose a fairness measure relaxing the equality conditions in the popular equal odds fairness regime for classification.
We design an iterative, model-agnostic, grid-based heuristic that calibrates the outcomes per sensitive attribute value to conform to the measure.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a fairness measure relaxing the equality conditions in the popular
equal odds fairness regime for classification. We design an iterative,
model-agnostic, grid-based heuristic that calibrates the outcomes per sensitive
attribute value to conform to the measure. The heuristic is designed to handle
high-arity attribute values and performs a per-attribute sanitization of
outcomes across different protected attribute values. We also extend our
heuristic to multiple attributes. Highlighting our motivating application,
fraud detection, we show that the proposed heuristic achieves fairness both
across multiple values of a single protected attribute and across multiple
protected attributes. Compared to current fairness techniques that focus on
two groups, we achieve comparable performance on several public data sets.
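In concrete terms, equal odds asks that true-positive and false-positive rates match exactly across groups, while the relaxed measure only asks that each group's rates stay within a tolerance of a reference. As a rough illustration of the grid-based, model-agnostic calibration described above, here is a minimal Python sketch; the paper does not include code, so the reference-group choice, the threshold grid, and the tolerance `eps` are assumptions made for this example.

```python
import numpy as np

def group_rates(y_true, y_score, threshold):
    """True-positive and false-positive rate of thresholded scores."""
    y_pred = y_score >= threshold
    pos, neg = y_true == 1, y_true == 0
    tpr = y_pred[pos].mean() if pos.any() else 0.0
    fpr = y_pred[neg].mean() if neg.any() else 0.0
    return tpr, fpr

def calibrate_thresholds(y_true, y_score, attr, eps=0.05,
                         grid=np.linspace(0.0, 1.0, 101)):
    """Pick a per-attribute-value threshold whose (TPR, FPR) lies as close
    as possible to a reference group's rates; the outcome conforms to the
    relaxed measure when the gap is at most eps.

    Choosing the largest group as the reference is an assumption of this
    sketch, not something the paper prescribes."""
    values, counts = np.unique(attr, return_counts=True)
    ref_mask = attr == values[counts.argmax()]
    ref_tpr, ref_fpr = group_rates(y_true[ref_mask], y_score[ref_mask], 0.5)
    thresholds, conforms = {}, {}
    for v in values:
        m = attr == v
        gaps = []
        for t in grid:
            tpr, fpr = group_rates(y_true[m], y_score[m], t)
            gaps.append(max(abs(tpr - ref_tpr), abs(fpr - ref_fpr)))
        best = int(np.argmin(gaps))
        thresholds[v] = float(grid[best])
        conforms[v] = gaps[best] <= eps  # within the relaxed equal-odds tolerance?
    return thresholds, conforms
```

Extending this to multiple attributes could, for instance, repeat the per-attribute sweep iteratively until every attribute conforms, which matches the iterative flavour of the heuristic but is again only a guess at the details.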
Related papers
- Attribute-Aware Deep Hashing with Self-Consistency for Large-Scale
Fine-Grained Image Retrieval [65.43522019468976]
We propose attribute-aware hashing networks with self-consistency for generating attribute-aware hash codes.
We develop an encoder-decoder reconstruction network that distills high-level attribute-specific vectors in an unsupervised manner.
Our models are equipped with a feature decorrelation constraint upon these attribute vectors to strengthen their representative abilities.
arXiv Detail & Related papers (2023-11-21T08:20:38Z) - Fairness Improvement with Multiple Protected Attributes: How Far Are We? [27.86281559193479]
This paper conducts an extensive study of fairness improvement regarding multiple protected attributes.
We analyze the effectiveness of 11 state-of-the-art fairness improvement methods.
We find little difference in accuracy loss when considering single and multiple protected attributes.
arXiv Detail & Related papers (2023-07-25T14:01:23Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Practical Approaches for Fair Learning with Multitype and Multivariate
Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
Accumulated Prediction Sensitivity measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Dikaios: Privacy Auditing of Algorithmic Fairness via Attribute
Inference Attacks [0.5801044612920815]
We propose Dikaios, a privacy auditing tool for fairness algorithms aimed at model builders.
We show that our attribute inference attacks with adaptive prediction threshold significantly outperform prior attacks.
arXiv Detail & Related papers (2022-02-04T17:19:59Z) - Fair Tree Learning [0.15229257192293202]
Various optimisation criteria combine classification performance with a fairness metric.
Current fair decision tree methods only optimise for a fixed threshold on both the classification task and the fairness metric.
We propose a threshold-independent fairness metric termed uniform demographic parity, and a derived splitting criterion, SCAFF (Splitting Criterion AUC for Fairness).
arXiv Detail & Related papers (2021-10-18T13:40:25Z) - Addressing Fairness in Classification with a Model-Agnostic
Multi-Objective Algorithm [33.145522561104464]
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender.
One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms.
We leverage this property to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations.
arXiv Detail & Related papers (2020-09-09T17:40:24Z) - Attribute Mix: Semantic Data Augmentation for Fine Grained Recognition [102.45926816660665]
We propose Attribute Mix, a data augmentation strategy at attribute level to expand the fine-grained samples.
The principle lies in that attribute features are shared among fine-grained sub-categories, and can be seamlessly transferred among images.
arXiv Detail & Related papers (2020-04-06T14:06:47Z)