Approximating Discrimination Within Models When Faced With Several Non-Binary Sensitive Attributes
- URL: http://arxiv.org/abs/2408.06099v1
- Date: Mon, 12 Aug 2024 12:30:48 GMT
- Title: Approximating Discrimination Within Models When Faced With Several Non-Binary Sensitive Attributes
- Authors: Yijun Bian, Yujie Luo, Ping Xu
- Abstract summary: We propose a fairness measure based on distances between sets from a manifold perspective.
It can handle fine-grained discrimination evaluation for several sensitive attributes with multiple values.
We also propose two approximation algorithms to accelerate the computation of distances between sets.
- Score: 4.731404257629232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Discrimination mitigation with machine learning (ML) models can be complicated because multiple factors may interweave with each other, including hierarchically and historically. Yet few existing fairness measures are able to capture the discrimination level within ML models in the face of multiple sensitive attributes. To bridge this gap, we propose a fairness measure based on distances between sets from a manifold perspective, named 'harmonic fairness measure via manifolds (HFM)', with two optional versions, which can handle fine-grained discrimination evaluation for several sensitive attributes with multiple values. To accelerate the computation of distances between sets, we further propose two approximation algorithms, named 'Approximation of distance between sets for one sensitive attribute with multiple values (ApproxDist)' and 'Approximation of extended distance between sets for several sensitive attributes with multiple values (ExtendDist)', to respectively resolve bias evaluation of a single sensitive attribute with multiple values and that of several sensitive attributes with multiple values. Moreover, we provide an algorithmic effectiveness analysis for ApproxDist under certain assumptions to explain how well it can work. The empirical results demonstrate that the proposed fairness measure HFM is valid and that the approximation algorithms (i.e., ApproxDist and ExtendDist) are effective and efficient.
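To make the idea concrete, the following is a minimal, illustrative sketch of a distance-between-sets fairness score for one non-binary sensitive attribute, with an optional subsampling shortcut to cheapen the quadratic distance computation. The symmetrised nearest-neighbour set distance, the subsampling scheme, and all function names are assumptions made for illustration; they are not the paper's exact HFM definition or its ApproxDist/ExtendDist algorithms.

```python
import numpy as np
from itertools import combinations

def set_distance(A, B):
    """Symmetrised average nearest-neighbour distance between two point sets
    (one simple stand-in for a distance between sets; not the paper's choice)."""
    d_ab = np.mean([np.min(np.linalg.norm(B - a, axis=1)) for a in A])
    d_ba = np.mean([np.min(np.linalg.norm(A - b, axis=1)) for b in B])
    return 0.5 * (d_ab + d_ba)

def group_disparity(scores, sensitive, n_sample=None, rng=None):
    """Largest pairwise set distance between the score sets of the groups
    induced by one (possibly non-binary) sensitive attribute.  `n_sample`
    optionally subsamples each group to speed up the quadratic distance
    computation (a crude analogue of sampling-based acceleration, not
    the paper's ApproxDist)."""
    rng = np.random.default_rng(rng)
    scores = np.asarray(scores, dtype=float).reshape(len(scores), -1)
    sensitive = np.asarray(sensitive)
    groups = {}
    for value in np.unique(sensitive):
        pts = scores[sensitive == value]
        if n_sample is not None and len(pts) > n_sample:
            pts = pts[rng.choice(len(pts), n_sample, replace=False)]
        groups[value] = pts
    return max(set_distance(groups[u], groups[v])
               for u, v in combinations(groups, 2))

# Example: a three-valued sensitive attribute and scalar model scores.
scores = np.array([0.9, 0.8, 0.4, 0.5, 0.3, 0.7])
sensitive = np.array(["a", "a", "b", "b", "c", "c"])
print(group_disparity(scores, sensitive))                      # exact
print(group_disparity(scores, sensitive, n_sample=2, rng=0))   # subsampled
```

A larger disparity value indicates that the model's score distributions differ more across groups; the subsampled variant trades a small amount of accuracy for speed, mirroring in spirit the paper's motivation for approximation algorithms.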
Related papers
- Does Machine Bring in Extra Bias in Learning? Approximating Fairness in Models Promptly [2.002741592555996]
Existing techniques for assessing the discrimination level of machine learning models include commonly used group and individual fairness measures.
We propose a "harmonic fairness measure via manifold (HFM)" based on distances between sets.
Empirical results indicate that the proposed fairness measure HFM is valid and that the proposed ApproxDist is effective and efficient.
arXiv Detail & Related papers (2024-05-15T11:07:40Z) - Learning Feature Inversion for Multi-class Anomaly Detection under General-purpose COCO-AD Benchmark [101.23684938489413]
Anomaly detection (AD) is often focused on detecting anomalies for industrial quality inspection and medical lesion examination.
This work first constructs a large-scale and general-purpose COCO-AD dataset by extending COCO to the AD field.
Inspired by the metrics in the segmentation field, we propose several more practical threshold-dependent AD-specific metrics.
arXiv Detail & Related papers (2024-04-16T17:38:26Z) - Multi-Class Anomaly Detection based on Regularized Discriminative Coupled hypersphere-based Feature Adaptation [85.15324009378344]
This paper introduces a new model by including class discriminative properties obtained by a modified Regularized Discriminative Variational Auto-Encoder (RD-VAE) in the feature extraction process.
The proposed Regularized Discriminative Coupled-hypersphere-based Feature Adaptation (RD-CFA) forms a solution for multi-class anomaly detection.
arXiv Detail & Related papers (2023-11-24T14:26:07Z) - A Sequentially Fair Mechanism for Multiple Sensitive Attributes [0.46040036610482665]
In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score.
We propose a sequential framework that allows fairness to be achieved progressively across a set of sensitive features.
Our approach seamlessly extends to approximate fairness, providing a framework that accommodates the trade-off between risk and unfairness.
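As a rough illustration of the sequential idea only, the toy sketch below recenters a score attribute by attribute so that group averages are equalised one sensitive feature at a time. The function name and the mean-recentring rule are assumptions for illustration; they are not the mechanism proposed in that paper, which is more sophisticated and does not reduce to mean matching.

```python
import numpy as np

def sequentially_recenter(scores, sensitive_matrix):
    """Toy sequential adjustment: for each sensitive attribute (one column of
    `sensitive_matrix`), shift every group so its mean score matches the overall
    mean.  Later attributes may partially disturb earlier ones; this naivety is
    accepted here for the sake of a short illustration."""
    adjusted = np.asarray(scores, dtype=float).copy()
    for col in np.asarray(sensitive_matrix).T:
        overall = adjusted.mean()
        for value in np.unique(col):
            mask = col == value
            adjusted[mask] += overall - adjusted[mask].mean()
    return adjusted

# Example: scores with two sensitive attributes (3 values and 2 values).
rng = np.random.default_rng(0)
scores = rng.normal(size=12)
attrs = np.column_stack([rng.integers(0, 3, 12), rng.integers(0, 2, 12)])
print(sequentially_recenter(scores, attrs))
```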
arXiv Detail & Related papers (2023-09-12T22:31:57Z) - Robust Domain Adaptive Object Detection with Unified Multi-Granularity Alignment [59.831917206058435]
Domain adaptive detection aims to improve the generalization of detectors on the target domain.
Recent approaches achieve domain adaptation through feature alignment at different granularities via adversarial learning.
We introduce a unified multi-granularity alignment (MGA)-based detection framework for domain-invariant feature learning.
arXiv Detail & Related papers (2023-01-01T08:38:07Z) - Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z) - Few-Shot Fine-Grained Action Recognition via Bidirectional Attention and Contrastive Meta-Learning [51.03781020616402]
Fine-grained action recognition is attracting increasing attention due to the emerging demand for specific action understanding in real-world applications.
We propose a few-shot fine-grained action recognition problem, aiming to recognize novel fine-grained actions with only a few samples given for each class.
Although progress has been made on coarse-grained actions, existing few-shot recognition methods encounter two issues when handling fine-grained actions.
arXiv Detail & Related papers (2021-08-15T02:21:01Z) - Addressing Fairness in Classification with a Model-Agnostic Multi-Objective Algorithm [33.145522561104464]
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender.
One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms.
We leverage this property to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations.
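For intuition, a generic fairness-regularized objective can be written as a task loss plus a differentiable surrogate of a fairness notion. The sketch below uses the squared gap between group mean scores as the surrogate; this is a common relaxation of demographic parity, not the provably tighter relaxation proposed in that paper, and the names and parameters are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fair_regularized_loss(w, X, y, s, lam=1.0):
    """Logistic loss plus a differentiable fairness surrogate: the squared
    difference between mean predicted scores of the two groups defined by a
    binary sensitive attribute `s` (a generic demographic-parity relaxation)."""
    p = sigmoid(X @ w)
    task = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    gap = p[s == 1].mean() - p[s == 0].mean()
    return task + lam * gap ** 2

# Toy check: random data and weights; `lam` trades accuracy against fairness.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (rng.random(200) < 0.5).astype(float)
s = (rng.random(200) < 0.5).astype(int)
w = rng.normal(size=3)
print(fair_regularized_loss(w, X, y, s, lam=2.0))
```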
arXiv Detail & Related papers (2020-09-09T17:40:24Z) - Rethink Maximum Mean Discrepancy for Domain Adaptation [77.2560592127872]
This paper theoretically proves two essential facts: 1) minimizing the Maximum Mean Discrepancy is equivalent to maximizing the source and target intra-class distances respectively while jointly minimizing their variance with some implicit weights, so that feature discriminability degrades.
Experiments on several benchmark datasets not only confirm the validity of the theoretical results but also demonstrate that the proposed approach can substantially outperform comparable state-of-the-art methods.
arXiv Detail & Related papers (2020-07-01T18:25:10Z)
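For reference, the Maximum Mean Discrepancy discussed in the entry above has a standard empirical estimate; below is a minimal NumPy sketch of the biased squared-MMD estimator with an RBF kernel. The median-heuristic bandwidth is a common default and an assumption here, not a detail taken from that paper.

```python
import numpy as np

def rbf_mmd2(X, Y, bandwidth=None):
    """Biased empirical estimate of squared MMD between samples X and Y using
    an RBF kernel k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))."""
    X, Y = np.atleast_2d(X), np.atleast_2d(Y)
    Z = np.vstack([X, Y])
    sq = np.sum((Z[:, None, :] - Z[None, :, :]) ** 2, axis=-1)
    if bandwidth is None:
        # Median heuristic for the kernel bandwidth (a common default choice).
        bandwidth = np.sqrt(0.5 * np.median(sq[sq > 0]))
    K = np.exp(-sq / (2 * bandwidth ** 2))
    n = len(X)
    kxx, kyy, kxy = K[:n, :n], K[n:, n:], K[:n, n:]
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

# Example: source and target features drawn from slightly shifted Gaussians.
rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(100, 5))
tgt = rng.normal(0.5, 1.0, size=(100, 5))
print(rbf_mmd2(src, tgt))
```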
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.