Generalizing Fairness: Discovery and Mitigation of Unknown Sensitive
Attributes
- URL: http://arxiv.org/abs/2107.13625v1
- Date: Wed, 28 Jul 2021 20:18:08 GMT
- Title: Generalizing Fairness: Discovery and Mitigation of Unknown Sensitive
Attributes
- Authors: William Paul, Philippe Burlina
- Abstract summary: This paper investigates methods that discover and separate out individual semantic sensitive factors from a given dataset in order to characterize how an AI system behaves.
We also broaden fairness remediation, which normally addresses only socially relevant factors, to desensitize the AI with respect to all aspects of variation in the domain.
In experiments using the road sign (GTSRB) and facial imagery (CelebA) datasets, we show the promise of using this scheme.
- Score: 5.665283675533071
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When deploying artificial intelligence (AI) in the real world, being able to
trust the operation of the AI by characterizing how it performs is an
ever-present and important topic. An important and still largely unexplored
task in this characterization is determining major factors within the real
world that affect the AI's behavior, such as weather conditions or lighting,
and either (a) being able to justify why it may have failed or (b) eliminating
the factor's influence. Determining these sensitive factors
heavily relies on collected data that is diverse enough to cover numerous
combinations of these factors, which becomes more onerous when having many
potential sensitive factors or operating in complex environments. This paper
investigates methods that discover and separate out individual semantic
sensitive factors from a given dataset to conduct this characterization, as
well as mitigating the model's sensitivity to these factors. We also broaden
fairness remediation, which normally addresses only socially relevant factors,
to desensitize the AI with regard to all possible aspects of variation in the
domain. The proposed methods, which discover these major factors, reduce the
potentially onerous demands of
collecting a sufficiently diverse dataset. In experiments using the road sign
(GTSRB) and facial imagery (CelebA) datasets, we show the promise of using this
scheme to perform this characterization and remediation, and demonstrate that
our approach outperforms state-of-the-art approaches.
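The abstract describes a two-step recipe (discover a semantic sensitive factor, then desensitize the model to it) without spelling out the algorithm. The sketch below is a minimal, hypothetical illustration of the second step only, not the authors' method: it assumes a sensitive factor has already been discovered and pseudo-labeled (e.g., a lighting condition), and removes information about it from the learned features via adversarial training with gradient reversal, a standard desensitization technique. All names and dimensions are illustrative.

```python
# Minimal sketch of adversarial desensitization (NOT the paper's algorithm):
# a gradient-reversal layer trains the feature extractor so that an auxiliary
# head cannot predict the discovered sensitive factor from the features.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        # Pass the gradient back reversed and scaled: the feature extractor
        # is pushed to make the factor head fail.
        return -ctx.lam * grad_out, None

class DesensitizedClassifier(nn.Module):
    def __init__(self, in_dim, n_classes, n_factor_levels, lam=1.0):
        super().__init__()
        self.lam = lam
        self.features = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU())
        self.task_head = nn.Linear(64, n_classes)          # main prediction
        self.factor_head = nn.Linear(64, n_factor_levels)  # adversary head

    def forward(self, x):
        h = self.features(x)
        y_logits = self.task_head(h)
        f_logits = self.factor_head(GradReverse.apply(h, self.lam))
        return y_logits, f_logits

# Hypothetical usage: `factor` holds pseudo-labels for a discovered factor
# (e.g., 3 lighting conditions); both losses are minimized jointly.
model = DesensitizedClassifier(in_dim=32, n_classes=10, n_factor_levels=3)
x = torch.randn(8, 32)
y = torch.randint(0, 10, (8,))
factor = torch.randint(0, 3, (8,))
y_logits, f_logits = model(x)
loss = (nn.functional.cross_entropy(y_logits, y)
        + nn.functional.cross_entropy(f_logits, factor))
loss.backward()
```

Here `lam` trades task accuracy against desensitization; in the paper's setting, the discovery step would supply the factor pseudo-labels that are randomly generated above.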
Related papers
- Towards Understanding Human Emotional Fluctuations with Sparse Check-In Data [2.8623940003518156]
Data sparsity is a key challenge limiting the power of AI tools across various domains.
This paper proposes a novel probabilistic framework that integrates user-centric feedback-based learning.
It achieves 60% accuracy in predicting user states among 64 options.
arXiv Detail & Related papers (2024-09-10T21:00:33Z)
- Decoding Susceptibility: Modeling Misbelief to Misinformation Through a Computational Approach [61.04606493712002]
Susceptibility to misinformation describes the degree of belief in unverifiable claims, which is not directly observable.
Existing susceptibility studies heavily rely on self-reported beliefs.
We propose a computational approach to model users' latent susceptibility levels.
arXiv Detail & Related papers (2023-11-16T07:22:56Z)
- C-Disentanglement: Discovering Causally-Independent Generative Factors under an Inductive Bias of Confounder [35.09708249850816]
We introduce a framework called Confounded-Disentanglement (C-Disentanglement), the first framework that explicitly introduces the inductive bias of a confounder.
We conduct extensive experiments on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-10-26T11:44:42Z)
- Detection and Evaluation of Bias-Inducing Features in Machine Learning [14.045499740240823]
In the context of machine learning (ML), one can use cause-to-effect analysis to understand the reason for the biased behavior of the system.
We propose an approach for systematically identifying all bias-inducing features of a model to help support the decision-making of domain experts.
arXiv Detail & Related papers (2023-10-19T15:01:16Z)
- Understanding Robust Overfitting from the Feature Generalization Perspective [61.770805867606796]
Adversarial training (AT) constructs robust neural networks by incorporating adversarial perturbations into natural data.
It is plagued by the issue of robust overfitting (RO), which severely damages the model's robustness.
In this paper, we investigate RO from a novel feature generalization perspective.
arXiv Detail & Related papers (2023-10-01T07:57:03Z)
- A Sequentially Fair Mechanism for Multiple Sensitive Attributes [0.46040036610482665]
In the standard use case of Algorithmic Fairness, the goal is to eliminate the relationship between a sensitive variable and a corresponding score.
We propose a sequential framework that makes it possible to progressively achieve fairness across a set of sensitive features (a simplified stand-in is sketched after this list).
Our approach extends seamlessly to approximate fairness, providing a framework that accommodates the trade-off between risk and unfairness.
arXiv Detail & Related papers (2023-09-12T22:31:57Z)
- Synthetic-to-Real Domain Adaptation for Action Recognition: A Dataset and Baseline Performances [76.34037366117234]
We introduce a new dataset called Robot Control Gestures (RoCoG-v2).
The dataset is composed of both real and synthetic videos from seven gesture classes.
We present results using state-of-the-art action recognition and domain adaptation algorithms.
arXiv Detail & Related papers (2023-03-17T23:23:55Z)
- Interventional Causal Representation Learning [75.18055152115586]
Causal representation learning seeks to extract high-level latent factors from low-level sensory data.
Can interventional data facilitate causal representation learning?
We show that interventional data often carries geometric signatures of the latent factors' support.
arXiv Detail & Related papers (2022-09-24T04:59:03Z)
- Trying to Outrun Causality with Machine Learning: Limitations of Model Explainability Techniques for Identifying Predictive Variables [7.106986689736828]
We show that machine learning algorithms are not as flexible as they might seem and are instead highly sensitive to the underlying causal structure in the data.
We provide some alternative recommendations for researchers wanting to explore the data for important variables.
arXiv Detail & Related papers (2022-02-20T17:48:54Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- Counterfactual Explanations as Interventions in Latent Space [62.997667081978825]
Counterfactual explanations aim to provide end users with a set of features that need to be changed in order to achieve a desired outcome.
Current approaches rarely take into account the feasibility of actions needed to achieve the proposed explanations.
We present Counterfactual Explanations as Interventions in Latent Space (CEILS), a methodology to generate counterfactual explanations (a generic sketch of the latent-intervention idea follows this list).
arXiv Detail & Related papers (2021-06-14T20:48:48Z)
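As referenced in the sequential-fairness entry above, here is a simplified stand-in for that idea, not the paper's exact mechanism: it enforces demographic parity for one sensitive attribute at a time by quantile-matching each group's scores onto the pooled score distribution. All names and data are illustrative, and later alignment steps may partially perturb earlier ones, which is exactly the subtlety a sequentially fair mechanism has to manage.

```python
# Simplified sketch of sequential fairness (NOT the paper's construction):
# repair a score one sensitive attribute at a time via quantile matching.
import numpy as np

def quantile_align(scores, groups):
    """Map each group's scores onto the pooled distribution's quantiles."""
    fair = np.empty_like(scores, dtype=float)
    pooled = np.sort(scores)
    for g in np.unique(groups):
        mask = groups == g
        # Rank within the group, then read off the matching pooled quantile.
        ranks = scores[mask].argsort().argsort()
        q = (ranks + 0.5) / mask.sum()
        fair[mask] = np.quantile(pooled, q)
    return fair

def sequentially_fair(scores, sensitive_attrs):
    """Apply the repair once per sensitive attribute, in order."""
    for attrs in sensitive_attrs:
        scores = quantile_align(scores, attrs)
    return scores

# Hypothetical usage with two sensitive attributes.
rng = np.random.default_rng(0)
scores = rng.normal(size=1000)
gender = rng.integers(0, 2, size=1000)
age_band = rng.integers(0, 3, size=1000)
fair_scores = sequentially_fair(scores, [gender, age_band])
```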
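And as referenced in the counterfactual-explanations entry, here is a generic sketch of the latent-intervention idea, not the CEILS algorithm itself: optimize a latent code so that the decoded sample reaches a desired class while staying close to the original point. `encoder`, `decoder`, and `classifier` are hypothetical stand-ins for pretrained models.

```python
# Generic latent-space counterfactual search (NOT CEILS itself).
import torch

def latent_counterfactual(x, encoder, decoder, classifier, target_class,
                          steps=100, lr=0.1, dist_weight=1.0):
    """Search latent space for a plausible input classified as target_class."""
    z0 = encoder(x).detach()
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    target = torch.tensor([target_class])
    for _ in range(steps):
        x_cf = decoder(z)
        # Push the decoded sample toward the desired outcome...
        loss = torch.nn.functional.cross_entropy(classifier(x_cf), target)
        # ...while staying close to the original point in latent space.
        loss = loss + dist_weight * (z - z0).pow(2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Decode the final latent code into the counterfactual example.
    return decoder(z).detach()
```

Because the search moves through the generative model's latent space rather than raw feature space, the resulting counterfactual tends to respect the data manifold, which is the feasibility concern the entry raises.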
This list is automatically generated from the titles and abstracts of the papers on this site.