Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
- URL: http://arxiv.org/abs/2209.01215v1
- Date: Fri, 2 Sep 2022 06:15:15 GMT
- Title: Exploiting Fairness to Enhance Sensitive Attributes Reconstruction
- Authors: Julien Ferry (LAAS-ROC), Ulrich Aïvodji (ETS), Sébastien Gambs (UQAM), Marie-José Huguet (LAAS-ROC), Mohamed Siala (LAAS-ROC)
- Abstract summary: In recent years, a growing body of work has emerged on how to learn machine learning models under fairness constraints.
We show that information about this model's fairness can be exploited by the adversary to enhance his reconstruction of the sensitive attributes of the training data.
We propose a generic reconstruction correction method, which takes as input an initial guess and corrects it to comply with some user-defined constraints.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, a growing body of work has emerged on how to learn machine
learning models under fairness constraints, often expressed with respect to
some sensitive attributes. In this work, we consider the setting in which an
adversary has black-box access to a target model and show that information
about this model's fairness can be exploited by the adversary to enhance his
reconstruction of the sensitive attributes of the training data. More
precisely, we propose a generic reconstruction correction method, which takes
as input an initial guess made by the adversary and corrects it to comply with
some user-defined constraints (such as the fairness information) while
minimizing the changes in the adversary's guess. The proposed method is
agnostic to the type of target model, the fairness-aware learning method as
well as the auxiliary knowledge of the adversary. To assess the applicability
of our approach, we have conducted a thorough experimental evaluation on two
state-of-the-art fair learning methods, using four different fairness metrics
with a wide range of tolerances and with three datasets of diverse sizes and
sensitive attributes. The experimental results demonstrate the effectiveness of
the proposed approach to improve the reconstruction of the sensitive attributes
of the training set.
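To make the idea concrete, here is a minimal, hypothetical sketch of such a reconstruction correction for a binary sensitive attribute under a statistical parity constraint. The paper's method is generic and poses the correction as a constrained optimization that minimizes changes to the guess; the greedy loop below, and the names `y_pred`, `s_guess`, `confidence`, and `eps`, are illustrative assumptions only, not the authors' implementation.

```python
import numpy as np

def parity_gap(y_pred, s):
    """Statistical parity gap |P(yhat=1 | s=1) - P(yhat=1 | s=0)| induced
    by a candidate sensitive-attribute vector s on the model's predictions."""
    if s.min() == s.max():                     # one group empty: gap undefined
        return np.inf
    return abs(y_pred[s == 1].mean() - y_pred[s == 0].mean())

def correct_guess(y_pred, s_guess, confidence, eps):
    """Greedily flip the adversary's least-confident guesses until the
    induced parity gap satisfies the tolerance eps that the target model
    is known to meet, changing as few entries as possible."""
    s = s_guess.copy()
    for i in np.argsort(confidence):           # least confident first
        if parity_gap(y_pred, s) <= eps:
            break                              # constraint satisfied
        flipped = s.copy()
        flipped[i] = 1 - flipped[i]
        if parity_gap(y_pred, flipped) < parity_gap(y_pred, s):
            s = flipped                        # keep only flips that help
    return s
```

Here `y_pred` holds the target model's binary predictions on the training points and `eps` the fairness tolerance leaked, for instance, through the model's documentation; where the sketch flips greedily, the paper's correction minimizes the changes exactly.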
Related papers
- Enhancing Fairness and Performance in Machine Learning Models: A Multi-Task Learning Approach with Monte-Carlo Dropout and Pareto Optimality [1.5498930424110338]
This study introduces an approach to mitigate bias in machine learning by leveraging model uncertainty.
Our approach utilizes a multi-task learning (MTL) framework combined with Monte Carlo (MC) Dropout to assess and mitigate uncertainty in predictions related to protected labels.
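As a hedged illustration of the MC Dropout ingredient only (not the paper's multi-task pipeline; the toy architecture below is an assumption), dropout stays active at inference and the spread of repeated stochastic predictions serves as the uncertainty estimate:

```python
import torch
from torch import nn

# Toy classifier with dropout; the architecture is assumed, not the paper's.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Dropout(p=0.5),                 # stays active during MC sampling
    nn.Linear(64, 1), nn.Sigmoid())

def mc_dropout_predict(model, x, n_samples=50):
    """Run repeated stochastic forward passes with dropout enabled; the
    mean is the prediction and the standard deviation its uncertainty."""
    model.train()                      # train mode keeps dropout stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)
```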
arXiv Detail & Related papers (2024-04-12T04:17:50Z)
- Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z)
- Counterfactual Fair Opportunity: Measuring Decision Model Fairness with Counterfactual Reasoning [5.626570248105078]
This work aims to unveil unfair model behaviors using counterfactual reasoning in the fairness under unawareness setting.
A counterfactual version of equal opportunity named counterfactual fair opportunity is defined and two novel metrics that analyze the sensitive information of counterfactual samples are introduced.
arXiv Detail & Related papers (2023-02-16T09:13:53Z)
- FairIF: Boosting Fairness in Deep Learning via Influence Functions with Validation Set Sensitive Attributes [51.02407217197623]
We propose a two-stage training algorithm named FAIRIF.
It minimizes the loss over a reweighted training set, with sample weights computed via influence functions using a small validation set that carries the sensitive attributes.
We show that FAIRIF yields models with better fairness-utility trade-offs against various types of bias.
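The second stage amounts to ordinary training under a per-sample reweighted loss. The sketch below shows only that stage and assumes the weights `w` were already produced in stage one; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def reweighted_loss(model, x, y, w):
    """Cross-entropy in which each sample's contribution is scaled by a
    precomputed weight w[i]; FAIRIF derives such weights with influence
    functions, while here they are simply assumed given."""
    per_sample = F.cross_entropy(model(x), y, reduction="none")
    return (w * per_sample).mean()
```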
arXiv Detail & Related papers (2022-01-15T05:14:48Z)
- Estimating and Improving Fairness with Adversarial Learning [65.99330614802388]
We propose an adversarial multi-task training strategy to simultaneously mitigate and detect bias in the deep learning-based medical image analysis system.
Specifically, we propose to add a discrimination module against bias and a critical module that predicts unfairness within the base classification model.
We evaluate our framework on a large-scale, publicly available skin lesion dataset.
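A generic sketch of this style of adversarial debiasing (not the paper's exact discrimination and critical modules; all layer sizes are assumptions): an adversary tries to predict the sensitive attribute from the shared representation, and a gradient-reversal layer makes the encoder erase that information.

```python
import torch
from torch import nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; negates gradients on the backward
    pass, so the encoder is trained to hurt the sensitive-attribute head."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad

encoder = nn.Sequential(nn.Linear(32, 16), nn.ReLU())
task_head = nn.Linear(16, 2)   # main classification head (e.g., diagnosis)
bias_head = nn.Linear(16, 2)   # adversary predicting the sensitive attribute

def combined_loss(x, y, s):
    z = encoder(x)
    task = F.cross_entropy(task_head(z), y)
    adv = F.cross_entropy(bias_head(GradReverse.apply(z)), s)
    return task + adv          # one backward pass trains all three parts
```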
arXiv Detail & Related papers (2021-03-07T03:10:32Z)
- FAIR: Fair Adversarial Instance Re-weighting [0.7829352305480285]
We propose a Fair Adversarial Instance Re-weighting (FAIR) method, which uses adversarial training to learn an instance weighting function that ensures fair predictions.
To the best of our knowledge, this is the first model that merges reweighting and adversarial approaches by means of a weighting function that can provide interpretable information about the fairness of individual instances.
arXiv Detail & Related papers (2020-11-15T10:48:56Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
- Spectrum-Guided Adversarial Disparity Learning [52.293230153385124]
We propose a novel end-to-end knowledge directed adversarial learning framework.
It portrays the class-conditioned intraclass disparity using two competitive encoding distributions and learns the purified latent codes by denoising learned disparity.
The experiments on four HAR benchmark datasets demonstrate the robustness and generalization of our proposed methods over a set of state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-14T05:46:27Z)
- Learning Diverse Representations for Fast Adaptation to Distribution Shift [78.83747601814669]
We present a method for learning multiple models, incorporating an objective that pressures each to learn a distinct way to solve the task.
We demonstrate our framework's ability to facilitate rapid adaptation to distribution shift.
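One generic way to realize such a pressure toward distinct solutions (a loose sketch, not the paper's objective; the agreement penalty and `beta` are assumptions) is to train an ensemble on the shared task while penalizing agreement between members' predictive distributions:

```python
import torch
import torch.nn.functional as F

def diverse_ensemble_loss(models, x, y, beta=0.1):
    """Sum of per-model task losses plus a penalty on mean pairwise
    agreement of the softmax outputs, pushing the members apart."""
    logits = [m(x) for m in models]
    task = sum(F.cross_entropy(l, y) for l in logits)
    probs = [torch.softmax(l, dim=-1) for l in logits]
    agree = sum((probs[i] * probs[j]).sum(-1).mean()
                for i in range(len(probs))
                for j in range(i + 1, len(probs)))
    return task + beta * agree
```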
arXiv Detail & Related papers (2020-06-12T12:23:50Z)
- FairALM: Augmented Lagrangian Method for Training Fair Models with Little Regret [42.66567001275493]
It is now accepted that, because of biases in the datasets presented to models, fairness-oblivious training leads to unfair models.
Here, we study mechanisms that impose fairness concurrently with training the model.
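The underlying mechanism can be sketched with the textbook augmented Lagrangian loop (FairALM's actual updates and regret analysis are more refined; the parity surrogate and all hyperparameters below are assumptions): the fairness gap enters the loss as a constraint term, and its multiplier is raised by dual ascent.

```python
import torch
import torch.nn.functional as F

def parity_surrogate(probs, s):
    """Differentiable stand-in for the statistical parity gap: absolute
    difference of mean predicted probabilities between the two groups."""
    return (probs[s == 1].mean() - probs[s == 0].mean()).abs()

def train_with_fairness(model, x, y, s, epochs=200, rho=10.0, lr=0.05):
    lam = 0.0                                     # Lagrange multiplier
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        probs = torch.sigmoid(model(x)).squeeze(-1)
        c = parity_surrogate(probs, s)            # constraint value c(theta)
        loss = F.binary_cross_entropy(probs, y.float()) \
               + lam * c + 0.5 * rho * c ** 2     # augmented Lagrangian
        opt.zero_grad(); loss.backward(); opt.step()
        lam += rho * float(c.detach())            # dual ascent step
    return model
```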
arXiv Detail & Related papers (2020-04-03T03:18:53Z)
- Fairness-Aware Learning with Prejudice Free Representations [2.398608007786179]
We propose a novel algorithm that can effectively identify and treat latent discriminating features.
The approach helps collect discrimination-free features that improve model performance.
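A deliberately naive sketch of the idea (the paper targets latent discriminating features; this toy version only removes observed features linearly correlated with the sensitive attribute, and the threshold is an assumption):

```python
import numpy as np

def drop_prejudiced_features(X, s, threshold=0.3):
    """Flag features whose absolute Pearson correlation with the sensitive
    attribute s exceeds the threshold and drop them, keeping a crude
    approximation of discrimination-free features."""
    corr = np.array([abs(np.corrcoef(X[:, j], s)[0, 1])
                     for j in range(X.shape[1])])
    keep = corr < threshold
    return X[:, keep], keep
```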
arXiv Detail & Related papers (2020-02-26T10:06:31Z)