Analysing Fairness of Privacy-Utility Mobility Models
- URL: http://arxiv.org/abs/2304.06469v1
- Date: Mon, 10 Apr 2023 11:09:18 GMT
- Title: Analysing Fairness of Privacy-Utility Mobility Models
- Authors: Yuting Zhan, Hamed Haddadi, Afra Mashhadi
- Abstract summary: This work defines a set of fairness metrics designed explicitly for human mobility.
We examine the fairness of two state-of-the-art privacy-preserving models that rely on GAN and representation learning to reduce the re-identification rate of users for data sharing.
Our results show that while both models guarantee group fairness in terms of demographic parity, they violate individual fairness criteria, indicating that users with highly similar trajectories receive disparate privacy gain.
- Score: 11.387235721659378
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Preserving the individuals' privacy in sharing spatial-temporal datasets is
critical to prevent re-identification attacks based on unique trajectories.
Existing privacy techniques tend to propose ideal privacy-utility tradeoffs, but they
largely ignore the fairness implications of mobility models and whether such
techniques perform equally for different groups of users. How to quantify the
interplay between fairness and privacy-aware models remains unclear, and there
barely exist any defined sets of metrics for measuring fairness in the
spatial-temporal context. In this work, we define a set of fairness metrics
designed explicitly for human mobility, based on structural similarity and
entropy of the trajectories. Under these definitions, we examine the fairness
of two state-of-the-art privacy-preserving models that rely on GAN and
representation learning to reduce the re-identification rate of users for data
sharing. Our results show that while both models guarantee group fairness in
terms of demographic parity, they violate individual fairness criteria,
indicating that users with highly similar trajectories receive disparate
privacy gain. We conclude that the tension between the re-identification task
and individual fairness needs to be considered for future spatial-temporal data
analysis and modelling to achieve a privacy-preserving fairness-aware setting.
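To make the abstract's fairness notions concrete, here is a minimal, hypothetical sketch (not the authors' exact metrics). It assumes each user has a scalar privacy gain (e.g., the drop in re-identification probability after obfuscation), a demographic group label, and a pairwise trajectory-similarity matrix. Demographic parity is approximated by the gap in mean privacy gain across groups, and individual fairness by counting pairs of highly similar users whose gains differ substantially.

```python
import numpy as np

def demographic_parity_gap(privacy_gain, groups):
    """Difference between the largest and smallest mean privacy gain across groups.
    A small gap suggests group fairness in the demographic-parity sense."""
    means = [np.mean([g for g, grp in zip(privacy_gain, groups) if grp == k])
             for k in set(groups)]
    return max(means) - min(means)

def individual_fairness_violations(privacy_gain, similarity, sim_thresh=0.9, gain_tol=0.1):
    """Count pairs of users whose trajectories are highly similar (similarity >= sim_thresh)
    but whose privacy gains differ by more than gain_tol."""
    n = len(privacy_gain)
    violations = 0
    for i in range(n):
        for j in range(i + 1, n):
            if similarity[i][j] >= sim_thresh and abs(privacy_gain[i] - privacy_gain[j]) > gain_tol:
                violations += 1
    return violations

# Toy usage with made-up numbers: per-user privacy gains, a binary group label,
# and a pairwise structural-similarity matrix of their trajectories.
gain  = [0.30, 0.10, 0.10, 0.30]
group = ["A", "B", "A", "B"]
sim   = [[1.00, 0.95, 0.30, 0.20],
         [0.95, 1.00, 0.25, 0.15],
         [0.30, 0.25, 1.00, 0.40],
         [0.20, 0.15, 0.40, 1.00]]

print(demographic_parity_gap(gain, group))         # 0.0: group means coincide
print(individual_fairness_violations(gain, sim))   # 1: users 0 and 1 are similar but gain differently
```

In this toy example the group-level gap is zero while two users with highly similar trajectories receive very different gains, which is exactly the pattern the paper reports: demographic parity can hold while individual fairness is violated.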
Related papers
- Conditional Density Estimations from Privacy-Protected Data [0.0]
We propose simulation-based inference methods from privacy-protected datasets.
We illustrate our methods on discrete time-series data under an infectious disease model and with ordinary linear regression models.
arXiv Detail & Related papers (2023-10-19T14:34:17Z)
- Advancing Personalized Federated Learning: Group Privacy, Fairness, and Beyond [6.731000738818571]
Federated learning (FL) is a framework for training machine learning models in a distributed and collaborative manner.
In this paper, we address the triadic interaction among personalization, privacy guarantees, and fairness attained by models trained within the FL framework.
A method is put forth that introduces group privacy assurances through the utilization of $d$-privacy.
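As a rough illustration of what a $d$-privacy mechanism looks like in practice, the sketch below uses the standard planar Laplace mechanism from geo-indistinguishability (an instance of $d$-privacy under Euclidean distance); it is an example only, not the mechanism used in the paper.

```python
import numpy as np

def planar_laplace(point, eps, rng=None):
    """Perturb a 2-D point with planar Laplace noise, the canonical geo-indistinguishability
    mechanism. eps is the privacy parameter per unit of distance. Coordinates are treated as
    planar, so a real deployment would first project latitude/longitude to metres."""
    rng = rng or np.random.default_rng()
    theta = rng.uniform(0.0, 2.0 * np.pi)        # uniform direction
    r = rng.gamma(shape=2.0, scale=1.0 / eps)    # radius with density eps^2 * r * exp(-eps * r)
    return point[0] + r * np.cos(theta), point[1] + r * np.sin(theta)

noisy_location = planar_laplace((51.5074, -0.1278), eps=0.05)
```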
arXiv Detail & Related papers (2023-09-01T12:20:19Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Improved Generalization Guarantees in Restricted Data Models [16.193776814471768]
Differential privacy is known to protect against threats to validity incurred due to adaptive, or exploratory, data analysis.
We show that, under this assumption, it is possible to "re-use" privacy budget on different portions of the data, significantly improving accuracy without increasing the risk of overfitting.
arXiv Detail & Related papers (2022-07-20T16:04:12Z)
- "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning [78.70083858195906]
We propose differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership.
Our results show that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying performance disparities of federated models.
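A minimal sketch of the general idea, under simplifying assumptions and not the paper's actual mechanism: release per-group correctness counts with Laplace noise, then compare the resulting noisy accuracies.

```python
import numpy as np

def noisy_group_accuracy_gap(correct, total, eps, rng=None):
    """Estimate the accuracy gap across groups under differential privacy.
    `correct` and `total` map group -> counts; each count is released with Laplace
    noise of scale 1/eps (sensitivity 1; budget composition across counts is ignored here)."""
    rng = rng or np.random.default_rng()
    acc = {}
    for g in correct:
        c = correct[g] + rng.laplace(scale=1.0 / eps)
        t = max(total[g] + rng.laplace(scale=1.0 / eps), 1.0)   # guard against division by zero
        acc[g] = c / t
    return max(acc.values()) - min(acc.values())

gap = noisy_group_accuracy_gap({"women": 870, "men": 910}, {"women": 1000, "men": 1000}, eps=0.5)
```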
arXiv Detail & Related papers (2022-06-24T09:46:43Z)
- Group privacy for personalized federated learning [4.30484058393522]
Federated learning is a type of collaborative machine learning, where participating clients process their data locally, sharing only updates to the collaborative model.
We propose a method to provide group privacy guarantees exploiting some key properties of $d$-privacy.
arXiv Detail & Related papers (2022-06-07T15:43:45Z)
- SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
A key characteristic of the proposed model is that it enables off-the-shelf, non-private fair models to be adopted to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
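The paper's accumulated prediction sensitivity metric is defined more formally, but a simple finite-difference version of the underlying idea, assuming a generic score function over input features, could look like this:

```python
import numpy as np

def prediction_sensitivity(score_fn, X, feature_idx, delta=1e-3):
    """Average finite-difference sensitivity of a classifier's score to a small perturbation
    of one input feature. `score_fn` maps an (n, d) array to n scores. Large values for a
    protected attribute can signal a fairness problem."""
    X_pert = X.copy()
    X_pert[:, feature_idx] += delta
    return float(np.mean(np.abs(score_fn(X_pert) - score_fn(X))) / delta)
```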
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
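A toy stand-in for the Lagrangian-dual idea (logistic regression instead of a neural network, demographic parity as the fairness constraint; this is a hypothetical sketch, not the paper's implementation):

```python
import numpy as np

def train_fair_logreg(X, y, group, lr=0.1, dual_lr=0.05, tol=0.01, epochs=200):
    """Logistic regression trained with a Lagrangian-style fairness constraint:
    the primal step minimises log-loss + lam * |demographic-parity gap|, and lam is
    updated by dual ascent whenever the gap exceeds the tolerance."""
    w, lam = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))                     # predicted positive probabilities
        gap = p[group == 0].mean() - p[group == 1].mean()    # demographic-parity gap
        grad_task = X.T @ (p - y) / len(y)                   # gradient of the log-loss
        s = p * (1.0 - p)
        grad_gap = (X[group == 0] * s[group == 0, None]).mean(axis=0) \
                 - (X[group == 1] * s[group == 1, None]).mean(axis=0)
        w -= lr * (grad_task + lam * np.sign(gap) * grad_gap)    # primal descent step
        lam = max(0.0, lam + dual_lr * (abs(gap) - tol))         # dual ascent on the violation
    return w, lam
```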
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- Hide-and-Seek Privacy Challenge [88.49671206936259]
The NeurIPS 2020 Hide-and-Seek Privacy Challenge is a novel two-tracked competition to accelerate progress in tackling both problems.
In our head-to-head format, participants in the synthetic data generation track (i.e. "hiders") and the patient re-identification track (i.e. "seekers") are directly pitted against each other by way of a new, high-quality intensive care time-series dataset.
arXiv Detail & Related papers (2020-07-23T15:50:59Z)
- A Critical Overview of Privacy-Preserving Approaches for Collaborative Forecasting [0.0]
Cooperation between different data owners may lead to an improvement in forecast quality.
Due to business competitive factors and personal data protection questions, said data owners might be unwilling to share their data.
This paper analyses the state-of-the-art and unveils several shortcomings of existing methods in guaranteeing data privacy.
arXiv Detail & Related papers (2020-04-20T20:21:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.