Models and Mechanisms for Fairness in Location Data Processing
- URL: http://arxiv.org/abs/2204.01880v1
- Date: Mon, 4 Apr 2022 22:57:16 GMT
- Title: Models and Mechanisms for Fairness in Location Data Processing
- Authors: Sina Shaham, Gabriel Ghinita, Cyrus Shahabi
- Abstract summary: Location data use has become pervasive in the last decade due to the advent of mobile apps, as well as novel areas such as smart health, smart cities, etc.
At the same time, significant concerns have surfaced with respect to fairness in data processing.
In this paper, we adapt existing fairness models to suit the specific properties of location data and spatial processing.
- Score: 6.640563753223598
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Location data use has become pervasive in the last decade due to the advent
of mobile apps, as well as novel areas such as smart health, smart cities, etc.
At the same time, significant concerns have surfaced with respect to fairness
in data processing. Individuals from certain population segments may be
unfairly treated when being considered for loan or job applications, access to
public resources, or other types of services. In the case of location data,
fairness is an important concern, given that an individual's whereabouts are
often correlated with sensitive attributes, e.g., race, income, education.
While fairness has received significant attention recently, e.g., in the case
of machine learning, there is little focus on the challenges of achieving
fairness when dealing with location data. Due to their characteristics and
specific type of processing algorithms, location data pose important fairness
challenges that must be addressed in a comprehensive and effective manner. In
this paper, we adapt existing fairness models to suit the specific properties
of location data and spatial processing. We focus on individual fairness, which
is more difficult to achieve, and more relevant for most location data
processing scenarios. First, we devise a novel building block to achieve
fairness in the form of fair polynomials. Then, we propose two mechanisms based
on fair polynomials that achieve individual fairness, corresponding to two
common interaction types based on location data. Extensive experimental results
on real data show that the proposed mechanisms achieve individual location
fairness without sacrificing utility.
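The abstract's notion of individual fairness follows the classic Lipschitz formulation (similar individuals should receive similar outcomes). The sketch below illustrates that general definition only; it is not the paper's "fair polynomials" mechanism, and all function names and the toy data are illustrative assumptions.

```python
# Hedged illustration of individual fairness (Dwork et al. style), NOT the
# paper's fair-polynomial mechanism: a mechanism M is individually fair if
#   d_out(M(x), M(y)) <= L * d_in(x, y)   for all pairs of individuals x, y.
import math

def is_individually_fair(individuals, outcomes, d_in, d_out, lipschitz=1.0):
    """Check the Lipschitz condition over all pairs of individuals."""
    n = len(individuals)
    for i in range(n):
        for j in range(i + 1, n):
            # Small epsilon guards against floating-point noise.
            if d_out(outcomes[i], outcomes[j]) > lipschitz * d_in(individuals[i], individuals[j]) + 1e-9:
                return False
    return True

# Toy example: individuals are 2-D locations, outcomes are scores in [0, 1].
locs = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0)]
scores = [0.50, 0.52, 0.90]
euclid = lambda a, b: math.dist(a, b)   # input metric: Euclidean distance
gap = lambda s, t: abs(s - t)           # output metric: score difference
print(is_individually_fair(locs, scores, euclid, gap))  # True for this toy data
```

For location data the input metric would typically be a spatial distance, which is exactly why correlations between whereabouts and sensitive attributes make the choice of metric delicate.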
Related papers
- FairJob: A Real-World Dataset for Fairness in Online Systems [2.3622884172290255]
We introduce a fairness-aware dataset for job recommendations in advertising.
It was collected and prepared to comply with privacy standards and business confidentiality.
Despite being anonymized and including a proxy for a sensitive attribute, our dataset preserves predictive power.
arXiv Detail & Related papers (2024-07-03T12:30:39Z)
- Toward Fairer Face Recognition Datasets [69.04239222633795]
Face recognition and verification are computer vision tasks whose performance has progressed with the introduction of deep representations.
Ethical, legal, and technical challenges due to the sensitive character of face data and biases in real training datasets hinder their development.
We promote fairness by introducing a demographic attributes balancing mechanism in generated training datasets.
arXiv Detail & Related papers (2024-06-24T12:33:21Z)
- Utility-Fairness Trade-Offs and How to Find Them [14.1278892335105]
We introduce two utility-fairness trade-offs: the Data-Space and Label-Space Trade-off.
We propose U-FaTE, a method to numerically quantify the trade-offs for a given prediction task and group fairness definition from data samples.
An extensive evaluation of fair representation learning methods and representations from over 1000 pre-trained models revealed that most current approaches are far from the estimated and achievable fairness-utility trade-offs.
arXiv Detail & Related papers (2024-04-15T04:43:53Z)
- Fairness meets Cross-Domain Learning: a new perspective on Models and Metrics [80.07271410743806]
We study the relationship between cross-domain learning (CD) and model fairness.
We introduce a benchmark on face and medical images spanning several demographic groups as well as classification and localization tasks.
Our study covers 14 CD approaches alongside three state-of-the-art fairness algorithms and shows how the former can outperform the latter.
arXiv Detail & Related papers (2023-03-25T09:34:05Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Fair Spatial Indexing: A paradigm for Group Spatial Fairness [6.640563753223598]
We propose techniques to mitigate location bias in machine learning.
We focus on spatial group fairness and we propose a spatial indexing algorithm that accounts for fairness.
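Spatial group fairness, as described in the entry above, can be illustrated with a simple check: partition space into regions and require that the positive-outcome rate in each region stays close to the global rate. This is a hedged sketch under that standard statistical-parity interpretation; the partition, function names, and tolerance are illustrative assumptions, not the cited paper's algorithm.

```python
# Illustrative spatial group fairness check (statistical parity per region).
# `region_of` maps a point to a region id, e.g. a cell of a spatial index.
from collections import defaultdict

def region_rates(points, labels, region_of):
    """Positive-outcome rate per spatial region."""
    pos, tot = defaultdict(int), defaultdict(int)
    for p, y in zip(points, labels):
        r = region_of(p)
        tot[r] += 1
        pos[r] += y
    return {r: pos[r] / tot[r] for r in tot}

def is_spatially_group_fair(points, labels, region_of, tolerance=0.1):
    """Fair if every region's rate is within `tolerance` of the overall rate."""
    rates = region_rates(points, labels, region_of)
    overall = sum(labels) / len(labels)
    return all(abs(rate - overall) <= tolerance for rate in rates.values())

# Toy grid partition: unit cells indexed by integer coordinates.
grid = lambda p: (int(p[0]), int(p[1]))
pts = [(0.2, 0.3), (0.8, 0.1), (1.5, 0.5), (1.9, 0.9)]
ys = [1, 0, 1, 0]
print(is_spatially_group_fair(pts, ys, grid))  # True: both cells have rate 0.5
```

A fairness-aware spatial index, in this spirit, would choose region boundaries so that such per-region disparities stay small.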
arXiv Detail & Related papers (2023-02-05T05:15:11Z)
- Context matters for fairness -- a case study on the effect of spatial distribution shifts [10.351739012146378]
We present a case study on the newly released American Census datasets.
We show how remarkably spatial distribution shifts can affect the predictive and fairness-related performance of a model.
Our study suggests that robustness to distribution shifts is necessary before deploying a model to another context.
arXiv Detail & Related papers (2022-06-23T01:09:46Z)
- FairVFL: A Fair Vertical Federated Learning Framework with Contrastive Adversarial Learning [102.92349569788028]
We propose a fair vertical federated learning framework (FairVFL) to improve the fairness of VFL models.
The core idea of FairVFL is to learn unified and fair representations of samples based on the decentralized feature fields in a privacy-preserving way.
To protect user privacy, we propose a contrastive adversarial learning method that removes private information from the unified representation on the server.
arXiv Detail & Related papers (2022-06-07T11:43:32Z)
- A survey on datasets for fairness-aware machine learning [6.962333053044713]
A large variety of fairness-aware machine learning solutions have been proposed.
In this paper, we overview real-world datasets used for fairness-aware machine learning.
For a deeper understanding of bias and fairness in the datasets, we investigate relationships among their attributes using exploratory analysis.
arXiv Detail & Related papers (2021-10-01T16:54:04Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Through the Data Management Lens: Experimental Analysis and Evaluation of Fair Classification [75.49600684537117]
Data management research is showing an increasing presence and interest in topics related to data and algorithmic fairness.
We contribute a broad analysis of 13 fair classification approaches and additional variants, over their correctness, fairness, efficiency, scalability, and stability.
Our analysis highlights novel insights on the impact of different metrics and high-level approach characteristics on different aspects of performance.
arXiv Detail & Related papers (2021-01-18T22:55:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.