Fair Spatial Indexing: A paradigm for Group Spatial Fairness
- URL: http://arxiv.org/abs/2302.02306v1
- Date: Sun, 5 Feb 2023 05:15:11 GMT
- Title: Fair Spatial Indexing: A paradigm for Group Spatial Fairness
- Authors: Sina Shaham, Gabriel Ghinita, Cyrus Shahabi
- Abstract summary: We propose techniques to mitigate location bias in machine learning.
We focus on spatial group fairness and we propose a spatial indexing algorithm that accounts for fairness.
- Score: 6.640563753223598
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) is playing an increasing role in decision-making tasks
that directly affect individuals, e.g., loan approvals, or job applicant
screening. Significant concerns arise that, without special provisions,
individuals from under-privileged backgrounds may not get equitable access to
services and opportunities. Existing research studies fairness with respect to
protected attributes such as gender, race or income, but the impact of location
data on fairness has been largely overlooked. With the widespread adoption of
mobile apps, geospatial attributes are increasingly used in ML, and their
potential to introduce unfair bias is significant, given their high correlation
with protected attributes. We propose techniques to mitigate location bias in
machine learning. Specifically, we consider the issue of miscalibration when
dealing with geospatial attributes. We focus on spatial group fairness and we
propose a spatial indexing algorithm that accounts for fairness. Our KD-tree
inspired approach significantly improves fairness while maintaining high
learning accuracy, as shown by extensive experimental results on real data.
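The abstract does not spell out the split criterion of the proposed index, so the following is only a minimal sketch of the general idea: a KD-tree-style recursive partitioner whose split choice trades off size balance against the outcome disparity of a protected group inside each candidate child cell. The column layout (x, y, group, label), the weighting `alpha`, and the stopping size are illustrative assumptions, not the paper's parameters.

```python
# Minimal sketch (not the paper's algorithm): a KD-tree-style 2D partitioner
# whose split score penalizes both child-size imbalance and the disparity in
# mean outcome between a protected group and the rest of each child cell.
import numpy as np

def disparity(cell):
    """Absolute gap in mean outcome between the protected group and the rest."""
    protected, others = cell[cell[:, 2] == 1], cell[cell[:, 2] == 0]
    if len(protected) == 0 or len(others) == 0:
        return 0.0
    return abs(protected[:, 3].mean() - others[:, 3].mean())

def fair_split(points, min_size=50, alpha=0.5):
    """points: rows of (x, y, group, label). Returns a list of leaf cells."""
    if len(points) <= min_size:
        return [points]
    best = None
    for axis in (0, 1):                                       # candidate split axis: x or y
        median = np.median(points[:, axis])
        left = points[points[:, axis] <= median]
        right = points[points[:, axis] > median]
        if len(left) == 0 or len(right) == 0:
            continue
        balance = abs(len(left) - len(right)) / len(points)   # size imbalance
        unfairness = max(disparity(left), disparity(right))   # worst child disparity
        score = alpha * balance + (1 - alpha) * unfairness
        if best is None or score < best[0]:
            best = (score, left, right)
    if best is None:
        return [points]
    return fair_split(best[1], min_size, alpha) + fair_split(best[2], min_size, alpha)
```

Under this sketch, the resulting leaf cells could then serve as spatial groups for the calibration checks the abstract mentions; the paper's actual fairness criterion may differ.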
Related papers
- Thinking Racial Bias in Fair Forgery Detection: Models, Datasets and Evaluations [63.52709761339949]
We first contribute a dedicated dataset, the Fair Forgery Detection (FairFD) dataset, which demonstrates the racial bias of public state-of-the-art (SOTA) methods.
We design novel metrics including Approach Averaged Metric and Utility Regularized Metric, which can avoid deceptive results.
We also present an effective and robust post-processing technique, Bias Pruning with Fair Activations (BPFA), which improves fairness without requiring retraining or weight updates.
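The BPFA procedure itself is not described in this summary. As a loosely related illustration of post-hoc pruning without retraining or weight updates, the sketch below masks the hidden units whose mean activations differ most between two demographic groups; the layer choice, group encoding, and `prune_frac` are assumptions, and this is not the paper's method.

```python
# Generic post-hoc sketch (not BPFA): zero out the hidden units whose mean
# activations are most group-dependent, leaving all trained weights untouched.
import torch

@torch.no_grad()
def biased_unit_mask(hidden_acts, groups, prune_frac=0.05):
    """hidden_acts: (N, D) activations of one layer; groups: (N,) tensor in {0, 1}."""
    gap = (hidden_acts[groups == 1].mean(dim=0) -
           hidden_acts[groups == 0].mean(dim=0)).abs()
    k = max(1, int(prune_frac * hidden_acts.shape[1]))
    pruned = gap.topk(k).indices                      # most group-dependent units
    mask = torch.ones(hidden_acts.shape[1], device=hidden_acts.device)
    mask[pruned] = 0.0
    return mask                                       # multiply into the layer output
```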
arXiv Detail & Related papers (2024-07-19T14:53:18Z)
- FairJob: A Real-World Dataset for Fairness in Online Systems [2.3622884172290255]
We introduce a fairness-aware dataset for job recommendations in advertising.
It was collected and prepared to comply with privacy standards and business confidentiality.
Despite being anonymized and including a proxy for a sensitive attribute, our dataset preserves predictive power.
arXiv Detail & Related papers (2024-07-03T12:30:39Z)
- What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning [52.51430732904994]
In reinforcement learning problems, agents must consider long-term fairness while maximizing returns.
Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear.
We introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics.
arXiv Detail & Related papers (2024-04-16T22:47:59Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
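The summary does not describe how D-BIAS simulates the debiased data, so the sketch below only illustrates the edge-weakening step under a simple linear structural-equation assumption: fit the child variable on its parents, shrink the coefficient of the biased edge, and regenerate the child while keeping the original residuals. Column names and the linear-SCM assumption are illustrative.

```python
# Minimal sketch of weakening/deleting a causal edge in a linear SCM
# (not D-BIAS's actual simulation method).
import numpy as np

def weaken_edge(data, child, parents, biased_parent, strength=0.0):
    """data: dict of 1-D arrays. strength=0.0 deletes the edge, 0 < s < 1 weakens it."""
    X = np.column_stack([data[p] for p in parents] + [np.ones(len(data[child]))])
    y = data[child]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)      # fit child ~ parents + intercept
    residuals = y - X @ coef
    coef = coef.copy()
    coef[parents.index(biased_parent)] *= strength    # weaken/delete the biased edge
    debiased = dict(data)
    debiased[child] = X @ coef + residuals            # regenerate the child variable
    return debiased
```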
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- When Fairness Meets Privacy: Fair Classification with Semi-Private Sensitive Attributes [18.221858247218726]
We study a novel and practical problem of fair classification in a semi-private setting.
Most of the sensitive attributes are private, and only a small number of clean ones are available.
We propose a novel framework FairSP that can achieve Fair prediction under the Semi-Private setting.
arXiv Detail & Related papers (2022-07-18T01:10:25Z)
- Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
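The paper's exact feature weighting is not reproduced here; the sketch below shows a generic prediction-sensitivity style score (average finite-difference sensitivity of the positive-class probability to small input perturbations) next to the statistical parity gap it is linked with. `predict_proba` is any sklearn-like callable; `eps` and the feature weights are assumptions.

```python
# Sketch of a prediction-sensitivity score and the statistical parity gap
# (illustrative, not the ACCUMULATED PREDICTION SENSITIVITY definition itself).
import numpy as np

def prediction_sensitivity(predict_proba, X, feature_weights=None, eps=1e-3):
    n, d = X.shape
    weights = np.ones(d) if feature_weights is None else np.asarray(feature_weights)
    base = predict_proba(X)[:, 1]
    sens = np.zeros(n)
    for j in range(d):
        X_pert = X.copy()
        X_pert[:, j] += eps                              # perturb one feature
        sens += weights[j] * np.abs(predict_proba(X_pert)[:, 1] - base) / eps
    return sens.mean()

def statistical_parity_gap(predictions, groups):
    """Gap in positive-prediction rate between groups 1 and 0."""
    return abs(predictions[groups == 1].mean() - predictions[groups == 0].mean())
```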
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Fairness without the sensitive attribute via Causal Variational Autoencoder [17.675997789073907]
Due to privacy concerns and various regulations such as the GDPR in the EU, many personal sensitive attributes are frequently not collected.
By leveraging recent developments for approximate inference, we propose an approach to fill this gap.
Based on a causal graph, we rely on a new variational auto-encoding based framework named SRCVAE to infer a sensitive information proxy.
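SRCVAE's causal-graph structure is not detailed in this summary; as a loose illustration of the proxy idea, the sketch below trains a small VAE and treats its latent mean as a stand-in for the missing sensitive attribute. Layer sizes and the latent dimension are assumptions.

```python
# Minimal sketch (not SRCVAE): a plain VAE whose latent code is used as a
# proxy for an uncollected sensitive attribute.
import torch
import torch.nn as nn

class ProxyVAE(nn.Module):
    def __init__(self, n_features, latent_dim=1, hidden=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_features))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    recon_err = ((recon - x) ** 2).sum(dim=1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1).mean()
    return recon_err + kl
# After training, mu(x) can be fed to a fairness regularizer as the inferred proxy.
```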
arXiv Detail & Related papers (2021-09-10T17:12:52Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
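For reference, the sketch below shows the standard Monte Carlo dropout estimate of the BALD acquisition score (mutual information between the predicted label and the model parameters); it is a textbook formulation, not code taken from this paper. `mc_probs` is assumed to hold stochastic forward passes of shape (T, N, C).

```python
# Standard BALD score from T Monte Carlo dropout passes over N points, C classes.
import numpy as np

def bald_scores(mc_probs, eps=1e-12):
    mean_probs = mc_probs.mean(axis=0)                                       # (N, C)
    entropy_of_mean = -(mean_probs * np.log(mean_probs + eps)).sum(axis=1)
    mean_entropy = -(mc_probs * np.log(mc_probs + eps)).sum(axis=2).mean(axis=0)
    return entropy_of_mean - mean_entropy     # higher = more informative to label
```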
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Fairness without Demographics through Adversarially Reweighted Learning [20.803276801890657]
We train an ML model to improve fairness when we do not even know the protected group memberships.
In particular, we hypothesize that non-protected features and task labels are valuable for identifying fairness issues.
Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups in multiple datasets.
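A simplified sketch of the adversarial reweighting idea is given below: an adversary maps (non-protected features, label) to per-example weights that up-weight high-loss regions, the learner minimizes the weighted loss, and the adversary maximizes it. Network sizes, the optimizer settings, and the weight normalization are assumptions rather than the ARL paper's exact choices.

```python
# Simplified adversarial-reweighting training step (illustrative, not the
# reference ARL implementation). x: (B, 10) float features, y: (B, 1) float labels.
import torch
import torch.nn as nn

learner = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
adversary = nn.Sequential(nn.Linear(11, 32), nn.ReLU(), nn.Linear(32, 1))
opt_l = torch.optim.Adam(learner.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss(reduction="none")

def arl_step(x, y):
    # 1) Learner update: minimize the adversarially weighted loss (weights detached).
    per_example = bce(learner(x), y).squeeze(1)
    raw = adversary(torch.cat([x, y], dim=1)).squeeze(1)
    weights = 1.0 + x.shape[0] * torch.softmax(raw, dim=0)   # assumed normalization
    opt_l.zero_grad()
    (weights.detach() * per_example).mean().backward()
    opt_l.step()

    # 2) Adversary update: maximize the same objective (losses detached).
    per_example = bce(learner(x), y).squeeze(1).detach()
    raw = adversary(torch.cat([x, y], dim=1)).squeeze(1)
    weights = 1.0 + x.shape[0] * torch.softmax(raw, dim=0)
    opt_a.zero_grad()
    (-(weights * per_example).mean()).backward()
    opt_a.step()
```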
arXiv Detail & Related papers (2020-06-23T16:06:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.