Auditing for Spatial Fairness
- URL: http://arxiv.org/abs/2302.12333v1
- Date: Thu, 23 Feb 2023 20:56:18 GMT
- Title: Auditing for Spatial Fairness
- Authors: Dimitris Sacharidis, Giorgos Giannopoulos, George Papastefanatos,
Kostas Stefanidis
- Abstract summary: We study algorithmic fairness when the protected attribute is location.
Similar to established notions of algorithmic fairness, we define spatial fairness as the statistical independence of outcomes from location.
- Score: 5.048742886625779
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper studies algorithmic fairness when the protected attribute is
location. To handle protected attributes that are continuous, such as age or
income, the standard approach is to discretize the domain into predefined
groups, and compare algorithmic outcomes across groups. However, applying this
idea to location raises concerns of gerrymandering and may introduce
statistical bias. Prior work addresses these concerns but only for regularly
spaced locations, while raising other issues, most notably its inability to
discern regions that are likely to exhibit spatial unfairness. Similar to
established notions of algorithmic fairness, we define spatial fairness as the
statistical independence of outcomes from location. This translates into
requiring that for each region of space, the distribution of outcomes is
identical inside and outside the region. To allow for localized discrepancies
in the distribution of outcomes, we compare how well two competing hypotheses
explain the observed outcomes. The null hypothesis assumes spatial fairness,
while the alternative allows different distributions inside and outside regions.
Their goodness of fit is then assessed by a likelihood ratio test. If there is
no significant difference in how well the two hypotheses explain the observed
outcomes, we conclude that the algorithm is spatially fair.
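To make the test concrete, here is a minimal sketch of a likelihood-ratio test for a single candidate region, assuming binary outcomes modeled as Bernoulli variables. The function name `spatial_fairness_lrt` and the boolean region mask are illustrative choices, not the paper's implementation, and a full audit would repeat the test over many candidate regions (with an appropriate multiple-testing correction).

```python
import numpy as np
from scipy.stats import chi2

def spatial_fairness_lrt(outcomes, inside):
    """Likelihood-ratio test of spatial fairness for one candidate region.

    Null hypothesis: a single Bernoulli outcome rate everywhere.
    Alternative: separate rates inside and outside the region.
    """
    outcomes = np.asarray(outcomes, dtype=float)  # binary outcomes (0/1)
    inside = np.asarray(inside, dtype=bool)       # region-membership mask
    assert inside.any() and (~inside).any(), "region must split the data"

    def bernoulli_loglik(y):
        # Maximized Bernoulli log-likelihood; degenerate rates contribute 0.
        p = y.mean()
        if p in (0.0, 1.0):
            return 0.0
        return np.sum(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

    ll_null = bernoulli_loglik(outcomes)               # one global rate
    ll_alt = (bernoulli_loglik(outcomes[inside])       # rate inside region
              + bernoulli_loglik(outcomes[~inside]))   # rate outside region
    lr_stat = 2.0 * (ll_alt - ll_null)
    # The alternative has one extra free parameter, so df = 1 (Wilks).
    p_value = chi2.sf(lr_stat, df=1)
    return lr_stat, p_value
```

For example, `spatial_fairness_lrt(y, np.linalg.norm(coords - center, axis=1) < r)` would test whether outcomes within radius `r` of a hypothetical point `center` follow the same distribution as those outside; a large statistic (small p-value) rejects spatial fairness for that region.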
Related papers
- Implementing Fairness: the view from a FairDream [0.0]
We train an AI model and develop our own fairness package, FairDream, to detect inequalities and then correct for them.
Our experiments show that FairDream fulfills fairness objectives that are conditional on the ground truth.
arXiv Detail & Related papers (2024-07-20T06:06:24Z)
- Guarantee Regions for Local Explanations [29.429229877959663]
We propose an anchor-based algorithm for identifying regions in which local explanations are guaranteed to be correct.
Our method produces an interpretable feature-aligned box where the prediction of the local surrogate model is guaranteed to match the predictive model.
arXiv Detail & Related papers (2024-02-20T06:04:44Z)
- Causal Fair Machine Learning via Rank-Preserving Interventional Distributions [0.5062312533373299]
We define individuals as being normatively equal if they are equal in a fictitious, normatively desired (FiND) world.
We propose rank-preserving interventional distributions to define a specific FiND world in which this holds.
We show that our warping approach effectively identifies the most discriminated individuals and mitigates unfairness.
arXiv Detail & Related papers (2023-07-24T13:46:50Z)
- Proportional Fairness in Obnoxious Facility Location [70.64736616610202]
We propose a hierarchy of distance-based proportional fairness concepts for the problem.
We consider deterministic and randomized mechanisms, and compute tight bounds on the price of proportional fairness.
We prove existence results for two extensions to our model.
arXiv Detail & Related papers (2023-01-11T07:30:35Z)
- Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- On Disentangled and Locally Fair Representations [95.6635227371479]
We study the problem of performing classification in a manner that is fair for sensitive groups, such as race and gender.
We learn a locally fair representation, such that, under the learned representation, the neighborhood of each sample is balanced in terms of the sensitive attribute.
arXiv Detail & Related papers (2022-05-05T14:26:50Z)
- Nested Counterfactual Identification from Arbitrary Surrogate Experiments [95.48089725859298]
We study the identification of nested counterfactuals from an arbitrary combination of observations and experiments.
Specifically, we prove the counterfactual unnesting theorem (CUT), which allows one to map arbitrary nested counterfactuals to unnested ones.
arXiv Detail & Related papers (2021-07-07T12:51:04Z)
- On Localized Discrepancy for Domain Adaptation [146.4580736832752]
This paper studies the localized discrepancies defined on the hypothesis space after localization.
Their values differ when the two domains are exchanged, and can thus reveal asymmetric transfer difficulties.
arXiv Detail & Related papers (2020-08-14T08:30:02Z)
- Fast Fair Regression via Efficient Approximations of Mutual Information [0.0]
This paper introduces fast approximations of the independence, separation and sufficiency group fairness criteria for regression models.
It uses such approximations as regularisers to enforce fairness within a regularised risk minimisation framework.
Experiments on real-world datasets indicate that, despite its superior computational efficiency, our algorithm still displays state-of-the-art accuracy/fairness tradeoffs.
arXiv Detail & Related papers (2020-02-14T08:50:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.