Under the Radar -- Auditing Fairness in ML for Humanitarian Mapping
- URL: http://arxiv.org/abs/2108.02137v1
- Date: Wed, 4 Aug 2021 16:11:39 GMT
- Title: Under the Radar -- Auditing Fairness in ML for Humanitarian Mapping
- Authors: Lukas Kondmann, Xiao Xiang Zhu
- Abstract summary: We study whether humanitarian mapping approaches from space are prone to bias in their predictions.
We map village-level poverty and electricity rates in India based on nighttime lights (NTLs) with linear regression and random forest.
Our findings indicate that poverty is systematically overestimated and electricity systematically underestimated for scheduled tribes.
- Score: 15.241948239953444
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humanitarian mapping from space with machine learning helps
policy-makers identify people in need in a timely and accurate manner. However, recent concerns around
fairness and transparency of algorithmic decision-making are a significant
obstacle to applying these methods in practice. In this paper, we study whether
humanitarian mapping approaches from space are prone to bias in their
predictions. We map village-level poverty and electricity rates in India based
on nighttime lights (NTLs) with linear regression and random forest and analyze
if the predictions systematically show prejudice against scheduled caste or
tribe communities. To achieve this, we design a causal approach to measure
counterfactual fairness based on propensity score matching. This allows us to
compare villages within a community of interest to synthetic counterfactuals.
Our findings indicate that poverty is systematically overestimated and
electricity systematically underestimated for scheduled tribes in comparison to
a synthetic counterfactual group of villages. The effects have the opposite
direction for scheduled castes where poverty is underestimated and
electrification overestimated. These results are a warning sign for a variety
of applications in humanitarian mapping where fairness issues would compromise
policy goals.
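The audit idea described above can be sketched in code. This is an illustrative reading of the approach, not the authors' implementation: all variable names and the synthetic data are hypothetical, and the matching step is simplified to 1-nearest-neighbour matching on the propensity score.

```python
# Hedged sketch of a propensity-score-matching fairness audit:
# 1. fit a propensity model P(group | covariates),
# 2. match each village in the group of interest to its nearest-propensity control,
# 3. compare mean prediction error between the group and its matched counterfactuals.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
X = rng.normal(size=(n, 3))                              # village covariates (hypothetical)
group = (X[:, 0] + rng.normal(size=n) > 0).astype(int)   # 1 = community of interest
# synthetic prediction errors with a built-in systematic bias against the group
error = rng.normal(size=n) + 0.5 * group

# propensity scores: probability of belonging to the group given covariates
ps = LogisticRegression().fit(X, group).predict_proba(X)[:, 1]

treated = np.where(group == 1)[0]
control = np.where(group == 0)[0]
# 1-nearest-neighbour matching on the propensity score (with replacement)
matches = control[np.abs(ps[treated][:, None] - ps[control][None, :]).argmin(axis=1)]

# a positive gap suggests systematic over-estimation for the group of interest
gap = error[treated].mean() - error[matches].mean()
print(f"matched error gap: {gap:.3f}")
```

With the bias built into the synthetic errors, the matched gap comes out clearly positive, which is the kind of signal the paper reports for scheduled tribes.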
Related papers
- Fine-Grained Socioeconomic Prediction from Satellite Images with Distributional Adjustment [14.076490368696508]
We propose a method that assigns a socioeconomic score to each satellite image by capturing the distributional behavior observed in larger areas.
We train an ordinal regression scoring model and adjust the scores to follow the common power law within and across regions.
Our method also demonstrates robust performance in districts with uneven development, suggesting its potential use in developing countries.
arXiv Detail & Related papers (2023-08-30T12:06:04Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups described by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Fairness and representation in satellite-based poverty maps: Evidence of urban-rural disparities and their impacts on downstream policy [5.456665139074406]
This paper investigates disparities in representation, systematic biases in prediction errors, and fairness concerns in satellite-based poverty mapping across urban and rural lines.
Our findings highlight the importance of careful error and bias analysis before using satellite-based poverty maps in real-world policy decisions.
arXiv Detail & Related papers (2023-05-02T21:07:35Z)
- Fairness-enhancing deep learning for ride-hailing demand prediction [3.911105164672852]
Short-term demand forecasting for on-demand ride-hailing services is one of the fundamental issues in intelligent transportation systems.
Previous travel demand forecasting research predominantly focused on improving prediction accuracy, ignoring fairness issues.
This study investigates how to measure, evaluate, and enhance prediction fairness between disadvantaged and privileged communities.
arXiv Detail & Related papers (2023-03-10T04:37:14Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Metrizing Fairness [5.323439381187456]
We study supervised learning problems that have significant effects on individuals from two demographic groups.
We seek predictors that are fair with respect to a group fairness criterion such as statistical parity (SP).
In this paper, we identify conditions under which hard SP constraints are guaranteed to improve predictive accuracy.
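The statistical parity criterion mentioned in this summary can be made concrete with a small sketch (illustrative data and names; the paper itself studies when hard SP constraints improve accuracy):

```python
# Minimal sketch of the statistical parity (SP) gap for a binary classifier:
# the absolute difference in positive-prediction rates between two groups.
import numpy as np

def sp_gap(y_pred, group):
    """|P(y_hat = 1 | group 0) - P(y_hat = 1 | group 1)| for binary predictions."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

# toy predictions: group 0 receives positives at 75%, group 1 at 0%
y_pred = np.array([1, 1, 0, 1, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(sp_gap(y_pred, group))  # 0.75
```

A hard SP constraint would require this gap to be (near) zero for the learned predictor.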
arXiv Detail & Related papers (2022-05-30T12:28:10Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
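A simplified reading of the sensitivity idea in this summary can be sketched as follows. This is not the paper's exact metric; the model, data, and perturbation scheme here are all hypothetical:

```python
# Hedged sketch of a prediction-sensitivity style probe: how much do
# predictions move when a single (e.g. protected) input feature is perturbed?
import numpy as np

def prediction_sensitivity(predict, X, feature_idx, eps=1e-3):
    """Mean absolute change in predictions per unit perturbation of one feature."""
    X_pert = X.copy()
    X_pert[:, feature_idx] += eps
    return np.mean(np.abs(predict(X_pert) - predict(X))) / eps

# toy linear model whose score leans heavily on feature 0 and ignores feature 2
w = np.array([2.0, 0.1, 0.0])
predict = lambda X: X @ w

X = np.random.default_rng(1).normal(size=(100, 3))
print(prediction_sensitivity(predict, X, 0))  # ~2.0: highly sensitive
print(prediction_sensitivity(predict, X, 2))  # 0.0: insensitive
```

A large sensitivity to a protected feature is then treated as a fairness warning sign, which the paper links formally to statistical parity and individual fairness.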
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Predicting Livelihood Indicators from Community-Generated Street-Level Imagery [70.5081240396352]
We propose an inexpensive, scalable, and interpretable approach to predict key livelihood indicators from public crowd-sourced street-level imagery.
By comparing our results against ground data collected in nationally-representative household surveys, we demonstrate the performance of our approach in accurately predicting indicators of poverty, population, and health.
arXiv Detail & Related papers (2020-06-15T18:12:12Z)
- Generating Interpretable Poverty Maps using Object Detection in Satellite Images [80.35540308137043]
We demonstrate an interpretable computational framework to accurately predict poverty at a local level by applying object detectors to satellite images.
Using the weighted counts of objects as features, we achieve a Pearson's r² of 0.539 in predicting village-level poverty in Uganda, a 31% improvement over existing (and less interpretable) benchmarks.
arXiv Detail & Related papers (2020-02-05T02:50:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.