Matchings, Predictions and Counterfactual Harm in Refugee Resettlement Processes
- URL: http://arxiv.org/abs/2407.13052v1
- Date: Fri, 24 May 2024 19:51:01 GMT
- Title: Matchings, Predictions and Counterfactual Harm in Refugee Resettlement Processes
- Authors: Seungeon Lee, Nina Corvelo Benz, Suhas Thejaswi, Manuel Gomez-Rodriguez
- Abstract summary: Resettlement agencies have started to adopt data-driven algorithmic matching, which matches refugees to locations using employment rate as a measure of utility.
We develop a post-processing algorithm that, given placement decisions made by a default policy on a pool of refugees, solves an inverse matching problem.
Under these modified predictions, the optimal matching policy that maximizes predicted utility on the pool is guaranteed to be not harmful.
- Score: 15.140146403589952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Resettlement agencies have started to adopt data-driven algorithmic matching to match refugees to locations using employment rate as a measure of utility. Given a pool of refugees, data-driven algorithmic matching utilizes a classifier to predict the probability that each refugee would find employment at any given location. Then, it uses the predicted probabilities to estimate the expected utility of all possible placement decisions. Finally, it finds the placement decisions that maximize the predicted utility by solving a maximum weight bipartite matching problem. In this work, we argue that, using existing solutions, there may be pools of refugees for which data-driven algorithmic matching is (counterfactually) harmful -- it would have achieved lower utility than a given default policy used in the past, had it been used. Then, we develop a post-processing algorithm that, given placement decisions made by a default policy on a pool of refugees and their employment outcomes, solves an inverse matching problem to minimally modify the predictions made by a given classifier. Under these modified predictions, the optimal matching policy that maximizes predicted utility on the pool is guaranteed to be not harmful. Further, we introduce a Transformer model that, given placement decisions made by a default policy on multiple pools of refugees and their employment outcomes, learns to modify the predictions made by a classifier so that the optimal matching policy that maximizes predicted utility under the modified predictions on an unseen pool of refugees is less likely to be harmful than under the original predictions. Experiments on simulated resettlement processes using synthetic refugee data created from a variety of publicly available data suggest that our methodology may be effective in making algorithmic placement decisions that are less likely to be harmful than existing solutions.
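The pipeline the abstract describes (predict per-refugee, per-location employment probabilities, then solve a maximum weight bipartite matching over the predicted utilities) can be sketched in a few lines. Below is a minimal illustration, not the authors' implementation: the utility matrix, the default policy, and the harm check are all synthetic stand-ins, and the comparison at the end is only a crude proxy for the paper's counterfactual definition of harm.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_refugees, n_locations = 5, 5

# Hypothetical classifier output: predicted probability that refugee i
# would find employment if placed at location j.
pred = rng.uniform(0.1, 0.9, size=(n_refugees, n_locations))

# Maximum weight bipartite matching; scipy minimizes cost, so negate.
rows, cols = linear_sum_assignment(-pred)
matched_utility = pred[rows, cols].sum()
print("placement:", dict(zip(rows.tolist(), cols.tolist())))
print("predicted utility:", round(float(matched_utility), 3))

# Simplified harm check: compare the matching's predicted utility against
# the realized employment outcomes of a (hypothetical) default policy.
# The paper defines counterfactual harm via counterfactual outcomes; this
# comparison is only an illustrative proxy for that notion.
default_placement = rng.permutation(n_locations)
realized = rng.binomial(1, pred[np.arange(n_refugees), default_placement])
print("harmful on this pool:", bool(matched_utility < realized.sum()))
```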
Related papers
- Clipped SGD Algorithms for Privacy Preserving Performative Prediction: Bias Amplification and Remedies [28.699424769503764]
Clipped stochastic gradient descent (SGD) algorithms are among the most popular algorithms for privacy-preserving optimization.
This paper studies the convergence properties of these algorithms in a performative prediction setting.
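As a loose illustration of the clipping step this family of algorithms relies on (a generic sketch of clipped SGD with optional Gaussian noise, not this paper's performative-prediction analysis):

```python
import numpy as np

def clipped_sgd_step(w, X, y, lr=0.1, clip=1.0, noise_scale=0.0, rng=None):
    """One clipped-SGD step for least squares: per-example gradients are
    norm-clipped, averaged, and optionally perturbed with Gaussian noise,
    the usual recipe in private optimization (generic sketch)."""
    rng = rng or np.random.default_rng()
    grads = 2.0 * (X @ w - y)[:, None] * X                 # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    g = grads.mean(axis=0) + noise_scale * rng.standard_normal(w.shape)
    return w - lr * g

rng = np.random.default_rng(1)
X = rng.standard_normal((64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.standard_normal(64)
w = np.zeros(3)
for _ in range(200):
    w = clipped_sgd_step(w, X, y, noise_scale=0.01, rng=rng)
print(np.round(w, 2))   # should land roughly near w_true
```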
arXiv Detail & Related papers (2024-04-17T02:17:05Z)
- Partial-Label Learning with a Reject Option [3.1201323892302444]
We propose a novel partial-label learning algorithm with a reject option; that is, the algorithm can reject unsure predictions.
Our method provides the best trade-off between the number and accuracy of non-rejected predictions when compared to our competitors.
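A generic reject-option wrapper (not the paper's partial-label method) makes the idea concrete: a prediction is returned only when the model's confidence clears a threshold.

```python
import numpy as np

def predict_with_reject(probs, threshold=0.7):
    """Return the argmax label when the top class probability clears the
    threshold, otherwise reject (None). Generic reject option, not the
    paper's partial-label algorithm."""
    top = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return [int(l) if p >= threshold else None for l, p in zip(labels, top)]

probs = np.array([[0.9, 0.05, 0.05],   # confident -> predicted
                  [0.4, 0.35, 0.25]])  # unsure    -> rejected
print(predict_with_reject(probs))      # [0, None]
```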
arXiv Detail & Related papers (2024-02-01T13:41:44Z)
- Experiment Planning with Function Approximation [49.50254688629728]
We study the problem of experiment planning with function approximation in contextual bandit problems.
We propose two experiment planning strategies compatible with function approximation.
We show that a uniform sampler achieves competitive optimality rates in the setting where the number of actions is small.
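A minimal sketch of a uniform sampler in this setting, under the assumption that logged (context, action, reward) triples are later used for offline policy learning; the reward model below is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
n_rounds, n_actions = 1000, 4

# Uniform experiment planning (sketch): play actions uniformly at random,
# independent of context, so every action is explored and the logged data
# can support offline policy learning later.
contexts = rng.standard_normal((n_rounds, 3))
actions = rng.integers(0, n_actions, size=n_rounds)
# Hypothetical reward model, used only to simulate feedback.
rewards = contexts[:, 0] * (actions == 0) + rng.normal(0.0, 0.1, n_rounds)
dataset = list(zip(contexts, actions, rewards))
print(len(dataset), "logged (context, action, reward) triples")
```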
arXiv Detail & Related papers (2024-01-10T14:40:23Z)
- Policy learning "without" overlap: Pessimism and generalized empirical Bernstein's inequality [94.89246810243053]
This paper studies offline policy learning, which aims at utilizing observations collected a priori to learn an optimal individualized decision rule.
Existing policy learning methods rely on a uniform overlap assumption, i.e., the propensities of exploring all actions for all individual characteristics must be lower bounded.
We propose Pessimistic Policy Learning (PPL), a new algorithm that optimizes lower confidence bounds (LCBs) instead of point estimates.
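A generic sketch of the pessimism principle (not PPL itself): choose, per context, the action whose lower confidence bound is largest, so poorly explored actions are penalized.

```python
import numpy as np

def lcb_policy(mean_est, counts, alpha=1.0):
    """Pick, for each context, the action maximizing a lower confidence
    bound: mean estimate minus an uncertainty width that shrinks with the
    number of logged observations. Generic pessimism sketch, not PPL."""
    width = alpha / np.sqrt(np.maximum(counts, 1))
    return np.argmax(mean_est - width, axis=1)

mean_est = np.array([[0.8, 0.6],     # action 0 looks better on average...
                     [0.5, 0.7]])
counts = np.array([[2, 100],         # ...but was barely explored
                   [50, 50]])
print(lcb_policy(mean_est, counts))  # pessimism prefers well-covered actions
```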
arXiv Detail & Related papers (2022-12-19T22:43:08Z)
- Robust Design and Evaluation of Predictive Algorithms under Unobserved Confounding [2.8498944632323755]
We propose a unified framework for the robust design and evaluation of predictive algorithms in selectively observed data.
We impose general assumptions on how much the outcome may vary on average between unselected and selected units.
We develop debiased machine learning estimators for the bounds on a large class of predictive performance estimands.
arXiv Detail & Related papers (2022-12-19T20:41:44Z)
- Off-Policy Evaluation with Policy-Dependent Optimization Response [90.28758112893054]
We develop a new framework for off-policy evaluation with a *policy-dependent* linear optimization response.
We construct unbiased estimators for the policy-dependent estimand by a perturbation method.
We provide a general algorithm for optimizing causal interventions.
arXiv Detail & Related papers (2022-02-25T20:25:37Z)
- Efficient and Differentiable Conformal Prediction with General Function Classes [96.74055810115456]
We propose a generalization of conformal prediction to multiple learnable parameters.
We show that it achieves approximate valid population coverage and near-optimal efficiency within class.
Experiments show that our algorithm is able to learn valid prediction sets and improve the efficiency significantly.
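For reference, standard split conformal prediction, the classical baseline this paper generalizes, fits in a few lines; the regression setup below is synthetic.

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_y, test_pred, alpha=0.1):
    """Standard split conformal prediction: calibrate absolute residuals on
    held-out data and return intervals with ~(1 - alpha) marginal coverage.
    This is the classical baseline, not the paper's learnable extension."""
    scores = np.abs(cal_y - cal_pred)
    n = len(scores)
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    return test_pred - q, test_pred + q

rng = np.random.default_rng(3)
cal_pred = rng.normal(0, 1, 500)
cal_y = cal_pred + rng.normal(0, 0.5, 500)   # residual noise sd = 0.5
lo, hi = split_conformal_interval(cal_pred, cal_y, np.array([0.0]), alpha=0.1)
print(lo, hi)   # roughly [-0.82, 0.82], i.e. +/- the 90% residual quantile
```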
arXiv Detail & Related papers (2022-02-22T18:37:23Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- On the Optimality of Batch Policy Optimization Algorithms [106.89498352537682]
Batch policy optimization considers leveraging existing data for policy construction before interacting with an environment.
We show that any confidence-adjusted index algorithm is minimax optimal, whether it be optimistic, pessimistic or neutral.
We introduce a new weighted-minimax criterion that considers the inherent difficulty of optimal value prediction.
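A minimal sketch of a confidence-adjusted index, where the sign of a single parameter interpolates between optimism, neutrality, and pessimism (a generic illustration of the algorithm family, not the paper's weighted-minimax criterion):

```python
import numpy as np

def confidence_adjusted_index(means, counts, beta):
    """Index = mean estimate + beta * confidence width. beta > 0 is
    optimistic, beta < 0 pessimistic, beta = 0 neutral (generic sketch)."""
    width = 1.0 / np.sqrt(np.maximum(counts, 1))
    return means + beta * width

means = np.array([0.5, 0.6])
counts = np.array([100, 4])   # arm 1 is better on average but barely explored
for beta, name in [(1.0, "optimistic"), (0.0, "neutral"), (-1.0, "pessimistic")]:
    pick = int(np.argmax(confidence_adjusted_index(means, counts, beta)))
    print(name, "picks arm", pick)   # pessimism alone prefers arm 0
```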
arXiv Detail & Related papers (2021-04-06T05:23:20Z)
- Outcome-Driven Dynamic Refugee Assignment with Allocation Balancing [0.0]
We propose two new dynamic assignment algorithms to match refugees and asylum seekers to geographic localities within a host country.
The first seeks to maximize the average predicted employment level (or any measured outcome of interest) of refugees through a minimum-discord online assignment algorithm.
The second algorithm balances the goal of improving refugee outcomes with the desire for an even allocation over time.
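A crude sketch of the outcome-versus-balance trade-off (not the paper's algorithms): each arrival is assigned greedily to the feasible locality with the best predicted outcome minus a load penalty; the penalty weight and reward model are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(4)
n_arrivals, n_localities = 12, 3
capacity = np.full(n_localities, 4)
load = np.zeros(n_localities)

# Greedy online assignment (illustrative only): each arriving case goes to
# the feasible locality with the best predicted outcome minus a penalty on
# current load, trading predicted outcome for an even allocation over time.
lam = 0.05  # balancing weight (hypothetical)
for t in range(n_arrivals):
    pred = rng.uniform(0, 1, n_localities)   # predicted employment probs
    score = pred - lam * load
    score[load >= capacity] = -np.inf        # respect locality capacities
    j = int(np.argmax(score))
    load[j] += 1
print("final allocation per locality:", load.astype(int))
```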
arXiv Detail & Related papers (2020-07-02T21:28:15Z)
- Causality and Robust Optimization [2.690502103971798]
Confounding bias is a problem when applying machine learning to prediction tasks.
We propose a meta-algorithm that can remedy confounding bias in existing feature selection algorithms.
arXiv Detail & Related papers (2020-02-28T10:02:59Z)