Learning Fair Policies for Multi-stage Selection Problems from
Observational Data
- URL: http://arxiv.org/abs/2312.13173v1
- Date: Wed, 20 Dec 2023 16:33:15 GMT
- Title: Learning Fair Policies for Multi-stage Selection Problems from
Observational Data
- Authors: Zhuangzhuang Jia, Grani A. Hanasusanto, Phebe Vayanos and Weijun Xie
- Abstract summary: We consider the problem of learning fair policies for multi-stage selection problems from observational data.
This problem arises in several high-stakes domains such as company hiring, loan approval, or bail decisions where outcomes are only observed for those selected.
We propose a multi-stage framework that can be augmented with various fairness constraints, such as demographic parity or equal opportunity.
- Score: 4.282745020665833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of learning fair policies for multi-stage selection
problems from observational data. This problem arises in several high-stakes
domains such as company hiring, loan approval, or bail decisions where outcomes
(e.g., career success, loan repayment, recidivism) are only observed for those
selected. We propose a multi-stage framework that can be augmented with various
fairness constraints, such as demographic parity or equal opportunity. This
problem is a highly intractable infinite chance-constrained program involving
the unknown joint distribution of covariates and outcomes. Motivated by the
potential impact of selection decisions on people's lives and livelihoods, we
propose to focus on interpretable linear selection rules. Leveraging tools from
causal inference and sample average approximation, we obtain an asymptotically
consistent solution to this selection problem by solving a mixed binary conic
optimization problem, which can be solved using standard off-the-shelf solvers.
We conduct extensive computational experiments on a variety of datasets adapted
from the UCI repository on which we show that our proposed approaches can
achieve an 11.6% improvement in precision and a 38% reduction in the measure of
unfairness compared to the existing selection policy.
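The fairness constraints named in the abstract bound group-level gaps in selection rates. A minimal sketch of the two quantities for a linear selection rule, assuming a binary group attribute and binary outcomes (function names and the simple thresholded rule are illustrative, not the paper's exact formulation):

```python
import numpy as np

def selection_rule(X, w, b=0.0):
    """Interpretable linear rule: select individual x if w^T x + b >= 0."""
    return (X @ w + b >= 0).astype(int)

def demographic_parity_gap(selected, group):
    """|P(selected | group=1) - P(selected | group=0)|."""
    return abs(selected[group == 1].mean() - selected[group == 0].mean())

def equal_opportunity_gap(selected, group, outcome):
    """Same gap, restricted to individuals with a positive outcome
    (e.g., those who would repay the loan)."""
    pos = outcome == 1
    return abs(selected[(group == 1) & pos].mean() -
               selected[(group == 0) & pos].mean())
```

In the paper's setting these gaps are constrained inside the chance-constrained program rather than measured post hoc, and outcomes are only observed for selected individuals, which is why tools from causal inference are needed to estimate them.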
Related papers
- Illuminating the Diversity-Fitness Trade-Off in Black-Box Optimization [9.838618121102053]
In real-world applications, users often favor structurally diverse design choices over one high-quality solution.
This paper presents a fresh perspective on this challenge by considering the problem of identifying a fixed number of solutions with a pairwise distance above a specified threshold.
arXiv Detail & Related papers (2024-08-29T09:55:55Z) - Best Arm Identification with Fixed Budget: A Large Deviation Perspective [54.305323903582845]
We present sred, a truly adaptive algorithm that can reject arms at any round based on the observed empirical gaps between the rewards of the various arms.
arXiv Detail & Related papers (2023-12-19T13:17:43Z) - Tackling Diverse Minorities in Imbalanced Classification [80.78227787608714]
Imbalanced datasets are commonly observed in various real-world applications, presenting significant challenges in training classifiers.
We propose generating synthetic samples iteratively by mixing data samples from both minority and majority classes.
We demonstrate the effectiveness of our proposed framework through extensive experiments conducted on seven publicly available benchmark datasets.
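The mixing idea above can be sketched as a convex interpolation between randomly paired minority and majority points, biased toward the minority class (a minimal mixup-style illustration of the general mechanism, not the paper's exact iterative procedure; the `alpha` parameter is an assumption):

```python
import numpy as np

def mix_synthetic_samples(X_min, X_maj, n_samples, alpha=0.7, rng=None):
    """Generate synthetic minority-leaning samples by convex mixing
    of randomly paired minority and majority class points."""
    rng = np.random.default_rng(rng)
    i = rng.integers(0, len(X_min), n_samples)  # random minority indices
    j = rng.integers(0, len(X_maj), n_samples)  # random majority indices
    # Mixing weight close to 1 keeps synthetic points near the minority class.
    lam = rng.uniform(alpha, 1.0, size=(n_samples, 1))
    return lam * X_min[i] + (1 - lam) * X_maj[j]
```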
arXiv Detail & Related papers (2023-08-28T18:48:34Z) - Multi-Target Multiplicity: Flexibility and Fairness in Target
Specification under Resource Constraints [76.84999501420938]
We introduce a conceptual and computational framework for assessing how the choice of target affects individuals' outcomes.
We show that the level of multiplicity that stems from target variable choice can be greater than that stemming from nearly-optimal models of a single target.
arXiv Detail & Related papers (2023-06-23T18:57:14Z) - Improving Probability-based Prompt Selection Through Unified Evaluation
and Analysis [52.04932081106623]
We propose a unified framework to interpret and evaluate the existing probability-based prompt selection methods.
We find that each of the existing methods can be interpreted as some variant of the method that maximizes mutual information between the input and the predicted output (MI)
We propose a novel calibration method, Calibration By Marginalization (CBM), that is orthogonal to the existing methods and helps increase the prompt selection effectiveness of the best method to 96.85%, achieving 99.44% of the oracle prompt F1 without calibration.
arXiv Detail & Related papers (2023-05-24T08:29:50Z) - In Search of Insights, Not Magic Bullets: Towards Demystification of the
Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z) - Bi-objective Ranking and Selection Using Stochastic Kriging [0.0]
We consider bi-objective ranking and selection problems in which the two objective outcomes have been observed with uncertainty.
We propose a novel Bayesian bi-objective ranking and selection method that sequentially allocates extra samples to competitive solutions.
Experimental results show that the proposed method outperforms the standard allocation method, as well as a well-known state-of-the-art algorithm.
arXiv Detail & Related papers (2022-09-05T23:51:07Z) - Variance-Reduced Heterogeneous Federated Learning via Stratified Client
Selection [31.401919362978017]
We propose a novel stratified client selection scheme to reduce the variance for the pursuit of better convergence and higher accuracy.
We present an optimized sample size allocation scheme that accounts for the variability within each stratum.
Experimental results confirm that our approach not only allows for better performance relative to state-of-the-art methods but also is compatible with prevalent FL algorithms.
arXiv Detail & Related papers (2022-01-15T05:41:36Z) - Model Selection in Batch Policy Optimization [88.52887493684078]
We study the problem of model selection in batch policy optimization.
We identify three sources of error that any model selection algorithm should optimally trade-off in order to be competitive.
arXiv Detail & Related papers (2021-12-23T02:31:50Z) - Fair Incentives for Repeated Engagement [0.46040036610482665]
We study the problem of finding optimal monetary incentive schemes for retention when faced with agents whose participation decisions depend on the incentive they receive.
We show that even in the absence of explicit discrimination, policies may unintentionally discriminate between agents of different types by varying the type composition of the system.
arXiv Detail & Related papers (2021-10-28T04:13:53Z) - Robust Active Preference Elicitation [10.961537256186498]
We study the problem of eliciting the preferences of a decision-maker through a moderate number of pairwise comparison queries.
We are motivated by applications in high stakes domains, such as when choosing a policy for allocating scarce resources.
arXiv Detail & Related papers (2020-03-04T05:24:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.