A Flexible Defense Against the Winner's Curse
- URL: http://arxiv.org/abs/2411.18569v1
- Date: Wed, 27 Nov 2024 18:14:08 GMT
- Title: A Flexible Defense Against the Winner's Curse
- Authors: Tijana Zrnic, William Fithian
- Abstract summary: Cherry-picking the best candidate leads to the winner's curse.
We introduce the zoom correction, a novel approach for valid inference on the winner.
- Score: 11.962736179290413
- Abstract: Across science and policy, decision-makers often need to draw conclusions about the best candidate among competing alternatives. For instance, researchers may seek to infer the effectiveness of the most successful treatment or determine which demographic group benefits most from a specific treatment. Similarly, in machine learning, practitioners are often interested in the population performance of the model that performs best empirically. However, cherry-picking the best candidate leads to the winner's curse: the observed performance for the winner is biased upwards, rendering conclusions based on standard measures of uncertainty invalid. We introduce the zoom correction, a novel approach for valid inference on the winner. Our method is flexible: it can be employed in both parametric and nonparametric settings, can handle arbitrary dependencies between candidates, and automatically adapts to the level of selection bias. The method easily extends to important related problems, such as inference on the top k winners, inference on the value and identity of the population winner, and inference on "near-winners."
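To make the selection bias concrete, here is a minimal simulation sketch (plain Python standard library; the function and parameter names are illustrative, not from the paper): several candidates share the same true mean, yet the empirical winner's observed score systematically overshoots its true value, which is exactly the upward bias the zoom correction is designed to account for.

```python
import random
import statistics

random.seed(0)

def winners_curse_gap(n_candidates=10, n_obs=50, n_trials=2000):
    """Estimate the upward bias in the empirical winner's score.

    Every candidate has true mean 0, so any positive gap between the
    winner's observed mean and its true mean is pure selection bias.
    """
    gaps = []
    for _ in range(n_trials):
        # Each candidate's observed score: mean of n_obs noisy draws.
        observed = [
            statistics.fmean(random.gauss(0.0, 1.0) for _ in range(n_obs))
            for _ in range(n_candidates)
        ]
        # Winner's observed mean minus its true mean (0).
        gaps.append(max(observed))
    return statistics.fmean(gaps)

bias = winners_curse_gap()
print(f"average observed-minus-true gap for the winner: {bias:.3f}")
```

A naive confidence interval centered at the winner's observed score would be shifted by roughly this gap, which shrinks as the number of observations grows and widens as more candidates compete.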
Related papers
- (Un)certainty of (Un)fairness: Preference-Based Selection of Certainly Fair Decision-Makers [0.0]
Fairness metrics are used to assess discrimination and bias in decision-making processes across various domains.
We quantify the uncertainty of the disparity to enhance discrimination assessments.
We define preferences over decision-makers and use brute-force search to choose the optimal decision-maker.
arXiv Detail & Related papers (2024-09-19T11:44:03Z)
- Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference [50.95521705711802]
Previous studies have focused on addressing selection bias to achieve unbiased learning of the prediction model.
This paper formally formulates the neighborhood effect as an interference problem from the perspective of causal inference.
We propose a novel ideal loss that can be used to deal with selection bias in the presence of neighborhood effect.
arXiv Detail & Related papers (2024-04-30T15:20:41Z)
- Online Decision Mediation [72.80902932543474]
Consider learning a decision support assistant to serve as an intermediary between (oracle) expert behavior and (imperfect) human behavior.
In clinical diagnosis, fully-autonomous machine behavior is often beyond ethical affordances.
arXiv Detail & Related papers (2023-10-28T05:59:43Z)
- Provable Benefits of Policy Learning from Human Preferences in Contextual Bandit Problems [82.92678837778358]
Preference-based methods have demonstrated substantial success in empirical applications such as InstructGPT.
We show how human bias and uncertainty in feedback modeling can affect the theoretical guarantees of these approaches.
arXiv Detail & Related papers (2023-07-24T17:50:24Z)
- In Search of Insights, Not Magic Bullets: Towards Demystification of the Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z)
- Fairness in Selection Problems with Strategic Candidates [9.4148805532663]
We study how the strategic aspect affects fairness in selection problems.
A population of rational candidates compete by choosing an effort level to increase their quality.
We characterize the (unique) equilibrium of this game in the different parameter regimes.
arXiv Detail & Related papers (2022-05-24T17:03:32Z)
- Towards Equal Opportunity Fairness through Adversarial Learning [64.45845091719002]
Adversarial training is a common approach for bias mitigation in natural language processing.
We propose an augmented discriminator for adversarial training, which takes the target class as input to create richer features.
arXiv Detail & Related papers (2022-03-12T02:22:58Z)
- On Fair Selection in the Presence of Implicit and Differential Variance [22.897402186120434]
We study a model where the decision maker receives a noisy estimate of each candidate's quality, whose variance depends on the candidate's group.
We show that both baseline decision makers yield discrimination, although in opposite directions.
arXiv Detail & Related papers (2021-12-10T16:04:13Z)
- MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for Answer Selection [59.95429407899612]
We propose a novel reinforcement learning based multi-step ranking model, named MS-Ranker.
We explicitly consider the potential correctness of candidates and update the evidence with a gating mechanism.
Our model significantly outperforms existing methods that do not rely on external resources.
arXiv Detail & Related papers (2020-10-10T10:36:58Z)
- Intersectional Affirmative Action Policies for Top-k Candidates Selection [3.4961413413444817]
We study the problem of selecting the top-k candidates from a pool of applicants, where each candidate is associated with a score indicating his/her aptitude.
We consider a situation in which some groups of candidates experience historical and present disadvantage that makes their chances of being accepted much lower than other groups.
We propose two algorithms to solve this problem, analyze them, and evaluate them experimentally using a dataset of university application scores and admissions to bachelor degrees in an OECD country.
arXiv Detail & Related papers (2020-07-29T12:27:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.