Maximizing Welfare with Incentive-Aware Evaluation Mechanisms
- URL: http://arxiv.org/abs/2011.01956v1
- Date: Tue, 3 Nov 2020 19:00:43 GMT
- Title: Maximizing Welfare with Incentive-Aware Evaluation Mechanisms
- Authors: Nika Haghtalab, Nicole Immorlica, Brendan Lucier, Jack Z. Wang
- Abstract summary: We propose an evaluation problem where the inputs are controlled by strategic individuals who can modify their features at a cost.
A learner can only partially observe the features, and aims to classify individuals with respect to a quality score.
- Score: 18.304048425012503
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by applications such as college admission and insurance rate
determination, we propose an evaluation problem where the inputs are controlled
by strategic individuals who can modify their features at a cost. A learner can
only partially observe the features, and aims to classify individuals with
respect to a quality score. The goal is to design an evaluation mechanism that
maximizes the overall quality score, i.e., welfare, in the population, taking
any strategic updating into account. We further study the algorithmic aspect of
finding the welfare maximizing evaluation mechanism under two specific settings
in our model. When scores are linear and mechanisms use linear scoring rules on
the observable features, we show that the optimal evaluation mechanism is an
appropriate projection of the quality score. When mechanisms must use linear
thresholds, we design a polynomial time algorithm with a (1/4)-approximation
guarantee when the underlying feature distribution is sufficiently smooth and
admits an oracle for finding dense regions. We extend our results to settings
where the prior distribution is unknown and must be learned from samples.
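To make the linear-scoring result concrete, below is a minimal NumPy sketch (an illustration under stated assumptions, not the paper's construction): it assumes a quadratic manipulation cost and a unit-norm budget on the scoring rule, and takes "projection" to mean restricting the true quality weights to the observable coordinates. Under those toy assumptions, the projected rule maximizes post-manipulation welfare among linear rules on the observed features.

```python
import numpy as np

# Toy sketch of the incentive-aware evaluation setting.
# Assumptions (not from the paper): quadratic manipulation cost (c/2) * ||delta||^2,
# a unit-norm budget on the linear scoring rule, and "projection" taken to mean
# restricting the true quality weights to the observable coordinates.
rng = np.random.default_rng(0)

d = 5                              # total number of features
observed = np.array([0, 1, 2])     # feature indices the learner can observe
w = rng.normal(size=d)             # true linear quality score: quality(x) = w @ x
X = rng.normal(size=(1000, d))     # population of individuals
c = 2.0                            # manipulation cost scale

def welfare(theta_obs):
    """Average true quality after individuals best-respond to the linear rule theta_obs.

    Each individual maximizes theta @ x' - (c/2) * ||x' - x||^2, so the optimal
    shift is theta / c on the coordinates the rule actually scores.
    """
    theta = np.zeros(d)
    theta[observed] = theta_obs
    X_shifted = X + theta / c      # best response under quadratic cost
    return (X_shifted @ w).mean()

def unit(v):
    return v / np.linalg.norm(v)

# Candidate 1: projection of the quality score onto the observable coordinates.
proj_rule = unit(w[observed])
# Candidate 2: a naive rule that scores only the first observable feature.
naive_rule = unit(np.eye(len(observed))[0])

print("welfare, projected rule:", welfare(proj_rule))
print("welfare, naive rule:    ", welfare(naive_rule))
```

In this simplified setting the search over unit-norm linear rules is convex and the projected rule is optimal; the paper's linear-threshold setting is the harder, non-convex case where the (1/4)-approximation guarantee applies.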
Related papers
- Iterative Feature Space Optimization through Incremental Adaptive Evaluation [30.404285897464256]
Iterative feature space optimization involves systematically evaluating and adjusting the feature space to improve downstream task performance.
Existing works suffer from three key limitations.
Overlooking differences among data samples leads to evaluation bias.
Tailoring feature spaces to specific machine learning models results in overfitting and poor generalization.
We propose a gEneralized Adaptive feature Space Evaluator to efficiently produce optimal and generalized feature spaces.
arXiv Detail & Related papers (2025-01-24T19:24:20Z) - A Hybrid Framework for Statistical Feature Selection and Image-Based Noise-Defect Detection [55.2480439325792]
This paper presents a hybrid framework that integrates both statistical feature selection and classification techniques to improve defect detection accuracy.
We present around 55 distinguished features that are extracted from industrial images, which are then analyzed using statistical methods.
By integrating these methods with flexible machine learning applications, the proposed framework improves detection accuracy and reduces false positives and misclassifications.
arXiv Detail & Related papers (2024-12-11T22:12:21Z) - Self-Improvement in Language Models: The Sharpening Mechanism [70.9248553790022]
We offer a new perspective on the capabilities of self-improvement through a lens we refer to as sharpening.
Motivated by the observation that language models are often better at verifying response quality than they are at generating correct responses, we formalize self-improvement as using the model itself as a verifier during post-training.
We analyze two natural families of self-improvement algorithms based on SFT and RLHF.
arXiv Detail & Related papers (2024-12-02T20:24:17Z) - A Probabilistic Perspective on Unlearning and Alignment for Large Language Models [48.96686419141881]
We introduce the first formal probabilistic evaluation framework for Large Language Models (LLMs).
Namely, we propose novel metrics with high probability guarantees concerning the output distribution of a model.
Our metrics are application-independent and allow practitioners to make more reliable estimates about model capabilities before deployment.
arXiv Detail & Related papers (2024-10-04T15:44:23Z) - Automatic Generation of Attention Rules For Containment of Machine Learning Model Errors [1.4987559345379062]
We present several algorithms ("strategies") for determining optimal rules to separate observations.
In particular, we prefer strategies that use feature-based slicing because they are human-interpretable, model-agnostic, and require minimal supplementary inputs or knowledge.
To evaluate strategies, we introduce metrics to measure various desired qualities, such as their performance, stability, and generalizability to unseen data.
arXiv Detail & Related papers (2023-05-14T10:15:35Z) - Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z) - Sequential Attention for Feature Selection [12.89764845700709]
We propose a feature selection algorithm called Sequential Attention that achieves state-of-the-art empirical results for neural networks.
We give theoretical insights into our algorithm for linear regression by showing that an adaptation to this setting is equivalent to the classical Orthogonal Matching Pursuit (OMP) algorithm.
arXiv Detail & Related papers (2022-09-29T15:49:06Z) - Domain-Adjusted Regression or: ERM May Already Learn Features Sufficient for Out-of-Distribution Generalization [52.7137956951533]
We argue that devising simpler methods for learning predictors on existing features is a promising direction for future research.
We introduce Domain-Adjusted Regression (DARE), a convex objective for learning a linear predictor that is provably robust under a new model of distribution shift.
Under a natural model, we prove that the DARE solution is the minimax-optimal predictor for a constrained set of test distributions.
arXiv Detail & Related papers (2022-02-14T16:42:16Z) - Towards a Fairness-Aware Scoring System for Algorithmic Decision-Making [35.21763166288736]
We propose a general framework to create data-driven fairness-aware scoring systems.
We show that the proposed framework provides practitioners or policymakers great flexibility to select their desired fairness requirements.
arXiv Detail & Related papers (2021-09-21T09:46:35Z) - Towards Explainable Exploratory Landscape Analysis: Extreme Feature Selection for Classifying BBOB Functions [4.932130498861987]
We show that a surprisingly small number of features -- often fewer than four -- can suffice to achieve 98% accuracy.
We show that the classification accuracy transfers to settings in which several instances are involved in training and testing.
arXiv Detail & Related papers (2021-02-01T10:04:28Z) - VCG Mechanism Design with Unknown Agent Values under Stochastic Bandit Feedback [104.06766271716774]
We study a multi-round welfare-maximising mechanism design problem in instances where agents do not know their values.
We first define three notions of regret for the welfare, the individual utilities of each agent and that of the mechanism.
Our framework also provides flexibility to control the pricing scheme so as to trade-off between the agent and seller regrets.
arXiv Detail & Related papers (2020-04-19T18:00:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.