The Strategic Perceptron
- URL: http://arxiv.org/abs/2008.01710v2
- Date: Mon, 22 Mar 2021 21:50:17 GMT
- Title: The Strategic Perceptron
- Authors: Saba Ahmadi, Hedyeh Beyhaghi, Avrim Blum, Keziah Naggita
- Abstract summary: In the presence of strategic agents that desire to be classified as positive, the Perceptron algorithm may not observe the true positions of agents.
We show examples with the predictor oscillating between two solutions forever, making an unbounded number of mistakes.
Our main contribution is providing a modified Perceptron-style algorithm which makes a bounded number of mistakes in the presence of strategic agents.
- Score: 11.078814063722803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The classical Perceptron algorithm provides a simple and elegant procedure
for learning a linear classifier. In each step, the algorithm observes the
sample's position and label and updates the current predictor accordingly if it
makes a mistake. However, in the presence of strategic agents that desire to be
classified as positive and that can modify their position by a limited amount,
the classifier may not observe an agent's true position but rather the position
where the agent pretends to be. Unlike the original
setting with perfect knowledge of positions, in this situation the Perceptron
algorithm fails to achieve its guarantees, and we illustrate examples with the
predictor oscillating between two solutions forever, making an unbounded number
of mistakes even though a perfect large-margin linear classifier exists. Our
main contribution is providing a modified Perceptron-style algorithm which
makes a bounded number of mistakes in the presence of strategic agents with both
$\ell_2$ and weighted $\ell_1$ manipulation costs. In our baseline model,
knowledge of the manipulation costs (i.e., the extent to which an agent may
manipulate) is assumed. In our most general model, we relax this assumption and
provide an algorithm which learns and refines both the classifier and its cost
estimates to achieve good mistake bounds even when manipulation costs are
unknown.
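To make the setting concrete, the following is a minimal sketch of a Perceptron-style learner with a manipulation-aware threshold. The shifted-threshold rule here is an illustration of the general idea described in the abstract, not the authors' exact algorithm: since an agent with $\ell_2$ manipulation budget $b$ can raise $w \cdot x$ by at most $b\|w\|$, classifying as positive only when $w \cdot x > b\|w\|$ prevents a true negative from manipulating its way across the boundary. With `budget=0` this reduces to the classical Perceptron.

```python
import numpy as np

def strategic_perceptron(X, y, budget, epochs=100):
    """Perceptron-style sketch under l2 manipulation (illustrative only).

    Classical update: on a mistake at (x, label), set w <- w + label * x.
    Strategic twist: shift the positive-classification threshold by
    budget * ||w||, the most an agent can raise w.x with an l2 move of
    length at most `budget`.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        mistakes = 0
        for x, label in zip(X, y):
            threshold = budget * np.linalg.norm(w)
            pred = 1 if w @ x > threshold else -1
            if pred != label:
                w = w + label * x   # classical Perceptron update
                mistakes += 1
        if mistakes == 0:
            break
    return w
```

The `epochs` cap and the exact stopping rule are assumptions for the sketch; the paper's analysis concerns mistake bounds in an online stream rather than epochs over a fixed dataset.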
Related papers
- Mistake, Manipulation and Margin Guarantees in Online Strategic Classification [0.0]
We consider an online strategic classification problem where each arriving agent can manipulate their true feature vector to obtain a positive predicted label.
We prove convergence, finite mistake and finite manipulation guarantees for a variety of agent cost structures.
Our numerical study on real and synthetic data demonstrates that the new algorithms outperform previous ones in terms of margin, number of manipulations, and number of mistakes.
arXiv Detail & Related papers (2024-03-27T01:05:45Z) - Learnability Gaps of Strategic Classification [68.726857356532]
We focus on addressing a fundamental question: the learnability gaps between strategic classification and standard learning.
We provide nearly tight sample complexity and regret bounds, offering significant improvements over prior results.
Notably, our algorithm in this setting is of independent interest and can be applied to other problems such as multi-label learning.
arXiv Detail & Related papers (2024-02-29T16:09:19Z) - Regularized Linear Regression for Binary Classification [20.710343135282116]
Regularized linear regression is a promising approach for binary classification problems in which the training set has noisy labels.
We show that for large enough regularization strength, the optimal weights concentrate around two values of opposite sign.
We observe that in many cases the corresponding "compression" of each weight to a single bit leads to very little loss in performance.
arXiv Detail & Related papers (2023-11-03T23:18:21Z) - When Does Confidence-Based Cascade Deferral Suffice? [69.28314307469381]
Cascades are a classical strategy to enable inference cost to vary adaptively across samples.
A deferral rule determines whether to invoke the next classifier in the sequence, or to terminate prediction.
Despite being oblivious to the structure of the cascade, confidence-based deferral often works remarkably well in practice.
arXiv Detail & Related papers (2023-07-06T04:13:57Z) - Fundamental Bounds on Online Strategic Classification [13.442155854812528]
We show that no deterministic algorithm can achieve a mistake bound $o(\Delta)$ in the strategic setting.
We also extend this to the agnostic setting and obtain an algorithm with a $\Delta$ multiplicative regret.
We design randomized algorithms that achieve sublinear regret bounds against both oblivious and adaptive adversaries.
arXiv Detail & Related papers (2023-02-23T22:39:43Z) - Is the Performance of My Deep Network Too Good to Be True? A Direct
Approach to Estimating the Bayes Error in Binary Classification [86.32752788233913]
In classification problems, the Bayes error can be used as a criterion to evaluate classifiers with state-of-the-art performance.
We propose a simple and direct Bayes error estimator, where we just take the mean of the labels that show uncertainty of the classes.
Our flexible approach enables us to perform Bayes error estimation even for weakly supervised data.
arXiv Detail & Related papers (2022-02-01T13:22:26Z) - Dealing With Misspecification In Fixed-Confidence Linear Top-m
Identification [0.0]
We study the problem of the identification of $m$ arms with the largest means under a fixed error rate $\delta$ (fixed-confidence Top-m identification).
This problem is motivated by practical applications, especially in medicine and recommendation systems.
arXiv Detail & Related papers (2021-11-02T10:27:17Z) - ReLU Regression with Massart Noise [52.10842036932169]
We study the fundamental problem of ReLU regression, where the goal is to fit Rectified Linear Units (ReLUs) to data.
We focus on ReLU regression in the Massart noise model, a natural and well-studied semi-random noise model.
We develop an efficient algorithm that achieves exact parameter recovery in this model.
arXiv Detail & Related papers (2021-09-10T02:13:22Z) - Calibrating Predictions to Decisions: A Novel Approach to Multi-Class
Calibration [118.26862029820447]
We introduce a new notion, "decision calibration," that requires the predicted distribution and the true distribution to be indistinguishable to a set of downstream decision-makers.
Decision calibration improves decision-making on skin lesions and ImageNet classification with modern neural networks.
arXiv Detail & Related papers (2021-07-12T20:17:28Z) - Autocalibration and Tweedie-dominance for Insurance Pricing with Machine
Learning [0.0]
It is shown that minimizing deviance involves a trade-off between the integral of weighted differences of lower partial moments and the bias measured on a specific scale.
This new method to correct for bias adds an extra local GLM step to the analysis.
The convex order appears to be the natural tool to compare competing models.
arXiv Detail & Related papers (2021-03-05T12:40:30Z) - Uncertainty Sets for Image Classifiers using Conformal Prediction [112.54626392838163]
We present an algorithm that modifies any classifier to output a predictive set containing the true label with a user-specified probability, such as 90%.
The algorithm is simple and fast like Platt scaling, but provides a formal finite-sample coverage guarantee for every model and dataset.
Our method modifies an existing conformal prediction algorithm to give more stable predictive sets by regularizing the small scores of unlikely classes after Platt scaling.
arXiv Detail & Related papers (2020-09-29T17:58:04Z)
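The predictive-set idea in the conformal prediction entry above can be sketched with plain split conformal prediction. This is a generic sketch, not the paper's regularized (stabilized) variant: scores on a held-out calibration set are $1 - p(\text{true label})$, their finite-sample-corrected $(1-\alpha)$-quantile gives a threshold, and each test point's set contains every label scoring at or below it, which yields the stated coverage guarantee.

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split-conformal predictive sets (generic sketch).

    cal_probs:  (n, k) predicted class probabilities on calibration data
    cal_labels: (n,) true calibration labels
    test_probs: (m, k) predicted class probabilities on test data
    Returns a list of label-index arrays, one predictive set per test point.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample corrected quantile level ceil((n+1)(1-alpha))/n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # Include every label whose score is within the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

With a well-calibrated model the sets shrink to a single confident label; with an uncertain model they grow, which is the adaptivity the coverage guarantee trades on.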
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.