Optimizing the Performative Risk under Weak Convexity Assumptions
- URL: http://arxiv.org/abs/2209.00771v1
- Date: Fri, 2 Sep 2022 01:07:09 GMT
- Title: Optimizing the Performative Risk under Weak Convexity Assumptions
- Authors: Yulai Zhao
- Abstract summary: In performative prediction, a predictive model impacts the distribution that generates future data.
Prior work has identified a pair of general conditions on the loss and the mapping from model parameters to distributions that together imply convexity of the performative risk.
In this paper, we relax these assumptions without sacrificing the amenability of the performative risk minimization problem for iterative optimization methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In performative prediction, a predictive model impacts the distribution that
generates future data, a phenomenon that is ignored in classical
supervised learning. In this closed-loop setting, the natural measure of
performance, denoted the performative risk, captures the expected loss incurred
by a predictive model after deployment. The core difficulty of minimizing the
performative risk is that the data distribution itself depends on the model
parameters. This dependence is governed by the environment and not under the
control of the learner. As a consequence, even the choice of a convex loss
function can result in a highly non-convex performative risk minimization
problem. Prior work has identified a pair of general conditions on the loss and
the mapping from model parameters to distributions that implies convexity of
the performative risk. In this paper, we relax these assumptions and focus on
obtaining weaker notions of convexity, without sacrificing the amenability of
the performative risk minimization problem for iterative optimization methods.
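The closed-loop objective can be made concrete with a small numerical sketch. The example below is illustrative only and not from the paper: it assumes a Gaussian location-family distribution map D(θ) = N(εθ, 1) and a squared loss, and estimates the performative risk PR(θ) = E_{z∼D(θ)}[(z − θ)²] by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative distribution map: deploying theta shifts the data to
# D(theta) = N(EPS * theta, 1). EPS controls the performative feedback.
EPS = 0.5

def performative_risk(theta, n=100_000):
    """Monte Carlo estimate of PR(theta) = E_{z ~ D(theta)} [(z - theta)^2]."""
    z = rng.normal(EPS * theta, 1.0, size=n)
    return np.mean((z - theta) ** 2)

# In this toy case PR(theta) = (EPS*theta - theta)^2 + 1 in closed form,
# so the performative optimum is theta = 0; a grid search recovers it.
thetas = np.linspace(-2.0, 2.0, 9)
risks = [performative_risk(t) for t in thetas]
best = thetas[int(np.argmin(risks))]
print(best)  # 0.0
```

In this simple location family the performative risk happens to stay convex; the paper's point is that for general distribution maps even a convex loss gives no such guarantee, which is why conditions on the loss and the map are needed.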
Related papers
- Error Bounds of Supervised Classification from Information-Theoretic Perspective [0.0]
We explore bounds on the expected risk when using deep neural networks for supervised classification from an information theoretic perspective.
We introduce model risk and fitting error, which are derived from further decomposing the empirical risk.
arXiv Detail & Related papers (2024-06-07T01:07:35Z) - Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z) - Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization [59.758009422067]
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z) - On the Variance, Admissibility, and Stability of Empirical Risk Minimization [80.26309576810844]
Empirical Risk Minimization (ERM) with squared loss may attain minimax suboptimal error rates.
We show that under mild assumptions, the suboptimality of ERM must be due to large bias rather than variance.
We also show that our estimates imply stability of ERM, complementing the main result of Caponnetto and Rakhlin (2006) for non-Donsker classes.
arXiv Detail & Related papers (2023-05-29T15:25:48Z) - Performative Prediction with Bandit Feedback: Learning through Reparameterization [23.039885534575966]
Performative prediction is a framework for studying social prediction in which the data distribution itself changes in response to the deployment of a model.
We develop a reparameterization of the performative prediction objective as a function of the induced data distribution.
arXiv Detail & Related papers (2023-05-01T21:31:29Z) - Performative Prediction with Neural Networks [24.880495520422]
Performative prediction is a framework for learning models that influence the data they intend to predict.
Standard convergence results for finding a performatively stable classifier with the method of repeated risk minimization assume that the data distribution is Lipschitz continuous with respect to the model's parameters.
In this work, we instead assume that the data distribution is Lipschitz continuous with respect to the model's predictions, a more natural assumption for performative systems.
arXiv Detail & Related papers (2023-04-14T01:12:48Z) - Improving Generalization via Uncertainty Driven Perturbations [107.45752065285821]
We consider uncertainty-driven perturbations of the training data points.
Unlike loss-driven perturbations, uncertainty-guided perturbations do not cross the decision boundary.
We show that UDP is guaranteed to achieve the robustness margin on linear models.
arXiv Detail & Related papers (2022-02-11T16:22:08Z) - Approximate Regions of Attraction in Learning with Decision-Dependent Distributions [11.304363655760513]
We analyze repeated risk minimization via the trajectories of the gradient flows of performative risk minimization.
We provide conditions to characterize the region of attraction for the various equilibria in this setting.
We introduce the notion of performative alignment, which provides a geometric condition on the convergence of repeated risk minimization to performative risk minimizers.
arXiv Detail & Related papers (2021-06-30T18:38:08Z) - Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
arXiv Detail & Related papers (2021-06-03T09:50:13Z) - Outside the Echo Chamber: Optimizing the Performative Risk [21.62040119228266]
We identify a natural set of properties of the loss function and model-induced distribution shift under which the performative risk is convex.
We develop algorithms that leverage our structural assumptions to optimize the performative risk with better sample efficiency than generic methods for derivative-free convex optimization.
arXiv Detail & Related papers (2021-02-17T04:36:39Z) - The Risks of Invariant Risk Minimization [52.7137956951533]
Invariant Risk Minimization (IRM) is an objective based on the idea of learning deep, invariant features of data.
We present the first analysis of classification under the IRM objective--as well as its recently proposed alternatives--under a fairly natural and general model.
We show that IRM can fail catastrophically unless the test data are sufficiently similar to the training distribution--this is precisely the issue that it was intended to solve.
arXiv Detail & Related papers (2020-10-12T14:54:32Z)
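Several of the entries above analyze repeated risk minimization (RRM), where the learner repeatedly retrains on data drawn from the distribution induced by the previously deployed model. A minimal sketch, again assuming an illustrative Gaussian location map D(θ) = N(εθ, 1) with squared loss (all names and constants here are assumptions for illustration, not from any of the papers):

```python
import numpy as np

EPS = 0.5  # sensitivity of the distribution map; RRM contracts when EPS < 1

def rrm_step(theta_prev, rng, n=200_000):
    """One repeated-risk-minimization step: deploy theta_prev, sample
    z ~ D(theta_prev) = N(EPS * theta_prev, 1), then minimize the empirical
    squared loss (z - theta)^2 -- its argmin is the sample mean."""
    z = rng.normal(EPS * theta_prev, 1.0, size=n)
    return float(z.mean())

rng = np.random.default_rng(1)
theta = 5.0
for _ in range(30):
    theta = rrm_step(theta, rng)

# Each step maps theta -> EPS * theta (plus sampling noise), so the
# iterates contract toward the performatively stable point theta = 0.
print(round(theta, 3))
```

Note that RRM finds a performatively stable point, which in general need not minimize the performative risk; in this symmetric toy example the two happen to coincide at θ = 0.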
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.