Performative Prediction in a Stateful World
- URL: http://arxiv.org/abs/2011.03885v3
- Date: Wed, 23 Feb 2022 00:25:18 GMT
- Title: Performative Prediction in a Stateful World
- Authors: Gavin Brown, Shlomi Hod, Iden Kalemaj
- Abstract summary: Deployed machine learning models make predictions that interact with and influence the world.
It is an ongoing challenge to understand the influence of such predictions as well as design tools so as to control that influence.
We propose a theoretical framework where the response of a target population to the deployed classifier is modeled as a function of the classifier and the current state.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deployed supervised machine learning models make predictions that interact
with and influence the world. This phenomenon is called performative prediction
by Perdomo et al. (ICML 2020). It is an ongoing challenge to understand the
influence of such predictions as well as design tools so as to control that
influence. We propose a theoretical framework where the response of a target
population to the deployed classifier is modeled as a function of the
classifier and the current state (distribution) of the population. We show
necessary and sufficient conditions for convergence to an equilibrium of two
retraining algorithms, repeated risk minimization and a lazier variant.
Furthermore, convergence is near an optimal classifier. We thus generalize
results of Perdomo et al., whose performativity framework does not assume any
dependence on the state of the target population. A particular phenomenon
captured by our model is that of distinct groups that acquire information and
resources at different rates to be able to respond to the latest deployed
classifier. We study this phenomenon theoretically and empirically.
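The retraining dynamics described in the abstract can be illustrated with a toy one-dimensional simulation (a hypothetical sketch, not the paper's actual model or notation): the population state is a scalar that responds to both the deployed classifier and its own previous value, and repeated risk minimization redeploys the risk minimizer on the current distribution at each round. All function names and constants below are illustrative assumptions.

```python
import numpy as np

def respond(state, theta, gamma=0.5, base=1.0, eps=0.3):
    # Stateful population response: a convex combination of the CURRENT
    # state and a classifier-dependent target. The next distribution thus
    # depends on both the deployed model (theta) and the current state,
    # which is the key departure from state-free performativity.
    return (1 - gamma) * state + gamma * (base + eps * theta)

def repeated_risk_minimization(rounds=30, s0=5.0):
    state, theta = s0, 0.0
    for _ in range(rounds):
        # Under squared loss with a constant predictor, the risk minimizer
        # on the current distribution is simply its mean, i.e. the state.
        theta = state
        # The population then reacts to the newly deployed classifier.
        state = respond(state, theta)
    return theta, state

theta_star, s_star = repeated_risk_minimization()
```

Because the update `state -> 0.65 * state + 0.5` is a contraction, the iterates converge geometrically to the fixed point `base / (1 - eps) = 1/0.7`, mirroring (in miniature) the kind of equilibrium convergence the paper establishes under its sufficient conditions.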
Related papers
- Advancing Counterfactual Inference through Nonlinear Quantile Regression [77.28323341329461]
We propose a framework for efficient and effective counterfactual inference implemented with neural networks.
The proposed approach enhances the capacity to generalize estimated counterfactual outcomes to unseen data.
Empirical results conducted on multiple datasets offer compelling support for our theoretical assertions.
arXiv Detail & Related papers (2023-06-09T08:30:51Z)
- Generative Causal Representation Learning for Out-of-Distribution Motion Forecasting [13.99348653165494]
We propose Generative Causal Learning Representation to facilitate knowledge transfer under distribution shifts.
While we evaluate the effectiveness of our proposed method in human trajectory prediction models, GCRL can be applied to other domains as well.
arXiv Detail & Related papers (2023-02-17T00:30:44Z)
- Confidence and Dispersity Speak: Characterising Prediction Matrix for Unsupervised Accuracy Estimation [51.809741427975105]
This work aims to assess how well a model performs under distribution shifts without using labels.
We use the nuclear norm that has been shown to be effective in characterizing both properties.
We show that the nuclear norm yields more accurate and robust accuracy estimates than existing methods.
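The nuclear-norm score summarized above can be sketched as follows (a minimal illustration, not the paper's implementation; the function names and toy inputs are assumptions): stack per-example softmax outputs into a prediction matrix and take the sum of its singular values, which grows when predictions are both confident (near one-hot rows) and dispersed across classes.

```python
import numpy as np

def softmax(z):
    # Numerically stable row-wise softmax.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nuclear_norm_score(logits):
    # Rows of the prediction matrix are per-example class probabilities.
    probs = softmax(logits)
    # Nuclear norm = sum of singular values; it is large when rows are
    # confident (near one-hot) AND spread over many distinct classes.
    return np.linalg.norm(probs, ord="nuc")

rng = np.random.default_rng(0)
sharp_logits = 10.0 * rng.normal(size=(100, 10))  # confident, dispersed
flat_logits = np.zeros((100, 10))                 # uniform, unsure
sharp_score = nuclear_norm_score(sharp_logits)
flat_score = nuclear_norm_score(flat_logits)
```

For the uniform predictions the matrix has rank one and nuclear norm `sqrt(10)`, while the confident, class-dispersed predictions score roughly an order of magnitude higher, matching the intuition that higher scores track better-behaved (and typically more accurate) models.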
arXiv Detail & Related papers (2023-02-02T13:30:48Z)
- Causal Forecasting: Generalization Bounds for Autoregressive Models [19.407531303870087]
We introduce the framework of *causal learning theory* for forecasting.
We obtain a characterization of the difference between statistical and causal risks.
This is the first work that provides theoretical guarantees for causal generalization in the time-series setting.
arXiv Detail & Related papers (2021-11-18T17:56:20Z)
- Test-time Collective Prediction [73.74982509510961]
Multiple parties in machine learning want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- Predicting Deep Neural Network Generalization with Perturbation Response Curves [58.8755389068888]
We propose a new framework for evaluating the generalization capabilities of trained networks.
Specifically, we introduce two new measures for accurately predicting generalization gaps.
We attain better predictive scores than the current state-of-the-art measures on a majority of tasks in the Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition.
arXiv Detail & Related papers (2021-06-09T01:37:36Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Estimating Generalization under Distribution Shifts via Domain-Invariant Representations [75.74928159249225]
We use a set of domain-invariant predictors as a proxy for the unknown, true target labels.
The error of the resulting risk estimate depends on the target risk of the proxy model.
arXiv Detail & Related papers (2020-07-06T17:21:24Z)
- Performative Prediction [31.876692592395777]
We develop a framework for performative prediction bringing together concepts from statistics, game theory, and causality.
A conceptual novelty is an equilibrium notion we call performative stability.
Our main results are necessary and sufficient conditions for the convergence of retraining to a performatively stable point of nearly minimal loss.
arXiv Detail & Related papers (2020-02-16T20:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.