Performative Prediction: Past and Future
- URL: http://arxiv.org/abs/2310.16608v2
- Date: Tue, 20 May 2025 09:38:03 GMT
- Title: Performative Prediction: Past and Future
- Authors: Moritz Hardt, Celestine Mendler-Dünner
- Abstract summary: We discuss the recently founded area of performative prediction that provides a conceptual framework to study performativity in machine learning. A key element of performative prediction is a natural equilibrium notion that gives rise to new optimization challenges. What emerges is a distinction between learning and steering, two mechanisms at play in performative prediction.
- Score: 26.725583537576462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predictions in the social world generally influence the target of prediction, a phenomenon known as performativity. Self-fulfilling and self-negating predictions are examples of performativity. Of fundamental importance to economics, finance, and the social sciences, the notion has been absent from the development of machine learning that builds on the static perspective of pattern recognition. In machine learning applications, however, performativity often surfaces as distribution shift. A predictive model deployed on a digital platform, for example, influences behavior and thereby changes the data-generating distribution. We discuss the recently founded area of performative prediction that provides a definition and conceptual framework to study performativity in machine learning. A key element of performative prediction is a natural equilibrium notion that gives rise to new optimization challenges. What emerges is a distinction between learning and steering, two mechanisms at play in performative prediction. Steering is in turn intimately related to questions of power in digital markets. The notion of performative power, which we review, answers the question of how much a platform can steer participants through its predictions. We end on a discussion of future directions, such as the role that performativity plays in contesting algorithmic systems.
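The equilibrium notion at the heart of performative prediction can be illustrated with a minimal retraining loop. The sketch below is a hypothetical one-dimensional mean-estimation example (the model `mu + eps * theta` and all numbers are illustrative, not from the paper): the deployed prediction shifts the data-generating distribution, and repeated risk minimization converges to a performatively stable point when the distribution's reaction is not too strong.

```python
import numpy as np

rng = np.random.default_rng(0)

def retrain(theta, mu=1.0, eps=0.5, n=100_000):
    """One round of retraining: fit on fresh data drawn under the deployed theta.

    The distribution reacts to the model: z ~ N(mu + eps * theta, 1).
    Under squared loss the minimizer is the sample mean.
    """
    z = rng.normal(mu + eps * theta, 1.0, size=n)
    return z.mean()

theta = 0.0
for _ in range(30):
    theta = retrain(theta)  # repeated risk minimization

# A performatively stable point solves theta = mu + eps * theta,
# i.e. theta* = mu / (1 - eps) = 2.0 here; retraining converges to it for |eps| < 1.
print(theta)
```

Note the contraction condition `|eps| < 1`: each retraining step moves the model a factor `eps` closer to the fixed point, mirroring the kind of sensitivity condition under which retraining provably converges in this literature.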
Related papers
- Statistical Inference under Performativity [12.935979571180464]
We establish a central limit theorem for estimation and inference under performativity. We investigate prediction-powered inference (PPI) under performativity, based on a small labeled dataset and a much larger dataset of machine-learning predictions. To the best of our knowledge, this paper is the first to establish statistical inference under performativity.
arXiv Detail & Related papers (2025-05-24T03:59:49Z)
- Revisiting the Predictability of Performative, Social Events [7.170441928038049]
We show that one can always efficiently predict social events accurately, regardless of how predictions influence data.
While achievable, we also show that these predictions are often undesirable, highlighting the limitations of previous desiderata.
arXiv Detail & Related papers (2025-03-12T22:19:33Z)
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z)
- The Relative Value of Prediction in Algorithmic Decision Making [0.0]
We ask: What is the relative value of prediction in algorithmic decision making?
We identify simple, sharp conditions determining the relative value of prediction vis-a-vis expanding access.
We illustrate how these theoretical insights may be used to guide the design of algorithmic decision making systems in practice.
arXiv Detail & Related papers (2023-12-13T20:52:45Z)
- Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Explaining Hate Speech Classification with Model Agnostic Methods [0.9990687944474738]
The research goal of this paper is to bridge the gap between hate speech prediction and the explanations generated by the system to support its decision.
This is achieved by first predicting the classification of a text and then applying a post-hoc, model-agnostic, surrogate interpretability approach.
arXiv Detail & Related papers (2023-05-30T19:52:56Z)
- Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning.
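For the simplest target, a population mean, the prediction-powered point estimate combines predictions on a large unlabeled set with a debiasing "rectifier" measured on the small labeled set. The sketch below uses synthetic data and a hypothetical biased predictor `f` (all names and numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Goal: estimate E[Y]. We have a small labeled set, a large unlabeled set,
# and an imperfect pretrained predictor f (here deliberately biased).
f = lambda x: 1.2 * x + 0.3

x_lab = rng.normal(2.0, 1.0, 500)
y_lab = x_lab + rng.normal(0, 0.5, 500)        # true relation: Y = X, so E[Y] = 2.0
x_unlab = rng.normal(2.0, 1.0, 50_000)

# Classical estimate: labeled data only.
classical = y_lab.mean()

# Prediction-powered estimate: mean prediction on the big unlabeled set,
# corrected by the rectifier (average prediction error on the labeled set).
rectifier = (y_lab - f(x_lab)).mean()
ppi = f(x_unlab).mean() + rectifier

print(classical, ppi)  # both near E[Y] = 2.0; naive f(x_unlab).mean() would be biased
```

The rectifier is what keeps the estimate valid: without it, averaging `f` over the unlabeled set would inherit the predictor's bias, while with it the bias cancels and the large unlabeled sample reduces variance.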
arXiv Detail & Related papers (2023-01-23T18:59:28Z)
- Rationalizing Predictions by Adversarial Information Calibration [65.19407304154177]
We train two models jointly: one is a typical neural model that solves the task at hand in an accurate but black-box manner, and the other is a selector-predictor model that additionally produces a rationale for its prediction.
We use an adversarial technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.
arXiv Detail & Related papers (2023-01-15T03:13:09Z)
- Stochastic Future Prediction in Real World Driving Scenarios [0.0]
A future prediction method should cover the full range of possible outcomes to be robust.
In autonomous driving, covering multiple modes in the prediction part is crucially important to make safety-critical decisions.
We propose solutions that model motion explicitly and learn the temporal dynamics in a latent space.
arXiv Detail & Related papers (2022-09-21T22:34:31Z) - What Should I Know? Using Meta-gradient Descent for Predictive Feature
Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z) - A-ACT: Action Anticipation through Cycle Transformations [89.83027919085289]
We take a step back to analyze how the human capability to anticipate the future can be transferred to machine learning algorithms.
A recent study on human psychology explains that, in anticipating an occurrence, the human brain counts on both systems.
In this work, we study the impact of each system for the task of action anticipation and introduce a paradigm to integrate them in a learning framework.
arXiv Detail & Related papers (2022-04-02T21:50:45Z) - Patterns, predictions, and actions: A story about machine learning [59.32629659530159]
This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions.
Self-contained introductions to causality, the practice of causal inference, sequential decision making, and reinforcement learning equip the reader with concepts and tools to reason about actions and their consequences.
arXiv Detail & Related papers (2021-02-10T03:42:03Z) - Explainable Artificial Intelligence: How Subsets of the Training Data
Affect a Prediction [2.3204178451683264]
We propose a novel methodology which we call Shapley values for training data subset importance.
We show how the proposed explanations can be used to reveal biasedness in models and erroneous training data.
We argue that the explanations enable us to perceive more of the inner workings of the algorithms, and illustrate how models producing similar predictions can be based on very different parts of the training data.
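The paper assigns Shapley values to training-data subsets; as a generic illustration of the underlying idea (a Monte Carlo data-Shapley estimate at the level of individual training points, not the authors' exact procedure), one can average each point's marginal contribution to model utility over random orderings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny regression task: how much does each training point contribute to test performance?
X = rng.normal(size=(8, 1))
y = 2.0 * X[:, 0] + rng.normal(0, 0.1, 8)
X_test = rng.normal(size=(50, 1))
y_test = 2.0 * X_test[:, 0]

def utility(idx):
    """Negative test MSE of a least-squares fit on training subset idx (0 if empty)."""
    if len(idx) == 0:
        return 0.0
    w = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]
    return -np.mean((X_test @ w - y_test) ** 2)

# Monte Carlo Shapley: average marginal contribution over random permutations.
n, rounds = len(X), 200
shap = np.zeros(n)
for _ in range(rounds):
    perm = rng.permutation(n)
    prev, chosen = utility([]), []
    for i in perm:
        chosen.append(i)
        cur = utility(chosen)
        shap[i] += (cur - prev) / rounds
        prev = cur

print(shap.round(3))  # by efficiency, the values sum to the utility of the full set
```

Points whose removal hurts the fit (e.g. data in sparsely covered regions) receive large values, while redundant or mislabeled points receive small or negative ones, which is how such scores can surface biased or erroneous training data.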
arXiv Detail & Related papers (2020-12-07T12:15:47Z) - A Review on Deep Learning Techniques for Video Prediction [3.203688549673373]
The ability to predict, anticipate and reason about future outcomes is a key component of intelligent decision-making systems.
Deep learning-based video prediction emerged as a promising research direction.
arXiv Detail & Related papers (2020-04-10T19:58:44Z) - Performative Prediction [31.876692592395777]
We develop a framework for performative prediction bringing together concepts from statistics, game theory, and causality.
A conceptual novelty is an equilibrium notion we call performative stability.
Our main results are necessary and sufficient conditions for the convergence of retraining to a performatively stable point of nearly minimal loss.
arXiv Detail & Related papers (2020-02-16T20:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.