Performative Prediction: Past and Future
- URL: http://arxiv.org/abs/2310.16608v1
- Date: Wed, 25 Oct 2023 13:02:45 GMT
- Title: Performative Prediction: Past and Future
- Authors: Moritz Hardt and Celestine Mendler-Dünner
- Abstract summary: Self-fulfilling and self-negating predictions are examples of performativity.
In machine learning applications, performativity often surfaces as distribution shift.
A consequence of performative prediction is a natural equilibrium notion that gives rise to new optimization challenges.
- Score: 20.177988776870517
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predictions in the social world generally influence the target of prediction,
a phenomenon known as performativity. Self-fulfilling and self-negating
predictions are examples of performativity. Of fundamental importance to
economics, finance, and the social sciences, the notion has been absent from
the development of machine learning. In machine learning applications,
performativity often surfaces as distribution shift. A predictive model
deployed on a digital platform, for example, influences consumption and thereby
changes the data-generating distribution. We survey the recently founded area
of performative prediction that provides a definition and conceptual framework
to study performativity in machine learning. A consequence of performative
prediction is a natural equilibrium notion that gives rise to new optimization
challenges. Another consequence is a distinction between learning and steering,
two mechanisms at play in performative prediction. The notion of steering is in
turn intimately related to questions of power in digital markets. We review the
notion of performative power, which quantifies how much a platform can steer
participants through its predictions. We end with a discussion
of future directions, such as the role that performativity plays in contesting
algorithmic systems.
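The retraining dynamics behind the survey's equilibrium notion can be illustrated with a toy sketch (all quantities below are hypothetical, not taken from the paper): a deployed prediction shifts the data-generating distribution, and repeatedly refitting on the induced data converges to a performatively stable point.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, eps = 1.0, 0.5   # base mean; strength of performative feedback (|eps| < 1)

def sample(theta, n=100_000):
    # The deployed prediction theta shifts the data-generating distribution.
    return rng.normal(mu + eps * theta, 1.0, size=n)

theta = 0.0
for _ in range(50):
    # Repeated risk minimization: refit on data induced by the current model.
    theta = sample(theta).mean()   # argmin of the squared loss is the mean

# theta is a fixed point of theta -> mu + eps * theta, i.e. near mu / (1 - eps)
print(theta)
```

With `eps = 0.5` the retraining iterates contract toward `mu / (1 - eps) = 2.0`, the performatively stable point of this toy model; for `|eps| >= 1` the same loop would diverge, which hints at why performativity creates new optimization challenges.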
Related papers
- The Relative Value of Prediction in Algorithmic Decision Making [0.0]
We ask: What is the relative value of prediction in algorithmic decision making?
We identify simple, sharp conditions determining the relative value of prediction vis-a-vis expanding access.
We illustrate how these theoretical insights may be used to guide the design of algorithmic decision making systems in practice.
arXiv Detail & Related papers (2023-12-13T20:52:45Z) - Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data, without a given causal model, by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning.
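The mean-estimation case of this framework can be sketched as follows (synthetic data and a plain normal-approximation interval; a simplified illustration under assumed names, not the authors' full method): predictions on a large unlabeled set are debiased by a "rectifier" measured on a small labeled set.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: a small labeled set and a large unlabeled set,
# with model predictions f(.) available for both. True mean is 5.0.
n, N = 200, 20_000
y_lab = rng.normal(5.0, 1.0, n)                               # gold labels
f_lab = y_lab + rng.normal(0.3, 0.5, n)                       # biased predictions, labeled data
f_unlab = rng.normal(5.0, 1.0, N) + rng.normal(0.3, 0.5, N)   # predictions, unlabeled data

# Prediction-powered point estimate: average prediction on the big unlabeled
# set, corrected by the rectifier (average prediction error on labeled data).
rectifier = (y_lab - f_lab).mean()
theta_pp = f_unlab.mean() + rectifier

# Normal-approximation confidence interval combining both variance sources.
se = np.sqrt(f_unlab.var(ddof=1) / N + (y_lab - f_lab).var(ddof=1) / n)
ci = (theta_pp - 1.96 * se, theta_pp + 1.96 * se)
print(theta_pp, ci)
```

The naive mean of the predictions alone would inherit the model's bias (here an assumed +0.3 shift); the rectifier removes it while the large unlabeled set keeps the interval narrow.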
arXiv Detail & Related papers (2023-01-23T18:59:28Z) - Rationalizing Predictions by Adversarial Information Calibration [65.19407304154177]
We train two models jointly: one is a typical neural model that solves the task at hand in an accurate but black-box manner, and the other is a selector-predictor model that additionally produces a rationale for its prediction.
We use an adversarial technique to calibrate the information extracted by the two models such that the difference between them is an indicator of the missed or over-selected features.
arXiv Detail & Related papers (2023-01-15T03:13:09Z) - Stochastic Future Prediction in Real World Driving Scenarios [0.0]
A future prediction method should cover the full range of possibilities to be robust.
In autonomous driving, covering multiple modes in the prediction part is crucially important to make safety-critical decisions.
We propose solutions by modeling motion explicitly and learning temporal dynamics in a latent space.
arXiv Detail & Related papers (2022-09-21T22:34:31Z) - What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z) - Patterns, predictions, and actions: A story about machine learning [59.32629659530159]
This graduate textbook on machine learning tells a story of how patterns in data support predictions and consequential actions.
Self-contained introductions to causality, the practice of causal inference, sequential decision making, and reinforcement learning equip the reader with concepts and tools to reason about actions and their consequences.
arXiv Detail & Related papers (2021-02-10T03:42:03Z) - Explainable Artificial Intelligence: How Subsets of the Training Data Affect a Prediction [2.3204178451683264]
We propose a novel methodology which we call Shapley values for training data subset importance.
We show how the proposed explanations can be used to reveal biasedness in models and erroneous training data.
We argue that the explanations enable us to perceive more of the inner workings of the algorithms, and illustrate how models producing similar predictions can be based on very different parts of the training data.
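As a toy illustration of the idea (hypothetical data and a deliberately simple nearest-centroid model, not the paper's actual experiments), exact Shapley values over three training-data groups single out a mislabeled group:

```python
import itertools
import math
import numpy as np

rng = np.random.default_rng(2)

# Toy data: two Gaussian classes; training data split into 3 groups,
# the last of which is mislabeled (a hypothetical "erroneous" subset).
def make(n, flip=False):
    X = np.r_[rng.normal(-1, 1, (n, 2)), rng.normal(1, 1, (n, 2))]
    y = np.r_[np.zeros(n), np.ones(n)]
    return X, (1 - y) if flip else y

groups = [make(30), make(30), make(30, flip=True)]
X_test, y_test = make(200)

def value(subset):
    # Value of a coalition of groups: test accuracy of a nearest-centroid
    # classifier trained on their pooled data (0.5 baseline for the empty set).
    if not subset:
        return 0.5
    X = np.vstack([groups[i][0] for i in subset])
    y = np.concatenate([groups[i][1] for i in subset])
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X_test - c1, axis=1)
            < np.linalg.norm(X_test - c0, axis=1)).astype(float)
    return (pred == y_test).mean()

# Exact Shapley values over the 3 groups (small enough to enumerate).
k = 3
phi = np.zeros(k)
for perm in itertools.permutations(range(k)):
    seen = []
    for i in perm:
        phi[i] += value(seen + [i]) - value(seen)
        seen.append(i)
phi /= math.factorial(k)
print(phi)  # the mislabeled group should receive the lowest value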
arXiv Detail & Related papers (2020-12-07T12:15:47Z) - A Review on Deep Learning Techniques for Video Prediction [3.203688549673373]
The ability to predict, anticipate and reason about future outcomes is a key component of intelligent decision-making systems.
Deep learning-based video prediction emerged as a promising research direction.
arXiv Detail & Related papers (2020-04-10T19:58:44Z) - Performative Prediction [31.876692592395777]
We develop a framework for performative prediction bringing together concepts from statistics, game theory, and causality.
A conceptual novelty is an equilibrium notion we call performative stability.
Our main results are necessary and sufficient conditions for the convergence of retraining to a performatively stable point of nearly minimal loss.
arXiv Detail & Related papers (2020-02-16T20:29:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.