How to "Improve" Prediction Using Behavior Modification
- URL: http://arxiv.org/abs/2008.12138v4
- Date: Sat, 23 Jul 2022 08:37:11 GMT
- Title: How to "Improve" Prediction Using Behavior Modification
- Authors: Galit Shmueli and Ali Tafti
- Abstract summary: Data science researchers design algorithms, models, and approaches to improve prediction.
Predictive accuracy is improved with larger and richer data.
Beyond better algorithms and data, platforms can stealthily achieve better prediction accuracy by pushing users' behaviors towards their predicted values.
Our derivation elucidates the implications of such behavior modification for data scientists, platforms, their customers, and the humans whose behavior is manipulated.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many internet platforms that collect behavioral big data use it to predict
user behavior for internal purposes and for their business customers (e.g.,
advertisers, insurers, security forces, governments, political consulting
firms) who utilize the predictions for personalization, targeting, and other
decision-making. Improving predictive accuracy is therefore extremely valuable.
Data science researchers design algorithms, models, and approaches to improve
prediction. Prediction is also improved with larger and richer data. Beyond
improving algorithms and data, platforms can stealthily achieve better
prediction accuracy by pushing users' behaviors towards their predicted values,
using behavior modification techniques, thereby demonstrating more certain
predictions. Such apparent "improved" prediction can result from employing
reinforcement learning algorithms that combine prediction and behavior
modification. This strategy is absent from the machine learning and statistics
literature. Investigating its properties requires integrating causal with
predictive notation. To this end, we incorporate Pearl's causal do(.) operator
into the predictive vocabulary. We then decompose the expected prediction error
given behavior modification, and identify the components impacting predictive
power. Our derivation elucidates the implications of such behavior modification for
data scientists, platforms, their customers, and the humans whose behavior is
manipulated. Behavior modification can make users' behavior more predictable
and even more homogeneous; yet this apparent predictability might not
generalize when business customers use predictions in practice. Outcomes pushed
towards their predictions can be at odds with customers' intentions, and
harmful to manipulated users.
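To make the mechanism concrete, here is a generic version of the decomposition the abstract describes. It is not the paper's exact derivation, only the standard conditional decomposition of squared error written with the do(.) operator, for an outcome Y realized under a platform intervention do(b) and a fixed prediction:

```latex
% Conditional on a fixed prediction \hat{y}:
\mathbb{E}\!\left[(Y_{do(b)} - \hat{y})^2 \mid \hat{y}\right]
  = \underbrace{\left(\mathbb{E}[Y_{do(b)} \mid \hat{y}] - \hat{y}\right)^2}_{\text{bias of the modified outcome}}
  + \underbrace{\mathrm{Var}\!\left(Y_{do(b)} \mid \hat{y}\right)}_{\text{outcome variance}}
```

Pushing behavior towards the prediction shrinks both terms: the conditional mean moves towards the prediction and behavior homogenizes, so measured error falls although the model never improves. The toy simulation below shows exactly this effect; the linear nudge in `modified_outcome` and all coefficients are invented for illustration and are not from the paper.

```python
"""Toy simulation: 'improving' prediction via behavior modification.
A platform predicts an outcome, then nudges the realized outcome
toward its own prediction; measured MSE shrinks even though the
predictive model never changes."""
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                               # observed feature
y_hat = 0.5 * x                                      # a mediocre prediction
natural_y = 0.8 * x + rng.normal(scale=1.0, size=n)  # unmodified behavior

def modified_outcome(natural, prediction, push):
    """Behavior under do(b): pull the outcome toward the prediction
    with strength push in [0, 1]; push=0 means no modification."""
    return (1 - push) * natural + push * prediction

for push in (0.0, 0.3, 0.7):
    y = modified_outcome(natural_y, y_hat, push)
    mse = np.mean((y - y_hat) ** 2)
    print(f"push={push:.1f}  MSE={mse:.3f}  Var(y)={np.var(y):.3f}")
# Both MSE and Var(y) fall as push grows: behavior becomes more
# predictable and more homogeneous with no real gain in model quality.
```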
Related papers
- Performative Prediction on Games and Mechanism Design [69.7933059664256]
We study a collective risk dilemma where agents decide whether to trust predictions based on past accuracy.
As predictions shape collective outcomes, social welfare arises naturally as a metric of concern.
We show how to achieve better trade-offs and use them for mechanism design.
arXiv Detail & Related papers (2024-08-09T16:03:44Z)
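A toy loop in the spirit of the collective risk dilemma above, with all dynamics (THRESHOLD, COST, LOSS, the trust update) invented for illustration rather than taken from the paper: agents heed a risk warning in proportion to the predictor's past accuracy, but a heeded warning averts the disaster and therefore scores as "wrong", so raw accuracy is a poor trust signal when predictions are performative.

```python
"""Toy performative-prediction dynamics in a collective risk dilemma."""
import numpy as np

rng = np.random.default_rng(1)
N, THRESHOLD, ROUNDS = 100, 40, 60
COST, LOSS = 1.0, 10.0

trust = 0.9              # shared trust, tracking the predictor's accuracy
welfare, hits, shots = 0.0, 0, 0

for t in range(ROUNDS):
    warning = True                            # predictor always warns of risk
    heed = rng.random(N) < trust              # agents who act on the warning
    contributors = int(heed.sum())
    disaster = contributors < THRESHOLD       # prediction shapes the outcome
    welfare -= COST * contributors + (LOSS * N if disaster else 0.0)
    shots += 1
    hits += int(warning == disaster)          # a heeded warning looks "wrong"
    trust = hits / shots
print(f"final trust={trust:.2f}, avg welfare per round={welfare / ROUNDS:.1f}")
```

Trust oscillates around the contribution threshold instead of converging, which is why social welfare, not accuracy, becomes the natural metric in this setting.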
- Best of Many in Both Worlds: Online Resource Allocation with Predictions under Unknown Arrival Model [16.466711636334587]
Online decision-makers often obtain predictions on future variables, such as arrivals, demands, and so on.
Prediction accuracy is unknown to decision-makers a priori, hence blindly following the predictions can be harmful.
We develop algorithms that utilize predictions in a manner that is robust to the unknown prediction accuracy.
arXiv Detail & Related papers (2024-02-21T04:57:32Z)
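A generic "learn how much to trust the advice" sketch, not the paper's algorithm: hedge between following the prediction and a conservative baseline with multiplicative weights, so average loss stays controlled whatever the unknown prediction accuracy turns out to be. The demand model and the T, ETA, CAPACITY constants are assumptions for the demo.

```python
"""Hedging between a prediction-following policy and a safe baseline."""
import numpy as np

rng = np.random.default_rng(2)
T, ETA, CAPACITY = 500, 0.1, 10.0

true_demand = rng.uniform(0, CAPACITY, size=T)
predictions = true_demand + rng.normal(scale=4.0, size=T)  # unknown accuracy

w = np.ones(2)                   # expert weights: [follow prediction, baseline]
total_loss = 0.0
for t in range(T):
    actions = np.array([np.clip(predictions[t], 0, CAPACITY), CAPACITY / 2])
    p = w / w.sum()
    choice = rng.choice(2, p=p)                  # randomize over the experts
    losses = np.abs(actions - true_demand[t]) / CAPACITY  # losses in [0, 1]
    total_loss += losses[choice]
    w *= np.exp(-ETA * losses)                   # multiplicative-weights update
print(f"avg loss={total_loss / T:.3f}, trust in prediction={w[0] / w.sum():.2f}")
```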
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
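An illustrative loop combining the two ideas above, not the ASPEST method itself: a nearest-centroid classifier abstains on low-margin points and spends its label budget on the least-confident samples from a shifted target domain. The synthetic data, the 0.2 acceptance threshold, and the budget of 5 rounds of 20 labels are all invented.

```python
"""Minimal active selective prediction: query the uncertain, abstain on the rest."""
import numpy as np

rng = np.random.default_rng(3)

def fit_centroids(X, y):
    return np.stack([X[y == k].mean(axis=0) for k in (0, 1)])

def confidence_and_pred(X, centroids):
    d = np.stack([np.linalg.norm(X - c, axis=1) for c in centroids])
    conf = np.abs(d[0] - d[1]) / (d[0] + d[1] + 1e-9)  # margin-style score
    return conf, d.argmin(axis=0)

# Source domain and a covariate-shifted target domain.
ys = rng.integers(0, 2, 400)
Xs = rng.normal(size=(400, 2)) + 2.0 * ys[:, None] * np.array([1.0, 0.0])
yt = rng.integers(0, 2, 600)
Xt = (rng.normal(size=(600, 2)) + 2.0 * yt[:, None] * np.array([1.0, 0.0])
      + np.array([0.0, 1.5]))

centroids = fit_centroids(Xs, ys)
labeled = np.zeros(len(Xt), dtype=bool)
for _ in range(5):                              # 5 query rounds of 20 labels
    conf, _ = confidence_and_pred(Xt, centroids)
    conf[labeled] = np.inf                      # never re-query a sample
    labeled[np.argsort(conf)[:20]] = True       # query least-confident points
    centroids = fit_centroids(np.vstack([Xs, Xt[labeled]]),
                              np.concatenate([ys, yt[labeled]]))

conf, pred = confidence_and_pred(Xt, centroids)
accept = conf > 0.2                             # abstain below the threshold
print(f"coverage={accept.mean():.2f}, "
      f"selective accuracy={(pred[accept] == yt[accept]).mean():.2f}")
```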
- Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning.
arXiv Detail & Related papers (2023-01-23T18:59:28Z)
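A simplified sketch of the framework's simplest case, the mean: combine the average model prediction on a large unlabeled set with a rectifier (the model's average error on a small labeled set), and form a normal-approximation interval that pools both sources of variance. The data-generating numbers below are invented; the paper also covers quantiles and regression coefficients.

```python
"""Prediction-powered estimate of a mean, with a 95% confidence interval."""
import numpy as np

rng = np.random.default_rng(4)
n, N, theta = 200, 10_000, 3.0        # small labeled set, large unlabeled set

# True outcomes and an imperfect model f with a systematic +0.5 bias.
Y_lab = rng.normal(theta, 2.0, size=n)
f_lab = Y_lab + 0.5 + rng.normal(0, 1.0, size=n)     # f on labeled points
f_unl = rng.normal(theta, 2.0, size=N) + 0.5 + rng.normal(0, 1.0, size=N)

# Point estimate: mean prediction plus the rectifier.
rectifier = np.mean(Y_lab - f_lab)
theta_pp = f_unl.mean() + rectifier

# Normal-approximation CI combining both sources of variance.
se = np.sqrt(f_unl.var(ddof=1) / N + (Y_lab - f_lab).var(ddof=1) / n)
print(f"theta_pp={theta_pp:.3f}, "
      f"95% CI=({theta_pp - 1.96 * se:.3f}, {theta_pp + 1.96 * se:.3f})")
```

The rectifier removes the model's bias, so the interval is valid even though most of the data is unlabeled; the small labeled set only needs to pin down the average error.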
- What Should I Know? Using Meta-gradient Descent for Predictive Feature Discovery in a Single Stream of Experience [63.75363908696257]
Computational reinforcement learning seeks to construct an agent's perception of the world through predictions of future sensations.
An open challenge in this line of work is determining from the infinitely many predictions that the agent could possibly make which predictions might best support decision-making.
We introduce a meta-gradient descent process by which an agent learns 1) what predictions to make, 2) the estimates for its chosen predictions, and 3) how to use those estimates to generate policies that maximize future reward.
arXiv Detail & Related papers (2022-06-13T21:31:06Z)
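A minimal online meta-gradient sketch, far simpler than the paper's agent and invented for illustration: the agent's "prediction" is an exponential trace of an observation stream with a learnable decay gamma, and gamma is adapted by differentiating the downstream reward-prediction loss through the trace. In other words, the agent learns *what* to predict, not just the prediction itself.

```python
"""Learning what to predict: meta-gradient on a trace's decay parameter."""
import numpy as np

rng = np.random.default_rng(8)
T = 20_000
ALPHA_W, ALPHA_G = 0.05, 0.002     # fast readout step, slow meta step

# Hidden process: reward tracks a slow trace of observations (decay 0.9),
# so the useful prediction is a long-horizon trace.
s = 0.0
gamma, w = 0.1, 0.5                # learnable trace decay (meta) and readout
p, dp = 0.0, 0.0                   # agent's trace and d p / d gamma

for t in range(T):
    o = rng.normal()
    s = 0.9 * s + 0.1 * o
    r = s + 0.1 * rng.normal()     # reward the agent tries to predict
    dp = p + gamma * dp - o        # d/dgamma of (gamma*p + (1-gamma)*o)
    p = gamma * p + (1 - gamma) * o
    err = r - w * p
    w += ALPHA_W * err * p              # fast: learn the readout weight
    gamma += ALPHA_G * err * w * dp     # slow: meta-gradient on the decay
    gamma = float(np.clip(gamma, 0.01, 0.99))
# gamma should drift from 0.1 toward the useful long horizon (~0.9).
print(f"learned gamma={gamma:.2f}, readout w={w:.2f}")
```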
- Learning to Predict Trustworthiness with Steep Slope Loss [69.40817968905495]
We study the problem of predicting trustworthiness on real-world large-scale datasets.
We observe that the trustworthiness predictors trained with prior-art loss functions are prone to view both correct predictions and incorrect predictions to be trustworthy.
We propose a novel steep slope loss to separate the features w.r.t. correct predictions from the ones w.r.t. incorrect predictions by two slide-like curves that oppose each other.
arXiv Detail & Related papers (2021-09-30T19:19:09Z)
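The exact steep slope loss is not reproduced here; the sketch below is an illustrative stand-in that captures the stated idea: two steep, opposing soft curves penalize low trustworthiness scores for correct predictions and high scores for incorrect ones, so the two groups cannot both sit in the "trustworthy" region. The steepness k and the sample data are assumptions.

```python
"""Illustrative separation loss with two steep, opposing curves."""
import numpy as np

def steep_separation_loss(scores, oracle_correct, k=8.0):
    """scores: trustworthiness scores; oracle_correct: bool mask saying
    whether the base classifier was right. k controls curve steepness."""
    s_pos = scores[oracle_correct]          # should score high
    s_neg = scores[~oracle_correct]         # should score low
    up = np.logaddexp(0.0, -k * s_pos)      # softplus(-k*s): steep if low
    down = np.logaddexp(0.0, k * s_neg)     # softplus(+k*s): steep if high
    return up.mean() + down.mean()

# Tiny usage example with made-up scores:
rng = np.random.default_rng(5)
scores = rng.normal(0, 1, 100)
correct = rng.random(100) < 0.7
print(f"loss = {steep_separation_loss(scores, correct):.3f}")
```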
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
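A split-conformal sketch of where the privacy requirement enters: the only calibration-data-dependent quantity is a score quantile, which the paper's framework computes privately. The `noisy_quantile` stub below is a naive placeholder, not a differentially private mechanism (a real implementation would use something like the exponential mechanism), and the class probabilities are made up.

```python
"""Conformal prediction set with a stubbed-in 'private' quantile."""
import numpy as np

rng = np.random.default_rng(6)
ALPHA, EPS = 0.1, 1.0

# Calibration scores: 1 - model probability of the true class (assumed given).
cal_scores = rng.beta(2, 5, size=1000)

def noisy_quantile(scores, q, eps):
    """Placeholder: Laplace noise on the rank. NOT a formal DP guarantee."""
    rank = int(np.ceil(q * (len(scores) + 1)))
    rank += int(rng.laplace(0, 1 / eps))
    rank = int(np.clip(rank, 1, len(scores)))
    return np.sort(scores)[rank - 1]

tau = noisy_quantile(cal_scores, 1 - ALPHA, EPS)

# Prediction set for a new example: keep every class whose score <= tau.
probs = np.array([0.55, 0.30, 0.10, 0.05])   # hypothetical class probabilities
pred_set = np.where(1 - probs <= tau)[0]
print(f"tau={tau:.3f}, prediction set: {pred_set}")
```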
- Competing AI: How does competition feedback affect machine learning? [14.350250426090893]
We show that competition causes predictors to specialize for specific sub-populations at the cost of worse performance over the general population.
We show that having too few or too many competing predictors in a market can hurt the overall prediction quality.
arXiv Detail & Related papers (2020-09-15T00:13:32Z)
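A toy market with invented dynamics, not the paper's model: users route to whichever predictor was more accurate for them, each predictor refits only on the users it attracts, and the feedback loop drives both models to specialize on one sub-population each.

```python
"""Competition feedback loop: two predictors specialize on sub-populations."""
import numpy as np

rng = np.random.default_rng(7)
n, T = 1000, 30

# Two sub-populations with opposite true signals.
group = rng.integers(0, 2, n)
x = rng.normal(size=n)
y = np.where(group == 0, 2.0 * x, -2.0 * x) + rng.normal(0, 0.5, n)

coef = np.array([0.1, -0.1])                  # each predictor: y_hat = coef*x
assign = rng.integers(0, 2, n)                # initial user -> predictor map

for t in range(T):
    # Users switch to the predictor with the smaller error on them.
    err = np.abs(coef[None, :] * x[:, None] - y[:, None])   # shape (n, 2)
    assign = err.argmin(axis=1)
    # Each predictor refits by least squares on its own users only.
    for m in (0, 1):
        mask = assign == m
        if mask.any():
            coef[m] = (x[mask] @ y[mask]) / (x[mask] @ x[mask])

for m in (0, 1):
    mask = assign == m
    print(f"model {m}: coef={coef[m]:+.2f}, share of group-0 users: "
          f"{(group[mask] == 0).mean():.2f}")
```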
- Measuring Forecasting Skill from Text [15.795144936579627]
We explore connections between the language people use to describe their predictions and their forecasting skill.
We present a number of linguistic metrics which are computed over text associated with people's predictions about the future.
We demonstrate that it is possible to accurately predict forecasting skill using a model that is based solely on language.
arXiv Detail & Related papers (2020-06-12T19:04:10Z)
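A toy feature extractor in the spirit of this entry: compute simple linguistic metrics over forecast text, such as rates of hedging and certainty cues, which could feed a downstream skill predictor. The word lists and the sample forecast are invented and far cruder than the metrics in the paper.

```python
"""Toy linguistic features over the text of a forecast."""
import re

HEDGES = {"might", "may", "possibly", "perhaps", "roughly", "around"}
CERTAIN = {"definitely", "certainly", "clearly", "always", "never"}

def text_features(text):
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {
        "hedge_rate": sum(tok in HEDGES for tok in tokens) / n,
        "certainty_rate": sum(tok in CERTAIN for tok in tokens) / n,
        "avg_word_len": sum(map(len, tokens)) / n,
        "n_tokens": n,
    }

forecast = ("The incumbent will possibly win, perhaps with around "
            "55 percent of the vote.")
print(text_features(forecast))
```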
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.