Competing AI: How does competition feedback affect machine learning?
- URL: http://arxiv.org/abs/2009.06797v4
- Date: Thu, 25 Mar 2021 04:04:22 GMT
- Title: Competing AI: How does competition feedback affect machine learning?
- Authors: Antonio Ginart, Eva Zhang, Yongchan Kwon, James Zou
- Abstract summary: We show that competition causes predictors to specialize for specific sub-populations at the cost of worse performance over the general population.
We show that having too few or too many competing predictors in a market can hurt the overall prediction quality.
- Score: 14.350250426090893
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper studies how competition affects machine learning (ML) predictors.
As ML becomes more ubiquitous, it is often deployed by companies to compete
over customers. For example, digital platforms like Yelp use ML to predict user
preference and make recommendations. A service that is more often queried by
users, perhaps because it more accurately anticipates user preferences, is also
more likely to obtain additional user data (e.g. in the form of a Yelp review).
Thus, competing predictors cause feedback loops whereby a predictor's
performance impacts what training data it receives and biases its predictions
over time. We introduce a flexible model of competing ML predictors that
enables both rapid experimentation and theoretical tractability. We show with
empirical and mathematical analysis that competition causes predictors to
specialize for specific sub-populations at the cost of worse performance over
the general population. We further analyze the impact of predictor
specialization on the overall prediction quality experienced by users. We show
that having too few or too many competing predictors in a market can hurt the
overall prediction quality. Our theory is complemented by experiments on
several real datasets using popular learning algorithms, such as neural
networks and nearest neighbor methods.
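To make the feedback loop concrete, here is a minimal simulation sketch of the mechanism the abstract describes; it is not the authors' code or their formal model. Three nearest-neighbor predictors compete for users drawn from two synthetic sub-populations, each new user's labeled example goes only to a predictor that anticipated that user correctly, and the per-competitor training sets drift accordingly. The data-generating rule, the number of competitors, and all parameter choices are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

def sample_user():
    """One user from one of two sub-populations; the labeling rule flips between groups."""
    group = int(rng.integers(2))
    x = rng.normal(loc=3.0 * group, scale=1.0, size=2)
    y = int(x[0] > x[1]) if group == 0 else int(x[0] < x[1])
    return group, x, y

def fit(points):
    """Fit a small nearest-neighbor predictor on a competitor's accumulated data."""
    X = np.array([x for x, _ in points])
    y = np.array([y for _, y in points])
    return KNeighborsClassifier(n_neighbors=3).fit(X, y)

n_predictors = 3
# Seed each competitor with a few labeled points so it can make initial predictions.
datasets = [[sample_user()[1:] for _ in range(10)] for _ in range(n_predictors)]
models = [fit(d) for d in datasets]

for t in range(1000):
    _, x, y = sample_user()
    preds = [int(m.predict(x[None])[0]) for m in models]
    served_well = [i for i, p in enumerate(preds) if p == y]
    # Feedback loop: only the predictor the user ends up querying receives the new
    # labeled example (here: a predictor that anticipated the user correctly).
    winner = rng.choice(served_well) if served_well else rng.integers(n_predictors)
    datasets[winner].append((x, y))
    if (t + 1) % 100 == 0:  # periodic retraining keeps the sketch cheap
        models = [fit(d) for d in datasets]

# Specialization shows up as competitors that are accurate on the sub-population
# they keep winning but weaker on the general population.
test = [sample_user() for _ in range(500)]
for i, m in enumerate(models):
    overall = np.mean([m.predict(x[None])[0] == y for _, x, y in test])
    print(f"predictor {i}: overall accuracy {overall:.2f}, examples received {len(datasets[i])}")
```

Varying `n_predictors` in such a sketch gives a rough, qualitative feel for the paper's point that having too few or too many competitors can hurt the prediction quality users experience.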
Related papers
- New User Event Prediction Through the Lens of Causal Inference [20.676353189313737]
We propose a novel discrete event prediction framework for new users.
Our method offers an unbiased prediction for new users without needing to know their categories.
We demonstrate the superior performance of the proposed framework with a numerical simulation study and two real-world applications.
arXiv Detail & Related papers (2024-07-08T05:35:54Z)
- Performative Time-Series Forecasting [71.18553214204978]
We formalize performative time-series forecasting (PeTS) from a machine-learning perspective.
We propose a novel approach, Feature Performative-Shifting (FPS), which leverages the concept of delayed response to anticipate distribution shifts.
We conduct comprehensive experiments using multiple time-series models on COVID-19 and traffic forecasting tasks.
arXiv Detail & Related papers (2023-10-09T18:34:29Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- Improved Bayes Risk Can Yield Reduced Social Welfare Under Competition [99.7047087527422]
In this work, we demonstrate that competition can fundamentally alter the behavior of machine learning scaling trends.
We find many settings where improving data representation quality decreases the overall predictive accuracy across users.
At a conceptual level, our work suggests that favorable scaling trends for individual model-providers need not translate to downstream improvements in social welfare.
arXiv Detail & Related papers (2023-06-26T13:06:34Z)
- Incorporating Experts' Judgment into Machine Learning Models [2.5363839239628843]
In some cases, domain experts might have a judgment about the expected outcome that might conflict with the prediction of machine learning models.
We present a novel framework that aims at leveraging experts' judgment to mitigate the conflict.
arXiv Detail & Related papers (2023-04-24T07:32:49Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Competition over data: how does data purchase affect users? [15.644822986029377]
We study what happens when the competing predictors can acquire additional labeled data to improve their prediction quality.
We show that this phenomenon naturally arises due to a trade-off whereby competition pushes each predictor to specialize in a subset of the population.
arXiv Detail & Related papers (2022-01-26T06:44:55Z)
- Test-time Collective Prediction [73.74982509510961]
In many machine learning settings, multiple parties want to jointly make predictions on future test points.
Agents wish to benefit from the collective expertise of the full set of agents, but may not be willing to release their data or model parameters.
We explore a decentralized mechanism to make collective predictions at test time, leveraging each agent's pre-trained model.
arXiv Detail & Related papers (2021-06-22T18:29:58Z)
- How to "Improve" Prediction Using Behavior Modification [0.0]
Data science researchers design algorithms, models, and approaches to improve prediction.
Predictive accuracy is improved with larger and richer data.
Alternatively, platforms can stealthily achieve better prediction accuracy by pushing users' behaviors towards their predicted values.
Our derivation elucidates implications of such behavior modification to data scientists, platforms, their customers, and the humans whose behavior is manipulated.
arXiv Detail & Related papers (2020-08-26T12:39:35Z)