Behavioral Machine Learning? Computer Predictions of Corporate Earnings also Overreact
- URL: http://arxiv.org/abs/2303.16158v1
- Date: Sat, 25 Mar 2023 03:06:43 GMT
- Title: Behavioral Machine Learning? Computer Predictions of Corporate Earnings also Overreact
- Authors: Murray Z. Frank, Jing Gao, Keer Yang
- Abstract summary: We study the predictions of corporate earnings from several algorithms, notably linear regressions and a popular algorithm called Gradient Boosted Regression Trees (GBRT).
On average, GBRT outperformed both linear regressions and human stock analysts, but it still overreacted to news and did not satisfy rational expectations as normally defined.
Human stock analysts who have been trained in machine learning methods overreact less than traditionally trained analysts.
- Score: 9.566303741482468
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: There is considerable evidence that machine learning algorithms have better
predictive abilities than humans in various financial settings. But, the
literature has not tested whether these algorithmic predictions are more
rational than human predictions. We study the predictions of corporate earnings
from several algorithms, notably linear regressions and a popular algorithm
called Gradient Boosted Regression Trees (GBRT). On average, GBRT outperformed
both linear regressions and human stock analysts, but it still overreacted to
news and did not satisfy rational expectations as normally defined. By reducing
the learning rate, the magnitude of overreaction can be dampened, but this comes
at the cost of poorer out-of-sample prediction accuracy. Human stock analysts
who have been trained in machine learning methods overreact less than
traditionally trained analysts. Additionally, stock analyst predictions reflect
information not otherwise available to machine algorithms.
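The learning-rate trade-off described in the abstract can be sketched with scikit-learn's `GradientBoostingRegressor` on synthetic data. The data, features, and learning-rate values below are illustrative assumptions, not the paper's actual earnings panel:

```python
# Illustrative sketch: GBRT out-of-sample accuracy at two learning rates.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                      # stand-in predictors
y = X @ np.array([0.5, 0.3, 0.1, 0.0, 0.0]) + rng.normal(scale=0.5, size=n)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = {}
for lr in (0.01, 0.1):                           # small vs. standard learning rate
    model = GradientBoostingRegressor(learning_rate=lr, n_estimators=200,
                                      random_state=0)
    model.fit(X_tr, y_tr)
    scores[lr] = model.score(X_te, y_te)         # out-of-sample R^2
    print(f"learning_rate={lr}: R^2 = {scores[lr]:.3f}")
```

In the paper's finding, a smaller learning rate tames overreaction to news but costs out-of-sample accuracy; this sketch only shows how the accuracy side of that trade-off would be measured.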
Related papers
- Price predictability in limit order book with deep learning model [0.0]
This study explores the prediction of high-frequency price changes using deep learning models.
We found that an inadequately defined target price process may render predictions meaningless by incorporating past information.
arXiv Detail & Related papers (2024-09-21T14:40:13Z)
- A Systematic Bias of Machine Learning Regression Models and Its Correction: an Application to Imaging-based Brain Age Prediction [2.4894581801802227]
Machine learning models for continuous outcomes often yield systematically biased predictions.
Predictions for large-valued outcomes tend to be negatively biased (underestimating actual values)
Those for small-valued outcomes are positively biased (overestimating actual values)
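The direction of this bias follows from shrinkage toward the mean. A tiny simulation (an illustrative setup assumed here, not the paper's brain-age pipeline) makes it visible:

```python
# Regression predictions shrink toward the mean, so large outcomes are
# under-predicted and small outcomes over-predicted.
import numpy as np

rng = np.random.default_rng(2)
y = rng.normal(size=5000)                      # true continuous outcome
x = y + rng.normal(size=5000)                  # noisy feature
beta = (x @ y) / (x @ x)                       # OLS slope through the origin (~0.5)
pred = beta * x                                # shrunken predictions

bias_high = (pred[y > 1] - y[y > 1]).mean()    # negative: underestimates large y
bias_low = (pred[y < -1] - y[y < -1]).mean()   # positive: overestimates small y
print(bias_high, bias_low)
```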
arXiv Detail & Related papers (2024-05-24T21:34:16Z)
- Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning.
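For the mean, the framework's core estimator can be sketched in a few lines. The numbers are synthetic and this is a simplified reading of the prediction-powered mean, not the paper's full procedure:

```python
# Prediction-powered mean: use model predictions on a large unlabeled set,
# corrected by the prediction bias measured on a small labeled set.
import numpy as np

rng = np.random.default_rng(1)
y_lab = rng.normal(loc=2.0, size=100)                  # small gold-labeled sample
f_lab = y_lab + rng.normal(scale=0.3, size=100)        # model predictions on it
f_unlab = (2.0 + rng.normal(scale=1.0, size=10_000)
           + rng.normal(scale=0.3, size=10_000))       # predictions on unlabeled data

rectifier = f_lab.mean() - y_lab.mean()                # estimated prediction bias
theta_pp = f_unlab.mean() - rectifier                  # bias-corrected mean estimate
print(theta_pp)
```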
arXiv Detail & Related papers (2023-01-23T18:59:28Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have the desired properties, admit a natural error measure, and yield algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
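As a concrete instance, ski-rental with an untrusted prediction admits a simple rule. The scheme below is the classic consistency-robustness construction; the parameter names and exact rule are assumptions for illustration, not necessarily this paper's algorithm:

```python
import math

def ski_rental_cost(x, b, y, lam):
    """Cost of a learning-augmented ski-rental rule.

    x: true number of ski days; b: purchase price (renting costs 1/day);
    y: predicted number of days; lam in (0, 1]: trust in the prediction
    (smaller lam follows the prediction more aggressively).
    """
    # Buy early if the prediction says the season is long, late otherwise.
    buy_day = math.ceil(lam * b) if y >= b else math.ceil(b / lam)
    return x if x < buy_day else (buy_day - 1) + b

# Good prediction of a long season: near-optimal (offline optimum is min(x, b) = 10).
print(ski_rental_cost(100, 10, 100, 0.5))   # 14
# Prediction of a short season when the season really is short: just rent.
print(ski_rental_cost(3, 10, 2, 0.5))       # 3
```

Varying `lam` trades consistency (cost when the prediction is right) against robustness (worst-case cost when it is wrong).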
arXiv Detail & Related papers (2022-02-18T17:25:43Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- Learning to Predict Trustworthiness with Steep Slope Loss [69.40817968905495]
We study the problem of predicting trustworthiness on real-world large-scale datasets.
We observe that trustworthiness predictors trained with prior-art loss functions are prone to view both correct and incorrect predictions as trustworthy.
We propose a novel steep slope loss to separate the features w.r.t. correct predictions from the ones w.r.t. incorrect predictions by two slide-like curves that oppose each other.
arXiv Detail & Related papers (2021-09-30T19:19:09Z)
- Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics [6.946103498518291]
We evaluate 8.2 million algorithmic predictions of math performance from $\approx$400 AI engineers.
We find that biased predictions are mostly caused by biased training data.
One-third of the benefit of better training data comes through a novel economic mechanism.
arXiv Detail & Related papers (2020-12-04T04:12:33Z)
- Competing AI: How does competition feedback affect machine learning? [14.350250426090893]
We show that competition causes predictors to specialize for specific sub-populations at the cost of worse performance over the general population.
We show that having too few or too many competing predictors in a market can hurt the overall prediction quality.
arXiv Detail & Related papers (2020-09-15T00:13:32Z)
- How to "Improve" Prediction Using Behavior Modification [0.0]
Data science researchers design algorithms, models, and approaches to improve prediction.
Predictive accuracy is improved with larger and richer data.
Platforms can stealthily achieve better prediction accuracy by pushing users' behaviors towards their predicted values.
Our derivation elucidates implications of such behavior modification to data scientists, platforms, their customers, and the humans whose behavior is manipulated.
arXiv Detail & Related papers (2020-08-26T12:39:35Z)
- Malicious Experts versus the multiplicative weights algorithm in online prediction [85.62472761361107]
We consider a prediction problem with two experts and a forecaster.
We assume that one of the experts is honest and makes a correct prediction with probability $\mu$ at each round.
The other is malicious: it knows the true outcome at each round and makes predictions so as to maximize the forecaster's loss.
arXiv Detail & Related papers (2020-03-18T20:12:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.