Behavioral Machine Learning? Computer Predictions of Corporate Earnings also Overreact
- URL: http://arxiv.org/abs/2303.16158v2
- Date: Fri, 14 Mar 2025 02:54:43 GMT
- Title: Behavioral Machine Learning? Computer Predictions of Corporate Earnings also Overreact
- Authors: Murray Z. Frank, Jing Gao, Keer Yang
- Abstract summary: We show that leading methods systematically overreact to news. Analysts with machine learning training overreact much less than do traditional analysts. Our findings suggest that AI tools reduce but do not eliminate behavioral biases in financial markets.
- Score: 5.92470368943469
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning algorithms are known to outperform human analysts in predicting corporate earnings, leading to their rapid adoption. However, we show that leading methods (XGBoost, neural nets, ChatGPT) systematically overreact to news. The overreaction is primarily due to biases in the training data and we show that it cannot be eliminated without compromising accuracy. Analysts with machine learning training overreact much less than do traditional analysts. We provide a model showing that there is a tradeoff between predictive power and rational behavior. Our findings suggest that AI tools reduce but do not eliminate behavioral biases in financial markets.
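As an illustrative sketch (not the paper's specification), overreaction in forecasts is commonly tested with a Coibion-Gorodnichenko-style regression of forecast errors on forecast revisions, where a negative slope indicates overreaction. All numbers below are simulated:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated example: forecasts move 1.5x with the news signal,
# i.e. they overreact relative to the rational coefficient of 1.
n = 5000
signal = rng.normal(size=n)                  # news about next-period earnings
actual = signal + rng.normal(scale=0.5, size=n)
forecast = 1.5 * signal                      # overreacting forecast (zero prior)

revision = forecast                          # revision from the zero prior
error = actual - forecast                    # realized forecast error

# Regress error on revision; slope < 0 means forecasts overshoot and
# errors predictably reverse the revision (overreaction).
slope, intercept = np.polyfit(revision, error, 1)
print(slope < 0)
```

With these parameters the theoretical slope is -1/3: the forecast moves half again too much, so a third of each revision predictably reverses.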
Related papers
- Price predictability in limit order book with deep learning model [0.0]
This study explores the prediction of high-frequency price changes using deep learning models.
We found that an inadequately defined target price process may render predictions meaningless by incorporating past information.
arXiv Detail & Related papers (2024-09-21T14:40:13Z)
- Beyond Trend Following: Deep Learning for Market Trend Prediction [49.89480853499917]
We advocate for the use of Artificial Intelligence and Machine Learning techniques to predict future market trends.
These predictions, when done properly, can improve the performance of asset managers by increasing returns and reducing drawdowns.
arXiv Detail & Related papers (2024-06-10T11:42:30Z)
- A Systematic Bias of Machine Learning Regression Models and Its Correction: an Application to Imaging-based Brain Age Prediction [2.4894581801802227]
Machine learning models for continuous outcomes often yield systematically biased predictions.
Predictions for large-valued outcomes tend to be negatively biased (underestimating actual values), while those for small-valued outcomes are positively biased (overestimating actual values).
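The bias pattern described here is the familiar regression-to-the-mean effect; a minimal simulated sketch (not the paper's correction method):

```python
import numpy as np

rng = np.random.default_rng(1)

# With a noisy predictor, least squares shrinks predictions toward the
# mean: slope = var(y) / (var(y) + var(noise)) < 1 here.
n = 20000
y = rng.normal(size=n)                    # true (centered) outcome
x = y + rng.normal(size=n)                # noisy feature

slope, intercept = np.polyfit(x, y, 1)
pred = slope * x + intercept

bias_large = np.mean(pred[y > 1] - y[y > 1])    # negative: underestimates
bias_small = np.mean(pred[y < -1] - y[y < -1])  # positive: overestimates
print(bias_large < 0 < bias_small)
```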
arXiv Detail & Related papers (2024-05-24T21:34:16Z)
- Ecosystem-level Analysis of Deployed Machine Learning Reveals Homogeneous Outcomes [72.13373216644021]
We study the societal impact of machine learning by considering the collection of models that are deployed in a given context.
We find deployed machine learning is prone to systemic failure, meaning some users are exclusively misclassified by all models available.
These examples demonstrate ecosystem-level analysis has unique strengths for characterizing the societal impact of machine learning.
arXiv Detail & Related papers (2023-07-12T01:11:52Z)
- Does Machine Learning Amplify Pricing Errors in the Housing Market? -- The Economics of Machine Learning Feedback Loops [2.5699371511994777]
We develop an analytical model of machine learning feedback loops in the context of the housing market.
We show that feedback loops lead machine learning algorithms to become overconfident in their own accuracy.
We then identify conditions under which the economic payoff for home sellers at the feedback-loop equilibrium is worse than without machine learning.
arXiv Detail & Related papers (2023-02-18T23:20:57Z)
- Prediction-Powered Inference [68.97619568620709]
Prediction-powered inference is a framework for performing valid statistical inference when an experimental dataset is supplemented with predictions from a machine-learning system.
The framework yields simple algorithms for computing provably valid confidence intervals for quantities such as means, quantiles, and linear and logistic regression coefficients.
Prediction-powered inference could enable researchers to draw valid and more data-efficient conclusions using machine learning.
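A minimal sketch of the prediction-powered point estimate of a mean (simplified, without the confidence-interval machinery; the toy data and the 0.3 model bias are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def ppi_mean(preds_unlabeled, preds_labeled, labels):
    """Prediction-powered mean estimate: large-sample model predictions
    plus a rectifier for model bias estimated on the labeled set."""
    rectifier = np.mean(labels - preds_labeled)
    return np.mean(preds_unlabeled) + rectifier

# Toy data: the model over-predicts by 0.3 on average.
true_mean = 5.0
labels = rng.normal(true_mean, 1.0, size=200)               # small labeled set
preds_labeled = labels + 0.3 + rng.normal(0.0, 0.2, size=200)
preds_unlabeled = rng.normal(true_mean + 0.3, 1.0, size=50000)

estimate = ppi_mean(preds_unlabeled, preds_labeled, labels)
print(abs(estimate - true_mean) < 0.1)   # rectifier removes the model's bias
```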
arXiv Detail & Related papers (2023-01-23T18:59:28Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- Learning Predictions for Algorithms with Predictions [49.341241064279714]
We introduce a general design approach for algorithms that learn predictors.
We apply techniques from online learning to learn against adversarial instances, tune robustness-consistency trade-offs, and obtain new statistical guarantees.
We demonstrate the effectiveness of our approach at deriving learning algorithms by analyzing methods for bipartite matching, page migration, ski-rental, and job scheduling.
arXiv Detail & Related papers (2022-02-18T17:25:43Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- Learning to Predict Trustworthiness with Steep Slope Loss [69.40817968905495]
We study the problem of predicting trustworthiness on real-world large-scale datasets.
We observe that trustworthiness predictors trained with prior-art loss functions are prone to viewing both correct and incorrect predictions as trustworthy.
We propose a novel steep slope loss to separate the features w.r.t. correct predictions from the ones w.r.t. incorrect predictions by two slide-like curves that oppose each other.
arXiv Detail & Related papers (2021-09-30T19:19:09Z)
- Adversarial Training is Not Ready for Robot Learning [55.493354071227174]
Adversarial training is an effective method to train deep learning models that are resilient to norm-bounded perturbations.
We show theoretically and experimentally that neural controllers obtained via adversarial training are subject to three types of defects.
Our results suggest that adversarial training is not yet ready for robot learning.
arXiv Detail & Related papers (2021-03-15T07:51:31Z)
- Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics [6.946103498518291]
We evaluate 8.2 million algorithmic predictions of math performance from approximately 400 AI engineers.
We find that biased predictions are mostly caused by biased training data.
One-third of the benefit of better training data comes through a novel economic mechanism.
arXiv Detail & Related papers (2020-12-04T04:12:33Z)
- Taking Over the Stock Market: Adversarial Perturbations Against Algorithmic Traders [47.32228513808444]
We present a realistic scenario in which an attacker influences algorithmic trading systems by using adversarial learning techniques.
We show that when added to the input stream, our perturbation can fool the trading algorithms at future unseen data points.
arXiv Detail & Related papers (2020-10-19T06:28:05Z)
- Competing AI: How does competition feedback affect machine learning? [14.350250426090893]
We show that competition causes predictors to specialize for specific sub-populations at the cost of worse performance over the general population.
We show that having too few or too many competing predictors in a market can hurt the overall prediction quality.
arXiv Detail & Related papers (2020-09-15T00:13:32Z)
- Capturing dynamics of post-earnings-announcement drift using genetic algorithm-optimised supervised learning [3.42658286826597]
Post-Earnings-Announcement Drift (PEAD) is one of the most studied stock market anomalies.
We use a machine learning based approach instead, and aim to capture the PEAD dynamics using data from a large group of stocks.
arXiv Detail & Related papers (2020-09-07T13:27:06Z)
- How to "Improve" Prediction Using Behavior Modification [0.0]
Data science researchers design algorithms, models, and approaches to improve prediction.
Predictive accuracy is improved with larger and richer data.
However, platforms can stealthily achieve better prediction accuracy by pushing users' behaviors toward their predicted values.
Our derivation elucidates implications of such behavior modification to data scientists, platforms, their customers, and the humans whose behavior is manipulated.
arXiv Detail & Related papers (2020-08-26T12:39:35Z)
- Vulnerability Under Adversarial Machine Learning: Bias or Variance? [77.30759061082085]
We investigate the effect of adversarial machine learning on the bias and variance of a trained deep neural network.
Our analysis sheds light on why deep neural networks perform poorly under adversarial perturbation.
We introduce a new adversarial machine learning algorithm with lower computational complexity than well-known adversarial machine learning strategies.
arXiv Detail & Related papers (2020-08-01T00:58:54Z)
- Malicious Experts versus the multiplicative weights algorithm in online prediction [85.62472761361107]
We consider a prediction problem with two experts and a forecaster.
We assume that one of the experts is honest and makes a correct prediction with probability μ at each round.
The other one is malicious, who knows true outcomes at each round and makes predictions in order to maximize the loss of the forecaster.
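A minimal simulated sketch of this two-expert setting with a multiplicative-weights forecaster (illustrative parameters and a naive always-wrong adversary, not the paper's optimal malicious strategy):

```python
import numpy as np

rng = np.random.default_rng(3)

eta = 0.5                       # learning rate (illustrative choice)
weights = np.ones(2)            # one weight per expert
mu = 0.8                        # honest expert's accuracy
mistakes = 0
rounds = 2000

for _ in range(rounds):
    outcome = int(rng.integers(0, 2))
    honest = outcome if rng.random() < mu else 1 - outcome
    malicious = 1 - outcome     # knows the outcome, predicts the opposite
    preds = np.array([honest, malicious])
    p = weights / weights.sum()
    forecast = int(p @ preds >= 0.5)              # weighted-majority forecast
    mistakes += int(forecast != outcome)
    weights *= np.exp(-eta * (preds != outcome))  # multiplicative update

# The malicious expert's weight collapses, so the forecaster's mistake
# rate approaches the honest expert's rate of 1 - mu.
print(mistakes / rounds < 0.3)
```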
arXiv Detail & Related papers (2020-03-18T20:12:08Z)
- Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.