Integrating LLM-Generated Views into Mean-Variance Optimization Using the Black-Litterman Model
- URL: http://arxiv.org/abs/2504.14345v1
- Date: Sat, 19 Apr 2025 16:26:14 GMT
- Title: Integrating LLM-Generated Views into Mean-Variance Optimization Using the Black-Litterman Model
- Authors: Youngbin Lee, Yejin Kim, Suin Kim, Yongjae Lee
- Abstract summary: This study explores the integration of views generated by large language models (LLMs) into portfolio optimization using the Black-Litterman framework. Our method leverages LLMs to estimate expected stock returns from historical prices and company metadata, incorporating uncertainty through the variance in predictions. Empirical results suggest that different LLMs exhibit varying levels of predictive optimism and confidence stability, which impact portfolio performance.
- Score: 27.512468160410588
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Portfolio optimization faces challenges due to the sensitivity of traditional mean-variance models to estimation errors in their inputs. The Black-Litterman model mitigates this by integrating investor views, but defining these views remains difficult. This study explores the integration of views generated by large language models (LLMs) into portfolio optimization using the Black-Litterman framework. Our method leverages LLMs to estimate expected stock returns from historical prices and company metadata, incorporating uncertainty through the variance in predictions. We conduct a backtest of the LLM-optimized portfolios from June 2024 to February 2025, rebalancing biweekly using the previous two weeks of price data. As baselines, we compare against the S&P 500, an equal-weighted portfolio, and a traditional mean-variance optimized portfolio constructed using the same set of stocks. Empirical results suggest that different LLMs exhibit varying levels of predictive optimism and confidence stability, which impact portfolio performance. The source code and data are available at https://github.com/youngandbin/LLM-MVO-BLM.
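The pipeline described in the abstract can be pictured with a minimal Black-Litterman sketch. The snippet below is an illustrative reconstruction, not the authors' released code (see the linked repository for that): it assumes one absolute view per stock, with the mean of repeated LLM return predictions as the view vector and their variance as the view-uncertainty matrix Omega; the function name, the toy covariance matrix, and the parameter values for delta and tau are hypothetical placeholders.

```python
import numpy as np

def black_litterman_with_llm_views(Sigma, w_mkt, view_means, view_vars,
                                    delta=2.5, tau=0.05):
    """Combine market-equilibrium returns with LLM-generated views.

    Sketch assumptions (not the authors' exact implementation):
      - Each view is an absolute expected-return forecast for one stock,
        so the pick matrix P is the identity.
      - view_means: mean of repeated LLM return predictions per stock.
      - view_vars:  variance of those predictions, used as view uncertainty.
    """
    n = len(w_mkt)
    P = np.eye(n)                           # absolute views, one per asset
    Omega = np.diag(view_vars)              # uncertainty from LLM prediction variance
    pi = delta * Sigma @ w_mkt              # implied equilibrium returns

    # Black-Litterman posterior expected returns
    inv_tau_Sigma = np.linalg.inv(tau * Sigma)
    inv_Omega = np.linalg.inv(Omega)
    A = inv_tau_Sigma + P.T @ inv_Omega @ P
    b = inv_tau_Sigma @ pi + P.T @ inv_Omega @ view_means
    mu_bl = np.linalg.solve(A, b)

    # Unconstrained mean-variance weights, normalized to sum to one
    w = np.linalg.solve(delta * Sigma, mu_bl)
    return mu_bl, w / w.sum()

# Toy usage with three hypothetical stocks
Sigma = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
w_mkt = np.array([0.5, 0.3, 0.2])
view_means = np.array([0.02, 0.01, 0.03])    # LLM-estimated biweekly returns
view_vars = np.array([0.0004, 0.0009, 0.0016])
mu_bl, w = black_litterman_with_llm_views(Sigma, w_mkt, view_means, view_vars)
print(np.round(w, 3))
```

Deriving Omega from the dispersion of repeated LLM samples means an LLM with unstable predictions pulls the posterior less strongly toward its views, which matches the abstract's observation that confidence stability affects portfolio performance.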
Related papers
- Preference Leakage: A Contamination Problem in LLM-as-a-judge [69.96778498636071]
Large Language Models (LLMs) as judges and LLM-based data synthesis have emerged as two fundamental LLM-driven data annotation methods.
In this work, we expose preference leakage, a contamination problem in LLM-as-a-judge caused by the relatedness between the synthetic data generators and LLM-based evaluators.
arXiv Detail & Related papers (2025-02-03T17:13:03Z) - Dynamic Uncertainty Ranking: Enhancing Retrieval-Augmented In-Context Learning for Long-Tail Knowledge in LLMs [50.29035873837]
Large language models (LLMs) can learn vast amounts of knowledge from diverse domains during pre-training.
Long-tail knowledge from specialized domains is often scarce and underrepresented, rarely appearing in the models' memorization.
We propose a reinforcement learning-based dynamic uncertainty ranking method for ICL that accounts for the varying impact of each retrieved sample on LLM predictions.
arXiv Detail & Related papers (2024-10-31T03:42:17Z) - Margin Matching Preference Optimization: Enhanced Model Alignment with Granular Feedback [64.67540769692074]
Large language models (LLMs) fine-tuned with alignment techniques, such as reinforcement learning from human feedback, have been instrumental in developing some of the most capable AI systems to date.
We introduce an approach called Margin Matching Preference Optimization (MMPO), which incorporates relative quality margins into optimization, leading to improved LLM policies and reward models.
Experiments with both human and AI feedback data demonstrate that MMPO consistently outperforms baseline methods, often by a substantial margin, on popular benchmarks including MT-bench and RewardBench.
arXiv Detail & Related papers (2024-10-04T04:56:11Z) - Social Debiasing for Fair Multi-modal LLMs [55.8071045346024]
Multi-modal Large Language Models (MLLMs) have advanced significantly, offering powerful vision-language understanding capabilities.
However, these models often inherit severe social biases from their training datasets, leading to unfair predictions based on attributes like race and gender.
This paper addresses the issue of social biases in MLLMs by i) introducing a comprehensive Counterfactual dataset with Multiple Social Concepts (CMSC) and ii) proposing an Anti-Stereotype Debiasing strategy (ASD).
arXiv Detail & Related papers (2024-08-13T02:08:32Z) - On Learning to Summarize with Large Language Models as References [101.79795027550959]
Summaries generated by large language models (LLMs) are favored by human annotators over the original reference summaries in commonly used summarization datasets.
We study an LLM-as-reference learning setting for smaller text summarization models to investigate whether their performance can be substantially improved.
arXiv Detail & Related papers (2023-05-23T16:56:04Z) - Optimizing Stock Option Forecasting with the Assembly of Machine Learning Models and Improved Trading Strategies [9.553857741758742]
This paper introduces key aspects of applying Machine Learning (ML) models, improved trading strategies, and the Quasi-Reversibility Method (QRM) to optimize stock option forecasting and trading results.
arXiv Detail & Related papers (2022-11-29T04:01:16Z) - Robust Portfolio Design and Stock Price Prediction Using an Optimized LSTM Model [0.0]
This paper presents a systematic approach towards building two types of portfolios, optimum risk and eigen, for four critical economic sectors of India.
The prices of the stocks are extracted from the web from Jan 1, 2016, to Dec 31, 2020.
An LSTM model is also designed for predicting future stock prices (a generic sketch of this prediction step appears after this list).
arXiv Detail & Related papers (2022-03-02T14:15:14Z) - Portfolio Optimization on NIFTY Thematic Sector Stocks Using an LSTM Model [0.0]
This paper presents an algorithmic approach for designing optimum risk and eigen portfolios for five thematic sectors of the NSE of India.
The prices of the stocks are extracted from the web from Jan 1, 2016, to Dec 31, 2020.
An LSTM model is designed for predicting future stock prices.
Seven months after the portfolios were formed, on Aug 3, 2021, the actual returns of the portfolios are compared with the LSTM-predicted returns.
arXiv Detail & Related papers (2022-02-06T07:41:20Z) - Stock Portfolio Optimization Using a Deep Learning LSTM Model [1.1470070927586016]
This work analyzes the time series of historical prices of the top five stocks from nine different sectors of the Indian stock market from January 1, 2016, to December 31, 2020.
Optimum portfolios are built for each of these sectors.
The predicted and the actual returns of each portfolio are found to be in close agreement, indicating the high precision of the LSTM model.
arXiv Detail & Related papers (2021-11-08T18:41:49Z) - Deep Stock Predictions [58.720142291102135]
We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find that the LSTM model with the customized loss function improves the performance of the trading bot over a regression baseline such as ARIMA.
arXiv Detail & Related papers (2020-06-08T23:37:47Z) - Deep Learning for Portfolio Optimization [5.833272638548154]
Instead of selecting individual assets, we trade Exchange-Traded Funds (ETFs) of market indices to form a portfolio.
We compare our method with a wide range of algorithms, and the results show that our model obtains the best performance over the testing period.
arXiv Detail & Related papers (2020-05-27T21:28:43Z)
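Several of the related papers above follow the same pattern: train an LSTM on historical prices, predict next-period returns, and then build optimum-risk or eigen portfolios from those predictions. The sketch below illustrates only the prediction step in generic PyTorch; it is not the code of any listed paper, and the synthetic return series, window length, and hyperparameters are placeholders.

```python
import numpy as np
import torch
import torch.nn as nn

class ReturnLSTM(nn.Module):
    """Predict the next-period return of one stock from a window of past returns."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):                  # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])    # regress from the last hidden state

# Hypothetical data: 500 past daily returns for one stock, 30-day input windows
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=500).astype(np.float32)
window = 30
X = np.stack([returns[i:i + window] for i in range(len(returns) - window)])[..., None]
y = returns[window:][:, None]

model = ReturnLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                     # short demo training loop
    opt.zero_grad()
    pred = model(torch.from_numpy(X))
    loss = loss_fn(pred, torch.from_numpy(y))
    loss.backward()
    opt.step()

# Forecast for the most recent window; predicted returns like this would feed
# a mean-variance or eigen-portfolio construction step in the papers above.
with torch.no_grad():
    next_ret = model(torch.from_numpy(X[-1:]))
print(float(next_ret))
```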