Forecasting Future Language: Context Design for Mention Markets
- URL: http://arxiv.org/abs/2602.21229v1
- Date: Wed, 04 Feb 2026 12:43:31 GMT
- Title: Forecasting Future Language: Context Design for Mention Markets
- Authors: Sumin Kim, Jihoon Kwon, Yoon Kim, Nicole Kagan, Raffi Khatchadourian, Wonbin Ahn, Alejandro Lopez-Lira, Jaewon Lee, Yoontae Hwang, Oscar Levy, Yongjae Lee, Chanyeol Choi,
- Abstract summary: We study how input context should be designed to support accurate prediction in mention markets. We find three insights: (1) richer context consistently improves forecasting performance; (2) Market-Conditioned Prompting (MCP), which treats the market probability as a prior and updates it using textual evidence, yields better-calibrated forecasts; and (3) a mixture of the market probability and MCP (MixMCP) outperforms the market baseline.
- Score: 81.25011140991566
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Mention markets, a type of prediction market in which contracts resolve based on whether a specified keyword is mentioned during a future public event, require accurate probabilistic forecasts of keyword-mention outcomes. While recent work shows that large language models (LLMs) can generate forecasts competitive with human forecasters, it remains unclear how input context should be designed to support accurate prediction. In this paper, we study this question through experiments on earnings-call mention markets, which require forecasting whether a company will mention a specified keyword during its upcoming call. We run controlled comparisons varying (i) which contextual information is provided (news and/or prior earnings-call transcripts) and (ii) how the *market probability* (i.e., the prediction-market contract price) is used. We introduce Market-Conditioned Prompting (MCP), which explicitly treats the market-implied probability as a prior and instructs the LLM to update this prior using textual evidence, rather than re-predicting the base rate from scratch. In our experiments, we find three insights: (1) richer context consistently improves forecasting performance; (2) market-conditioned prompting (MCP), which treats the market probability as a prior and updates it using textual evidence, yields better-calibrated forecasts; and (3) a mixture of the market probability and MCP (MixMCP) outperforms the market baseline. By dampening the LLM's posterior update with the market prior, MixMCP yields more robust predictions than either the market or the LLM alone.
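The MCP and MixMCP ideas above can be sketched as a prior update in log-odds space followed by a convex mixture with the market price. This is a minimal illustration only: the mixing weight `lam`, the log-odds update, and all function names are assumptions, not the paper's published implementation.

```python
import math

def mcp_posterior(market_prob: float, log_odds_update: float) -> float:
    """Update the market-implied prior with evidence expressed in log-odds.

    MCP instructs an LLM to adjust the market probability using textual
    evidence; here that adjustment is modeled (illustratively) as an
    additive shift in log-odds space. Assumes 0 < market_prob < 1.
    """
    prior_logit = math.log(market_prob / (1.0 - market_prob))
    posterior_logit = prior_logit + log_odds_update
    return 1.0 / (1.0 + math.exp(-posterior_logit))

def mixmcp(market_prob: float, mcp_prob: float, lam: float = 0.5) -> float:
    """Convex mixture of the market probability and the MCP forecast.

    Dampens the LLM's posterior update by pulling it back toward the
    market prior: lam = 1.0 recovers the market baseline, lam = 0.0
    trusts the MCP forecast alone.
    """
    return lam * market_prob + (1.0 - lam) * mcp_prob

if __name__ == "__main__":
    p_market = 0.30                       # contract price read as a probability
    p_mcp = mcp_posterior(p_market, 1.2)  # textual evidence raises the odds
    p_mix = mixmcp(p_market, p_mcp, lam=0.5)
    print(round(p_mcp, 3), round(p_mix, 3))
```

The mixture form makes the abstract's robustness claim concrete: when the evidence-driven update is noisy, the market prior anchors the final forecast.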
Related papers
- Prediction Markets with Intermittent Contributions [2.7429630700600893]
We place ourselves in a more general framework, based on prediction markets. There, independent agents trade forecasts of uncertain future events in exchange for rewards. We introduce and analyse a prediction market that (i) accounts for the historical performance of the agents, (ii) adapts to time-varying conditions, and (iii) permits agents to enter and exit the market at will.
arXiv Detail & Related papers (2025-10-15T10:23:28Z) - Deriving Strategic Market Insights with Large Language Models: A Benchmark for Forward Counterfactual Generation [55.2788567621326]
We introduce a novel benchmark, FIN-FORCE (FINancial FORward Counterfactual Evaluation). By curating financial news headlines, FIN-FORCE supports LLM-based forward counterfactual generation. This paves the way for scalable and automated solutions for exploring and anticipating future market developments.
arXiv Detail & Related papers (2025-05-26T02:41:50Z) - Market-Derived Financial Sentiment Analysis: Context-Aware Language Models for Crypto Forecasting [0.15833270109954134]
We propose a market-derived labeling approach that assigns tweet labels based on ensuing short-term price trends. A domain-specific language model was fine-tuned on these labels, achieving up to an 11% improvement in short-term trend prediction accuracy. Our findings demonstrate that language models can serve as effective short-term market predictors.
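The market-derived labeling idea above, where a post's label comes from the price move that follows it, can be sketched as follows. The 0.5% return threshold and the three-way label scheme are illustrative assumptions, not the referenced paper's exact setup.

```python
# Label a post by the short-term return that follows it, rather than by
# human sentiment annotation. Threshold and horizon are assumptions.

def market_label(price_at_post: float, price_after_window: float,
                 threshold: float = 0.005) -> str:
    """Assign a label from the ensuing return over a fixed horizon."""
    ret = (price_after_window - price_at_post) / price_at_post
    if ret > threshold:
        return "bullish"
    if ret < -threshold:
        return "bearish"
    return "neutral"
```

A model fine-tuned on such labels learns what the market actually did after similar text, sidestepping subjective annotation.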
arXiv Detail & Related papers (2025-02-17T21:35:18Z) - Conformal Prediction for Electricity Price Forecasting in the Day-Ahead and Real-Time Balancing Market [0.0]
The integration of renewable energy into electricity markets poses significant challenges to price stability. This study explores the enhancement of probabilistic price prediction using Conformal Prediction (CP) techniques. We propose an ensemble approach that combines the efficiency of quantile regression models with the robust coverage properties of time-series-adapted CP techniques.
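The core conformal-prediction step can be sketched as split conformal regression: calibrate an interval half-width from held-out absolute residuals so the resulting interval covers roughly (1 - alpha) of future outcomes. This is a generic sketch, not the paper's quantile-regression ensemble.

```python
import math

def conformal_halfwidth(residuals: list, alpha: float = 0.1) -> float:
    """Finite-sample-corrected quantile of absolute calibration residuals.

    Returns the half-width q such that [forecast - q, forecast + q]
    covers ~(1 - alpha) of exchangeable future outcomes.
    """
    scores = sorted(abs(r) for r in residuals)
    n = len(scores)
    k = math.ceil((n + 1) * (1.0 - alpha))  # conformal rank with +1 correction
    k = min(k, n)                           # clamp when n is small
    return scores[k - 1]

def predict_interval(point_forecast: float, halfwidth: float):
    """Symmetric prediction interval around a point forecast."""
    return (point_forecast - halfwidth, point_forecast + halfwidth)
```

Note that standard split conformal assumes exchangeability; the referenced paper adapts CP to time series precisely because electricity prices violate that assumption.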
arXiv Detail & Related papers (2025-02-07T13:57:47Z) - Consistency Checks for Language Model Forecasters [54.62507816753479]
We measure the performance of forecasters in terms of the consistency of their predictions on different logically related questions. We build an automated evaluation system that generates a set of base questions, instantiates consistency checks from these questions, elicits predictions from the forecaster, and measures the consistency of those predictions.
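Two of the simplest consistency checks such a system can instantiate are negation coherence (P(A) + P(not A) should be near 1) and monotonicity under implication (if A implies B, then P(A) <= P(B)). The tolerance value below is an illustrative assumption.

```python
# Minimal logical-consistency checks for a probabilistic forecaster.
# Tolerance is an assumption; a real harness would run many such checks.

def negation_consistency(p_event: float, p_not_event: float,
                         tol: float = 0.05) -> bool:
    """True if P(A) + P(not A) is within `tol` of 1."""
    return abs((p_event + p_not_event) - 1.0) <= tol

def monotone_consistency(p_narrow: float, p_broad: float) -> bool:
    """If event A implies event B, coherence requires P(A) <= P(B)."""
    return p_narrow <= p_broad
```

A forecaster can score well on accuracy while failing these checks, which is why consistency is measured as a separate axis.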
arXiv Detail & Related papers (2024-12-24T16:51:35Z) - F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting with Proxy Data [65.6499834212641]
We formulate the demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm.
By considering domain similarities through task-specific metadata, our model improved generalization, where the excess risk decreases as the number of training tasks increases.
Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset.
arXiv Detail & Related papers (2024-06-23T21:28:50Z) - Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational-autoencoder (VAE) and diffusion probabilistic techniques to do seq2seq stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z) - Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [48.87381259980254]
We document the capability of large language models (LLMs) like ChatGPT to predict stock market reactions from news headlines without direct financial training. Using post-knowledge-cutoff headlines, GPT-4 captures initial market responses, achieving approximately 90% portfolio-day hit rates for the non-tradable initial reaction.
arXiv Detail & Related papers (2023-04-15T19:22:37Z) - Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
arXiv Detail & Related papers (2021-12-08T14:55:21Z) - Models, Markets, and the Forecasting of Elections [3.8138805042090325]
We find systematic differences in accuracy over time, with markets performing better several months before the election, and the model performing better as the election approached.
We propose a market design that incorporates model forecasts via a trading bot to generate synthetic predictions.
arXiv Detail & Related papers (2021-02-06T19:05:07Z)