Meta-Learning Reinforcement Learning for Crypto-Return Prediction
- URL: http://arxiv.org/abs/2509.09751v1
- Date: Thu, 11 Sep 2025 14:20:45 GMT
- Title: Meta-Learning Reinforcement Learning for Crypto-Return Prediction
- Authors: Junqiao Wang, Zhaoyang Guan, Guanyu Liu, Tianze Xia, Xianzhi Li, Shuo Yin, Xinyuan Song, Chuhan Cheng, Tianyu Shi, Alex Lee,
- Abstract summary: We present Meta-RL-Crypto, a unified transformer-based architecture that combines meta-learning and reinforcement learning. The agent iteratively alternates between three roles (actor, judge, and meta-judge) in a closed-loop architecture. Experiments across diverse market regimes demonstrate that Meta-RL-Crypto performs well on real-market technical indicators.
- Score: 16.344249366257003
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting cryptocurrency returns is notoriously difficult: price movements are driven by a fast-shifting blend of on-chain activity, news flow, and social sentiment, while labeled training data are scarce and expensive. In this paper, we present Meta-RL-Crypto, a unified transformer-based architecture that combines meta-learning and reinforcement learning (RL) to create a fully self-improving trading agent. Starting from a vanilla instruction-tuned LLM, the agent iteratively alternates between three roles (actor, judge, and meta-judge) in a closed-loop architecture. This learning process requires no additional human supervision and can leverage multimodal market inputs and internal preference feedback. The agent continuously refines both the trading policy and the evaluation criteria. Experiments across diverse market regimes demonstrate that Meta-RL-Crypto performs well on real-market technical indicators and outperforms other LLM-based baselines.
Related papers
- CryptoBench: A Dynamic Benchmark for Expert-Level Evaluation of LLM Agents in Cryptocurrency [60.83660377169452]
This paper introduces CryptoBench, the first expert-curated, dynamic benchmark designed to rigorously evaluate the real-world capabilities of Large Language Model (LLM) agents. Unlike general-purpose agent benchmarks for search and prediction, professional crypto analysis presents specific challenges.
arXiv Detail & Related papers (2025-11-29T09:52:34Z) - Trade in Minutes! Rationality-Driven Agentic System for Quantitative Financial Trading [57.28635022507172]
TiMi is a rationality-driven multi-agent system that architecturally decouples strategy development from minute-level deployment. We propose a two-tier analytical paradigm from macro patterns to micro customization, layered programming design for trading bot implementation, and closed-loop optimization driven by mathematical reflection.
arXiv Detail & Related papers (2025-10-06T13:08:55Z) - RAGEN: Understanding Self-Evolution in LLM Agents via Multi-Turn Reinforcement Learning [125.96848846966087]
Training large language models (LLMs) as interactive agents presents unique challenges. While reinforcement learning has enabled progress in static tasks, multi-turn agent RL training remains underexplored. We propose StarPO, a general framework for trajectory-level agent RL, and introduce RAGEN, a modular system for training and evaluating LLM agents.
arXiv Detail & Related papers (2025-04-24T17:57:08Z) - Agent Trading Arena: A Study on Numerical Understanding in LLM-Based Agents [69.58565132975504]
Large language models (LLMs) have demonstrated remarkable capabilities in natural language tasks. We present the Agent Trading Arena, a virtual zero-sum stock market in which LLM-based agents engage in competitive multi-agent trading.
arXiv Detail & Related papers (2025-02-25T08:41:01Z) - Reinforcement Learning Pair Trading: A Dynamic Scaling approach [3.4698840925433774]
Trading cryptocurrency is difficult due to the inherent volatility of the crypto market. This study investigates whether Reinforcement Learning can enhance decision-making in cryptocurrency algorithmic trading.
arXiv Detail & Related papers (2024-07-23T00:16:27Z) - When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z) - A Framework for Empowering Reinforcement Learning Agents with Causal Analysis: Enhancing Automated Cryptocurrency Trading [1.4356611205757077]
This research focuses on developing a reinforcement learning (RL) framework to tackle the complexities of trading five prominent cryptocurrencies: Coin, Litecoin, Ripple, and Tether.
We present the CausalReinforceNet(CRN) framework, which integrates both Bayesian and dynamic Bayesian network techniques to empower the RL agent in trade decision-making.
We develop two agents using the framework based on distinct RL algorithms to analyse performance compared to the Buy-and-Hold benchmark strategy and a baseline RL model.
arXiv Detail & Related papers (2023-10-14T01:08:52Z) - Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
arXiv Detail & Related papers (2021-12-08T14:55:21Z) - Bitcoin Transaction Strategy Construction Based on Deep Reinforcement Learning [8.431365407963629]
This study proposes a framework for automatic high-frequency bitcoin transactions based on a deep reinforcement learning algorithm, proximal policy optimization (PPO).
The proposed framework can earn excess returns through both volatile and surging periods, which opens the door to research on building single-cryptocurrency trading strategies based on deep learning.
arXiv Detail & Related papers (2021-09-30T01:24:03Z) - Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review [0.0]
Deep Reinforcement Learning agents have proved to be a force to be reckoned with in many games such as Chess and Go.
This paper reviews the progress made so far with deep reinforcement learning in the subdomain of AI in finance.
We conclude that DRL in stock trading has shown huge applicability potential, rivalling professional traders under strong assumptions.
arXiv Detail & Related papers (2021-05-31T22:26:43Z) - FinRL: A Deep Reinforcement Learning Library for Automated Stock Trading in Quantitative Finance [20.43261517036651]
We introduce a DRL library, FinRL, that helps beginners gain exposure to quantitative finance.
FinRL simulates trading environments across various stock markets, including NASDAQ-100, DJIA, S&P 500, HSI, SSE 50, and CSI 300.
It incorporates important trading constraints such as transaction cost, market liquidity, and the investor's degree of risk aversion.
arXiv Detail & Related papers (2020-11-19T01:35:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.