Learn to Rank Risky Investors: A Case Study of Predicting Retail Traders' Behaviour and Profitability
- URL: http://arxiv.org/abs/2509.16616v1
- Date: Sat, 20 Sep 2025 10:41:13 GMT
- Title: Learn to Rank Risky Investors: A Case Study of Predicting Retail Traders' Behaviour and Profitability
- Authors: Weixian Waylon Li, Tiejun Ma
- Abstract summary: We propose a profit-aware risk ranker (PA-RiskRanker) that reframes the problem of identifying risky traders as a ranking task. Our approach features a Profit-Aware binary cross entropy (PA-BCE) loss function and a transformer-based ranker enhanced with a self-cross-trader attention pipeline. Our research critically examines the limitations of existing deep learning-based LETOR algorithms in trading risk management.
- Score: 3.731289189298451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Identifying risky traders with high profits in financial markets is crucial for market makers, such as trading exchanges, to ensure effective risk management through real-time decisions on regulation compliance and hedging. However, capturing the complex and dynamic behaviours of individual traders poses significant challenges. Traditional classification and anomaly detection methods often establish a fixed risk boundary, failing to account for this complexity and dynamism. To tackle this issue, we propose a profit-aware risk ranker (PA-RiskRanker) that reframes the problem of identifying risky traders as a ranking task using Learning-to-Rank (LETOR) algorithms. Our approach features a Profit-Aware binary cross entropy (PA-BCE) loss function and a transformer-based ranker enhanced with a self-cross-trader attention pipeline. These components effectively integrate profit and loss (P&L) considerations into the training process while capturing intra- and inter-trader relationships. Our research critically examines the limitations of existing deep learning-based LETOR algorithms in trading risk management, which often overlook the importance of P&L in financial scenarios. By prioritising P&L, our method improves risky trader identification, achieving an 8.4% increase in F1 score compared to state-of-the-art (SOTA) ranking models like Rankformer. Additionally, it demonstrates a 10%-17% increase in average profit compared to all benchmark models.
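The abstract describes a Profit-Aware binary cross entropy (PA-BCE) loss that folds P&L into training. The paper does not give the formula here, so the sketch below is only a hypothetical illustration of the idea: standard BCE reweighted by each trader's absolute P&L, so that misranking highly profitable risky traders is penalised more. The weighting scheme (`1 + |P&L| / max|P&L|`) is an assumption, not the paper's definition.

```python
import numpy as np

def pa_bce_loss(y_true, y_pred, pnl, eps=1e-12):
    """Hypothetical profit-aware BCE sketch (NOT the paper's exact loss):
    binary cross entropy reweighted by each trader's absolute P&L, so
    errors on traders with large profits or losses cost more."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    bce = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    # Assumed weighting: scale |P&L| into [1, 2] so zero-P&L traders
    # fall back to plain BCE.
    weights = 1.0 + np.abs(pnl) / (np.abs(pnl).max() + eps)
    return float(np.mean(weights * bce))
```

With zero P&L for every trader the weights collapse to 1 and the loss reduces to ordinary BCE; any nonzero P&L strictly increases the penalty on the corresponding traders.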
Related papers
- FineFT: Efficient and Risk-Aware Ensemble Reinforcement Learning for Futures Trading [39.845446417892525]
The Efficient and Risk-Aware Ensemble Reinforcement Learning for Futures Trading (FineFT) is a novel ensemble framework with stable training and proper risk management. We show FineFT outperforms 12 SOTA baselines in 6 financial metrics, reducing risk by more than 40% while achieving superior profitability compared to the runner-up.
arXiv Detail & Related papers (2025-12-29T11:56:33Z) - Robust Reinforcement Learning in Finance: Modeling Market Impact with Elliptic Uncertainty Sets [57.179679246370114]
In financial applications, reinforcement learning (RL) agents are commonly trained on historical data, where their actions do not influence prices. During deployment, these agents trade in live markets where their own transactions can shift asset prices, a phenomenon known as market impact. Traditional robust RL approaches address this model misspecification by optimizing the worst-case performance over a set of uncertainties. We develop a novel class of elliptic uncertainty sets, enabling efficient and tractable robust policy evaluation.
arXiv Detail & Related papers (2025-10-22T18:22:25Z) - Trade in Minutes! Rationality-Driven Agentic System for Quantitative Financial Trading [57.28635022507172]
TiMi is a rationality-driven multi-agent system that architecturally decouples strategy development from minute-level deployment. We propose a two-tier analytical paradigm from macro patterns to micro customization, layered programming design for trading bot implementation, and closed-loop optimization driven by mathematical reflection.
arXiv Detail & Related papers (2025-10-06T13:08:55Z) - To Trade or Not to Trade: An Agentic Approach to Estimating Market Risk Improves Trading Decisions [0.0]
Large language models (LLMs) are increasingly deployed in agentic frameworks. We develop an agentic system that uses LLMs to iteratively discover differential equations for financial time series. We find that model-informed trading strategies outperform standard LLM-based agents.
arXiv Detail & Related papers (2025-07-11T13:29:32Z) - Dynamic Reinsurance Treaty Bidding via Multi-Agent Reinforcement Learning [0.0]
This paper develops a novel multi-agent reinforcement learning (MARL) framework for reinsurance treaty bidding. MARL agents achieve up to 15% higher underwriting profit, 20% lower tail risk, and over 25% improvement in Sharpe ratios. These findings suggest that MARL offers a viable path toward more transparent, adaptive, and risk-sensitive reinsurance markets.
arXiv Detail & Related papers (2025-06-16T05:43:22Z) - Risk-averse policies for natural gas futures trading using distributional reinforcement learning [0.0]
This paper studies the effectiveness of three distributional RL algorithms for natural gas futures trading. To the best of our knowledge, these algorithms have never been applied in a trading context. We show that training C51 and IQN to maximize CVaR produces risk-sensitive policies with adjustable risk aversion.
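The summary above optimizes CVaR (expected shortfall): the mean of the worst (1 - alpha) fraction of outcomes. To make that objective concrete, here is a minimal empirical estimator; the quantile-based tail definition is a common textbook form and may differ from the estimator the paper actually uses.

```python
import numpy as np

def empirical_cvar(returns, alpha=0.95):
    """Empirical CVaR (expected shortfall) of a return sample: the mean
    loss in the worst (1 - alpha) tail. Illustrative sketch only; the
    paper's exact estimator may differ."""
    losses = -np.asarray(returns, dtype=float)   # losses are negated returns
    var = np.quantile(losses, alpha)             # Value-at-Risk threshold
    tail = losses[losses >= var]                 # worst-case tail outcomes
    return float(tail.mean())
```

Maximizing CVaR of returns (equivalently, minimizing CVaR of losses) penalizes policies whose occasional bad trades are severe, which is what gives the C51/IQN policies their adjustable risk aversion via alpha.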
arXiv Detail & Related papers (2025-01-08T11:11:25Z) - When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z) - Harnessing Deep Q-Learning for Enhanced Statistical Arbitrage in High-Frequency Trading: A Comprehensive Exploration [0.0]
Reinforcement Learning (RL) is a branch of machine learning where agents learn by interacting with their environment.
This paper dives deep into the integration of RL in statistical arbitrage strategies tailored for High-Frequency Trading (HFT) scenarios.
Through extensive simulations and backtests, our research reveals that RL not only enhances the adaptability of trading strategies but also shows promise in improving profitability metrics and risk-adjusted returns.
arXiv Detail & Related papers (2023-09-13T06:15:40Z) - Safe Deployment for Counterfactual Learning to Rank with Exposure-Based Risk Minimization [63.93275508300137]
We introduce a novel risk-aware Counterfactual Learning To Rank method with theoretical guarantees for safe deployment.
Our experimental results demonstrate the efficacy of our proposed method, which is effective at avoiding initial periods of bad performance when little data is available.
arXiv Detail & Related papers (2023-04-26T15:54:23Z) - Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
arXiv Detail & Related papers (2022-10-22T14:47:11Z) - Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z) - Robust Risk-Sensitive Reinforcement Learning Agents for Trading Markets [23.224860573461818]
Trading markets represent a real-world financial application to deploy reinforcement learning agents.
Our work is the first one extending empirical game theory analysis for multi-agent learning by considering risk-sensitive payoffs.
arXiv Detail & Related papers (2021-07-16T19:15:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.