Aligning Language Models with Investor and Market Behavior for Financial Recommendations
- URL: http://arxiv.org/abs/2510.15993v1
- Date: Tue, 14 Oct 2025 03:24:20 GMT
- Title: Aligning Language Models with Investor and Market Behavior for Financial Recommendations
- Authors: Fernando Spadea, Oshani Seneviratne
- Abstract summary: We present FLARKO, a novel framework that integrates Large Language Models (LLMs), Knowledge Graphs (KGs), and Kahneman-Tversky Optimization (KTO). FLARKO encodes users' transaction histories and asset trends as structured KGs, providing interpretable and controllable context for the LLM. Evaluated on the FAR-Trans dataset, FLARKO consistently outperforms state-of-the-art recommendation baselines on behavioral alignment and joint profitability.
- Score: 46.90931293070464
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Financial recommendation systems often fail to account for key behavioral and regulatory factors, leading to advice that is misaligned with user preferences, difficult to interpret, or unlikely to be followed. We present FLARKO (Financial Language-model for Asset Recommendation with Knowledge-graph Optimization), a novel framework that integrates Large Language Models (LLMs), Knowledge Graphs (KGs), and Kahneman-Tversky Optimization (KTO) to generate asset recommendations that are both profitable and behaviorally aligned. FLARKO encodes users' transaction histories and asset trends as structured KGs, providing interpretable and controllable context for the LLM. To demonstrate the adaptability of our approach, we develop and evaluate both a centralized architecture (CenFLARKO) and a federated variant (FedFLARKO). To our knowledge, this is the first demonstration of KTO-based fine-tuning of LLMs for financial asset recommendation, and the first use of structured KGs to ground LLM reasoning over behavioral financial data in a federated learning (FL) setting. Evaluated on the FAR-Trans dataset, FLARKO consistently outperforms state-of-the-art recommendation baselines on behavioral alignment and joint profitability, while remaining interpretable and resource-efficient.
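Two pieces of the pipeline above are concrete enough to sketch. First, the KG context: the abstract says transaction histories and asset trends are encoded as structured KGs and handed to the LLM, but it does not publish a schema, so the relation names (`performed`, `involvesAsset`, `priceTrend_30d`) and record fields below are illustrative assumptions rather than FLARKO's actual vocabulary. A minimal Python sketch:

```python
from dataclasses import dataclass

# Hypothetical transaction record; the real FAR-Trans fields may differ.
@dataclass
class Transaction:
    user: str
    action: str      # "buy" or "sell"
    asset: str
    quantity: float
    date: str

def transactions_to_kg(txns):
    """Encode transactions as (subject, predicate, object) triples -- an assumed vocabulary."""
    triples = []
    for i, t in enumerate(txns):
        tx = f"tx_{i}"
        triples += [
            (t.user, "performed", tx),
            (tx, "actionType", t.action),
            (tx, "involvesAsset", t.asset),
            (tx, "quantity", str(t.quantity)),
            (tx, "onDate", t.date),
        ]
    return triples

def serialize_for_llm(user_triples, trend_triples):
    """Flatten both KGs into a plain-text context block for the LLM prompt."""
    lines = [f"({s}, {p}, {o})" for s, p, o in user_triples + trend_triples]
    return "User and market knowledge graph:\n" + "\n".join(lines)

txns = [Transaction("user_42", "buy", "ACME", 10.0, "2024-03-01")]
trends = [("ACME", "priceTrend_30d", "upward")]
print(serialize_for_llm(transactions_to_kg(txns), trends))
```

Second, the KTO objective. KTO (Ethayarajh et al., 2024) needs only binary desirable/undesirable labels rather than paired preference data, which fits transaction logs where we observe only whether users acted on advice. The sketch below follows the published loss; the hyperparameters and the simplified reference point `z0` are placeholders, not values from this paper:

```python
import torch

def kto_loss(policy_logps, ref_logps, desirable, beta=0.1,
             lambda_d=1.0, lambda_u=1.0):
    """Batch KTO loss following Ethayarajh et al. (2024).

    policy_logps / ref_logps: log p(completion | prompt) under the policy
    and frozen reference model, one scalar per example.
    desirable: bool tensor, True where the completion is labeled desirable.
    """
    # Implied reward: log-ratio of policy to reference likelihood.
    rewards = policy_logps - ref_logps
    # Reference point z0: a crude stand-in for the paper's batch KL estimate
    # (which uses mismatched prompt/completion pairs); detached and clamped
    # at zero so it acts as a fixed baseline within the batch.
    z0 = rewards.mean().detach().clamp(min=0)
    # Prospect-theoretic value: gains and losses are weighted asymmetrically.
    value = torch.where(
        desirable,
        lambda_d * torch.sigmoid(beta * (rewards - z0)),
        lambda_u * torch.sigmoid(beta * (z0 - rewards)),
    )
    lam = torch.where(desirable, torch.full_like(value, lambda_d),
                      torch.full_like(value, lambda_u))
    return (lam - value).mean()
```

In practice one would fine-tune with an off-the-shelf implementation such as the KTOTrainer in Hugging Face TRL rather than hand-rolling the loss; the sketch is only meant to make the desirable/undesirable asymmetry concrete.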
Related papers
- Parallel and Multi-Stage Knowledge Graph Retrieval for Behaviorally Aligned Financial Asset Recommendations [46.90931293070464]
This paper introduces RAG-FLARKO, a retrieval-augmented extension to FLARKO. It overcomes scalability and relevance challenges using multi-stage and parallel KG retrieval processes (a minimal sketch of such a pipeline appears after this list). Empirical evaluation on a real-world financial transaction dataset demonstrates that RAG-FLARKO significantly enhances recommendation quality.
arXiv Detail & Related papers (2025-10-08T20:42:53Z)
- Revealing Potential Biases in LLM-Based Recommender Systems in the Cold Start Setting [41.964130989754516]
Large Language Models (LLMs) are increasingly used for recommendation tasks due to their general-purpose capabilities. We introduce a benchmark specifically designed to evaluate fairness in zero-context recommendation. Our modular pipeline supports recommendation domains and sensitive attributes, enabling systematic and flexible audits of any open-source LLM.
arXiv Detail & Related papers (2025-08-28T03:57:13Z)
- LLM2Rec: Large Language Models Are Powerful Embedding Models for Sequential Recommendation [49.78419076215196]
Sequential recommendation aims to predict users' future interactions by modeling collaborative filtering (CF) signals from historical behaviors of similar users or items. Traditional sequential recommenders rely on ID-based embeddings, which capture CF signals through high-order co-occurrence patterns. Recent advances in large language models (LLMs) have motivated text-based recommendation approaches that derive item representations from textual descriptions. We argue that an ideal embedding model should seamlessly integrate CF signals with rich semantic representations to improve both in-domain and out-of-domain recommendation performance.
arXiv Detail & Related papers (2025-06-16T13:27:06Z)
- Training Large Recommendation Models via Graph-Language Token Alignment [53.3142545812349]
We propose a novel framework to train Large Recommendation models via Graph-Language Token Alignment (GLTA). By aligning item and user nodes from the interaction graph with pretrained LLM tokens, GLTA effectively leverages the reasoning abilities of LLMs. Furthermore, we introduce Graph-Language Logits Matching (GLLM) to optimize token alignment for end-to-end item prediction.
arXiv Detail & Related papers (2025-02-26T02:19:10Z)
- FLAG-Trader: Fusion LLM-Agent with Gradient-based Reinforcement Learning for Financial Trading [28.57263158928989]
Large language models (LLMs) fine-tuned on multimodal financial data have demonstrated impressive reasoning capabilities. We propose FLAG-Trader, a unified architecture integrating linguistic processing (via LLMs) with gradient-driven reinforcement learning (RL) policy optimization.
arXiv Detail & Related papers (2025-02-17T04:45:53Z)
- Large Language Model-Enhanced Symbolic Reasoning for Knowledge Base Completion [28.724919973497943]
Large language models (LLMs) and rule-based reasoning offer a powerful solution for improving the flexibility and reliability of Knowledge Base Completion. We propose a novel framework consisting of a Subgraph Extractor, an LLM Proposer, and a Rule Reasoner. Our approach offers several key benefits: the utilization of LLMs to enhance the richness and diversity of the proposed rules, and the integration with rule-based reasoning to improve reliability.
arXiv Detail & Related papers (2025-01-02T13:14:28Z)
- LLM is Knowledge Graph Reasoner: LLM's Intuition-aware Knowledge Graph Reasoning for Cold-start Sequential Recommendation [47.34949656215159]
Large Language Models (LLMs) can be considered databases with a wealth of knowledge learned from web data. We propose an LLM's Intuition-aware Knowledge graph Reasoning model (LIKR). Our model outperforms state-of-the-art recommendation methods in cold-start sequential recommendation scenarios.
arXiv Detail & Related papers (2024-12-17T01:52:15Z)
- Reasoning over User Preferences: Knowledge Graph-Augmented LLMs for Explainable Conversational Recommendations [58.61021630938566]
Conversational Recommender Systems (CRSs) aim to provide personalized recommendations by capturing user preferences through interactive dialogues. Current CRSs often leverage knowledge graphs (KGs) or language models to extract and represent user preferences as latent vectors, which limits their explainability. We propose a plug-and-play framework that synergizes LLMs and KGs to reason over user preferences, enhancing the performance and explainability of existing CRSs.
arXiv Detail & Related papers (2024-11-16T11:47:21Z)
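As referenced in the RAG-FLARKO entry above, that paper's contribution is multi-stage, parallel KG retrieval. Its abstract gives the idea but not the interface, so the two-stage split below (a cheap recency filter, then per-asset subgraph fetches fanned out across threads) is a sketch under the assumed triple conventions from the earlier example, not the paper's actual pipeline.

```python
from concurrent.futures import ThreadPoolExecutor

def stage1_candidate_assets(kg, user, recent_n=50):
    """Stage 1: cheap filter -- assets from the user's most recent transactions.

    `kg` is assumed to be a list of (subject, predicate, object) triples,
    appended in time order, matching the serialization sketch shown earlier.
    """
    txs = [o for s, p, o in kg if s == user and p == "performed"][-recent_n:]
    return {o for s, p, o in kg if s in txs and p == "involvesAsset"}

def fetch_asset_subgraph(kg, asset):
    """Stage 2 worker: pull the asset's local neighborhood (1-hop triples)."""
    return [t for t in kg if t[0] == asset or t[2] == asset]

def retrieve_context(kg, user, max_workers=8):
    """Run stage-2 subgraph fetches in parallel, one task per candidate asset."""
    assets = stage1_candidate_assets(kg, user)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        subgraphs = pool.map(lambda a: fetch_asset_subgraph(kg, a), assets)
    # Concatenate per-asset subgraphs into one context KG for the LLM prompt.
    return [t for sg in subgraphs for t in sg]
```

Threads are a reasonable default here because the stage-2 work is lookup-bound; a production variant would replace the linear scans with an indexed triple store.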
This list is automatically generated from the titles and abstracts of the papers in this site.