Parallel and Multi-Stage Knowledge Graph Retrieval for Behaviorally Aligned Financial Asset Recommendations
- URL: http://arxiv.org/abs/2511.11583v1
- Date: Wed, 08 Oct 2025 20:42:53 GMT
- Title: Parallel and Multi-Stage Knowledge Graph Retrieval for Behaviorally Aligned Financial Asset Recommendations
- Authors: Fernando Spadea, Oshani Seneviratne
- Abstract summary: This paper introduces RAG-FLARKO, a retrieval-augmented extension of FLARKO. It overcomes scalability and relevance challenges using multi-stage and parallel KG retrieval processes. Empirical evaluation on a real-world financial transaction dataset demonstrates that RAG-FLARKO significantly enhances recommendation quality.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) show promise for personalized financial recommendations but are hampered by context limits, hallucinations, and a lack of behavioral grounding. Our prior work, FLARKO, embedded structured knowledge graphs (KGs) in LLM prompts to align advice with user behavior and market data. This paper introduces RAG-FLARKO, a retrieval-augmented extension of FLARKO that overcomes scalability and relevance challenges using multi-stage and parallel KG retrieval processes. Our method first retrieves behaviorally relevant entities from a user's transaction KG and then uses this context to filter temporally consistent signals from a market KG, constructing a compact, grounded subgraph for the LLM. This pipeline reduces context overhead and sharpens the model's focus on relevant information. Empirical evaluation on a real-world financial transaction dataset demonstrates that RAG-FLARKO significantly enhances recommendation quality. Notably, our framework enables smaller, more efficient models to achieve high performance in both profitability and behavioral alignment, presenting a viable path for deploying grounded financial AI in resource-constrained environments.
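The two-stage retrieval described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy KG tuples, function names, and the 30-day temporal-consistency window are all assumptions.

```python
# Hypothetical sketch of the two-stage KG retrieval: (1) pull the assets a
# user actually interacted with from the transaction KG, then (2) keep only
# market-KG signals about those assets that are temporally consistent with
# the user's activity. All data and thresholds here are illustrative.
from datetime import date

# Toy user transaction KG: (user, relation, asset, transaction date)
user_kg = [
    ("u1", "bought", "AAPL", date(2024, 3, 1)),
    ("u1", "bought", "MSFT", date(2024, 3, 5)),
]

# Toy market KG: (asset, signal type, signal value, as-of date)
market_kg = [
    ("AAPL", "trend", "up", date(2024, 3, 2)),
    ("AAPL", "trend", "down", date(2023, 1, 1)),
    ("MSFT", "trend", "flat", date(2024, 3, 6)),
    ("NVDA", "trend", "up", date(2024, 3, 2)),
]

def retrieve_user_entities(kg, user):
    """Stage 1: behaviorally relevant assets and when the user touched them."""
    return {asset: when for (u, _, asset, when) in kg if u == user}

def filter_market_signals(kg, user_assets, window_days=30):
    """Stage 2: market facts about those assets, within `window_days`
    of the user's own transaction (temporal consistency filter)."""
    subgraph = []
    for asset, _, value, when in kg:
        txn = user_assets.get(asset)
        if txn is not None and abs((when - txn).days) <= window_days:
            subgraph.append((asset, value, when.isoformat()))
    return subgraph

entities = retrieve_user_entities(user_kg, "u1")
subgraph = filter_market_signals(market_kg, entities)
# `subgraph` is the compact, grounded context that would be serialized
# into the LLM prompt in place of the full KGs.
```

Because both stages run over independent slices of the KGs, the per-asset stage-2 filters can also be executed in parallel, which is what makes the approach scale to large graphs.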
Related papers
- Enhancing Business Analytics through Hybrid Summarization of Financial Reports
Financial reports and earnings communications contain large volumes of structured and semi-structured information. We present a hybrid summarization framework that combines extractive and abstractive techniques to produce concise and factually reliable summaries. These findings support the development of practical summarization systems for distilling lengthy financial texts into usable business insights.
arXiv Detail & Related papers (2025-12-28T16:25:12Z) - Enhancing Foundation Models in Transaction Understanding with LLM-based Sentence Embeddings
Large Language Models (LLMs) can address this limitation through superior semantic understanding. We introduce a hybrid framework that uses LLM-generated embeddings as semantic initializations for lightweight transaction models. Our approach employs multi-source data fusion to enrich merchant categorical fields and a one-word constraint principle for consistent embedding generation.
arXiv Detail & Related papers (2025-12-01T23:30:17Z) - Metadata-Driven Retrieval-Augmented Generation for Financial Question Answering
We introduce a sophisticated indexing pipeline to create contextually rich document chunks. We benchmark a spectrum of enhancements, including pre-retrieval filtering, post-retrieval reranking, and enriched embeddings. Our proposed optimal architecture combines LLM-driven pre-retrieval optimizations with these contextual embeddings to achieve superior performance.
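The pre-retrieval filtering and post-retrieval reranking steps this summary mentions can be sketched in miniature. The chunk schema, toy embeddings, and lexical reranker below are assumptions standing in for the paper's actual components (real systems would use a vector index and a cross-encoder reranker).

```python
# Illustrative sketch (not the paper's code): metadata filtering narrows the
# candidate pool before dense retrieval, then a reranker reorders the hits.
chunks = [
    {"text": "Q3 revenue rose 12%", "meta": {"year": 2024, "type": "10-Q"}, "emb": [0.9, 0.1]},
    {"text": "Board elected new chair", "meta": {"year": 2024, "type": "8-K"}, "emb": [0.1, 0.9]},
    {"text": "Q3 revenue fell 4%", "meta": {"year": 2022, "type": "10-Q"}, "emb": [0.8, 0.2]},
]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(query_emb, meta_filter, k=2):
    # Pre-retrieval: drop chunks whose metadata cannot match the query.
    pool = [c for c in chunks
            if all(c["meta"].get(f) == v for f, v in meta_filter.items())]
    # Dense retrieval over the filtered pool (toy dot-product similarity).
    pool.sort(key=lambda c: dot(query_emb, c["emb"]), reverse=True)
    return pool[:k]

def rerank(query_terms, hits):
    # Post-retrieval: cheap lexical rerank as a stand-in for a cross-encoder.
    return sorted(hits, key=lambda c: sum(t in c["text"] for t in query_terms),
                  reverse=True)

hits = retrieve([1.0, 0.0], {"year": 2024, "type": "10-Q"})
best = rerank(["revenue", "Q3"], hits)
```

Filtering before retrieval shrinks the search space and keeps stale or off-type documents out of the prompt entirely, which is why this ordering of the two stages matters.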
arXiv Detail & Related papers (2025-10-28T13:16:36Z) - Can Knowledge-Graph-based Retrieval Augmented Generation Really Retrieve What You Need?
GraphFlow is a framework that efficiently retrieves accurate and diverse knowledge required for real-world queries from text-rich KGs. It outperforms strong KG-RAG baselines, including GPT-4o, by 10% on average in hit rate and recall. It also shows strong generalization to unseen KGs, demonstrating its effectiveness and robustness.
arXiv Detail & Related papers (2025-10-18T17:06:49Z) - Aligning Language Models with Investor and Market Behavior for Financial Recommendations
We present FLARKO, a novel framework that integrates Large Language Models (LLMs), Knowledge Graphs (KGs), and Kahneman-Tversky Optimization (KTO). FLARKO encodes users' transaction histories and asset trends as structured KGs, providing interpretable and controllable context for the LLM. Evaluated on the FAR-Trans dataset, FLARKO consistently outperforms state-of-the-art recommendation baselines on behavioral alignment and joint profitability.
arXiv Detail & Related papers (2025-10-14T03:24:20Z) - GRIL: Knowledge Graph Retrieval-Integrated Learning with Large Language Models
We propose a novel graph retriever trained end-to-end with Large Language Models (LLMs). Within the extracted subgraph, structural knowledge and semantic features are encoded via soft tokens and the verbalized graph, respectively, and are infused into the LLM together. Our approach consistently achieves state-of-the-art performance, validating the strength of joint graph-LLM optimization for complex reasoning tasks.
arXiv Detail & Related papers (2025-09-20T02:38:00Z) - Tuning-Free LLM Can Build A Strong Recommender Under Sparse Connectivity And Knowledge Gap Via Extracting Intent
We present IKGR, a novel framework that constructs an intent-centric knowledge graph. IKGR canonically represents what a user seeks and what an item satisfies as first-class entities. Experiments on public and enterprise datasets demonstrate that IKGR consistently outperforms strong baselines.
arXiv Detail & Related papers (2025-05-16T06:07:19Z) - Training Large Recommendation Models via Graph-Language Token Alignment
We propose a novel framework to train large recommendation models via Graph-Language Token Alignment (GLTA). By aligning item and user nodes from the interaction graph with pretrained LLM tokens, GLTA effectively leverages the reasoning abilities of LLMs. Furthermore, we introduce Graph-Language Logits Matching (GLLM) to optimize token alignment for end-to-end item prediction.
arXiv Detail & Related papers (2025-02-26T02:19:10Z) - FRAG: A Flexible Modular Framework for Retrieval-Augmented Generation based on Knowledge Graphs
We propose FRAG, a novel flexible modular KG-RAG framework that synergizes the advantages of both approaches. By using the query text instead of the Knowledge Graph, FRAG improves retrieval quality while maintaining flexibility.
arXiv Detail & Related papers (2025-01-17T05:19:14Z) - LLM is Knowledge Graph Reasoner: LLM's Intuition-aware Knowledge Graph Reasoning for Cold-start Sequential Recommendation
Large Language Models (LLMs) can be considered databases with a wealth of knowledge learned from web data. We propose an LLM's Intuition-aware Knowledge graph Reasoning model (LIKR). Our model outperforms state-of-the-art recommendation methods in cold-start sequential recommendation scenarios.
arXiv Detail & Related papers (2024-12-17T01:52:15Z) - Bridging LLMs and KGs without Fine-Tuning: Intermediate Probing Meets Subgraph-Aware Entity Descriptions
Large Language Models (LLMs) encapsulate extensive world knowledge and exhibit powerful context modeling capabilities. We propose a novel framework that synergizes the strengths of LLMs with robust knowledge representation to enable effective and efficient KGC. We achieve a 47% relative improvement over previous methods based on non-fine-tuned LLMs and, to our knowledge, are the first to achieve classification performance comparable to fine-tuned LLMs.
arXiv Detail & Related papers (2024-08-13T10:15:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.