Efficient and Effective Query Context-Aware Learning-to-Rank Model for Sequential Recommendation
- URL: http://arxiv.org/abs/2507.03789v2
- Date: Tue, 12 Aug 2025 15:38:44 GMT
- Title: Efficient and Effective Query Context-Aware Learning-to-Rank Model for Sequential Recommendation
- Authors: Andrii Dzhoha, Alisa Mironenko, Evgeny Labzin, Vladimir Vlasov, Maarten Versteegh, Marjan Celikik
- Abstract summary: This paper analyzes different strategies for incorporating query context into transformers trained with a causal language modeling procedure. We propose a new method that effectively fuses the item sequence with query context within the attention mechanism.
- Score: 0.02638878351659022
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern sequential recommender systems commonly use transformer-based models for next-item prediction. While these models demonstrate a strong balance between efficiency and quality, integrating interleaving features - such as the query context (e.g., browse category) under which next-item interactions occur - poses challenges. Effectively capturing query context is crucial for refining ranking relevance and enhancing user engagement, as it provides valuable signals about user intent within a session. Unlike item features, historical query context is typically not aligned with item sequences and may be unavailable at inference due to privacy constraints or feature store limitations - making its integration into transformers both challenging and error-prone. This paper analyzes different strategies for incorporating query context into transformers trained with a causal language modeling procedure as a case study. We propose a new method that effectively fuses the item sequence with query context within the attention mechanism. Through extensive offline and online experiments on a large-scale online platform and open datasets, we present evidence that our proposed method is an effective approach for integrating query context to improve model ranking quality in terms of relevance and diversity.
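The abstract describes fusing the item sequence with query context inside the attention mechanism. The paper's exact formulation is not given here, so the following is only a minimal sketch of one plausible fusion strategy: adding a projected per-step query-context vector into the attention queries of a single causal self-attention head. All names (`context_fused_attention`, the projection matrices) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def context_fused_attention(items, context, Wq, Wk, Wv, Wc):
    """Single-head causal self-attention over an item sequence,
    with a per-step query-context vector fused into the queries.

    items:   (T, d)   item embeddings
    context: (T, d_c) query-context features, aligned per step
    """
    T = items.shape[0]
    q = items @ Wq + context @ Wc   # fuse query context into attention queries
    k = items @ Wk
    v = items @ Wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # causal mask
    scores[mask] = -1e9
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
T, d, d_c = 5, 8, 4
out = context_fused_attention(
    rng.normal(size=(T, d)), rng.normal(size=(T, d_c)),
    rng.normal(size=(d, d)), rng.normal(size=(d, d)),
    rng.normal(size=(d, d)), rng.normal(size=(d_c, d)),
)
print(out.shape)  # (5, 8)
```

Because the fusion happens in the query projection, a missing context at inference (as the abstract notes can occur) can degrade gracefully by passing zeros for `context`.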
Related papers
- Towards Context-aware Reasoning-enhanced Generative Searching in E-commerce [61.03081096959132]
We propose a context-aware reasoning-enhanced generative search framework for better understanding the complicated context. Our approach achieves superior performance compared with strong baselines, validating its effectiveness for search-based recommendation.
arXiv Detail & Related papers (2025-10-19T16:46:11Z) - Influence Guided Context Selection for Effective Retrieval-Augmented Generation [23.188397777606095]
Retrieval-Augmented Generation (RAG) addresses large language model (LLM) hallucinations by grounding responses in external knowledge. Existing approaches attempt to improve performance through context selection based on predefined context quality assessment metrics. We reconceptualize context quality assessment as an inference-time data valuation problem and introduce the Contextual Influence Value (CI value). This novel metric quantifies context quality by measuring the performance degradation when removing each context from the list.
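The CI value described above is a leave-one-out measure: a context's quality is the performance drop when it is removed. A minimal sketch of that idea, assuming only a generic scalar `utility` function standing in for answer quality (the toy keyword-based utility below is purely illustrative, not the paper's metric):

```python
def contextual_influence(contexts, utility):
    """Leave-one-out influence score for each context:
    influence(i) = utility(all contexts) - utility(all contexts minus i)."""
    full = utility(contexts)
    return {i: full - utility(contexts[:i] + contexts[i + 1:])
            for i in range(len(contexts))}

# Toy utility: fraction of target keywords covered by the context list.
keywords = {"paris", "capital"}
def toy_utility(ctxs):
    hit = {w for c in ctxs for w in c.split() if w in keywords}
    return len(hit) / len(keywords)

scores = contextual_influence(
    ["paris is the capital", "bananas are yellow"], toy_utility)
print(scores)  # {0: 1.0, 1: 0.0}
```

A context with influence near zero contributes nothing to the response and can be dropped from the list.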
arXiv Detail & Related papers (2025-09-21T07:19:09Z) - Few-Shot Query Intent Detection via Relation-Aware Prompt Learning [14.048513219736543]
We propose a novel framework that integrates both textual and relational structure information for model pretraining. Building on this framework, we propose a novel mechanism, the query-adaptive attention network (QueryAdapt), which operates at the relation token level by generating intent-specific relation tokens.
arXiv Detail & Related papers (2025-09-06T07:41:47Z) - Test-Time Scaling Strategies for Generative Retrieval in Multimodal Conversational Recommendations [70.94563079082751]
E-commerce has exposed the limitations of traditional product retrieval systems in managing complex, multi-turn user interactions. We propose a novel framework that introduces test-time scaling into conversational multimodal product retrieval. Our approach builds on a generative retriever, further augmented with a test-time reranking mechanism that improves retrieval accuracy and better aligns results with evolving user intent throughout the dialogue.
arXiv Detail & Related papers (2025-08-25T15:38:56Z) - ConvMix: A Mixed-Criteria Data Augmentation Framework for Conversational Dense Retrieval [25.129468117978767]
We propose ConvMix, a mixed-criteria framework to augment conversational dense retrieval. We design a two-sided relevance judgment augmentation schema in a scalable manner via the aid of large language models. Experimental results on five widely used benchmarks show that the conversational dense retriever trained by our ConvMix framework outperforms previous baseline methods.
arXiv Detail & Related papers (2025-08-06T01:28:49Z) - AI Guided Accelerator For Search Experience [4.832123045961485]
We propose a novel framework that explicitly models transitional queries - intermediate reformulations occurring during the user's journey toward their final purchase intent. This approach allows us to model a user's shopping funnel, where mid-journey transitions reflect exploratory behavior and intent refinement. Our contributions include (i) the formal identification and modeling of transitional queries, (ii) the introduction of a structured query sequence mining pipeline for intent flow understanding, and (iii) the application of LLMs for scalable, intent-aware query expansion.
arXiv Detail & Related papers (2025-07-25T23:26:00Z) - The Devil is in the Spurious Correlations: Boosting Moment Retrieval with Dynamic Learning [49.40254251698784]
We propose a dynamic learning approach for moment retrieval, where two strategies are designed to mitigate the spurious correlation. First, we introduce a novel video synthesis approach to construct a dynamic context for the queried moment. Second, to alleviate the over-association with backgrounds, we enhance representations temporally by incorporating text-dynamics interaction.
arXiv Detail & Related papers (2025-01-13T13:13:06Z) - Pointwise Mutual Information as a Performance Gauge for Retrieval-Augmented Generation [78.28197013467157]
We show that the pointwise mutual information between a context and a question is an effective gauge for language model performance. We propose two methods that use the pointwise mutual information between a document and a question as a gauge for selecting and constructing prompts that lead to better performance.
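The PMI gauge above has a simple form: PMI(c; q) = log p(q | c) - log p(q), where a higher value suggests the context makes the question more likely under the language model. A minimal sketch of using it to select a context, with toy log-probabilities standing in for real language model scores (the document names and probabilities below are invented for illustration):

```python
import math

def pmi_gauge(logp_q_given_c, logp_q):
    """Pointwise mutual information between a context c and a question q:
    PMI(c; q) = log p(q | c) - log p(q)."""
    return logp_q_given_c - logp_q

def select_context(candidates, logp_q, logp_q_given):
    # Pick the candidate context with the highest PMI with the question.
    return max(candidates, key=lambda c: pmi_gauge(logp_q_given[c], logp_q))

# Toy log-probabilities (illustrative only, not from a real model).
logp_q = math.log(0.01)
logp_q_given = {"doc_a": math.log(0.05), "doc_b": math.log(0.002)}

best = select_context(["doc_a", "doc_b"], logp_q, logp_q_given)
print(best)  # doc_a
```

Since log p(q) is constant across candidates, ranking by PMI is equivalent to ranking by log p(q | c); the unconditional term matters when thresholding rather than ranking.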
arXiv Detail & Related papers (2024-11-12T13:14:09Z) - Improving Retrieval in Sponsored Search by Leveraging Query Context Signals [6.152499434499752]
We propose an approach to enhance query understanding by augmenting queries with rich contextual signals.
We use web search titles and snippets to ground queries in real-world information and utilize GPT-4 to generate query rewrites and explanations.
Our context-aware approach substantially outperforms context-free models.
arXiv Detail & Related papers (2024-07-19T14:28:53Z) - Query-oriented Data Augmentation for Session Search [71.84678750612754]
We propose query-oriented data augmentation to enrich search logs and empower the modeling.
We generate supplemental training pairs by altering the most important part of a search context.
We develop several strategies to alter the current query, resulting in new training data with varying degrees of difficulty.
arXiv Detail & Related papers (2024-07-04T08:08:33Z) - CART: A Generative Cross-Modal Retrieval Framework with Coarse-To-Fine Semantic Modeling [53.97609687516371]
Cross-modal retrieval aims to search for instances, which are semantically related to the query through the interaction of different modal data. Traditional solutions utilize a single-tower or dual-tower framework to explicitly compute the score between queries and candidates. We propose a generative cross-modal retrieval framework (CART) based on coarse-to-fine semantic modeling.
arXiv Detail & Related papers (2024-06-25T12:47:04Z) - CELA: Cost-Efficient Language Model Alignment for CTR Prediction [70.65910069412944]
Click-Through Rate (CTR) prediction holds a paramount position in recommender systems. Recent efforts have sought to mitigate these challenges by integrating Pre-trained Language Models (PLMs). We propose Cost-Efficient Language Model Alignment (CELA) for CTR prediction.
arXiv Detail & Related papers (2024-05-17T07:43:25Z) - Semantic Equivalence of e-Commerce Queries [6.232692545488813]
This paper introduces a framework to recognize and leverage query equivalence to enhance searcher and business outcomes.
The proposed approach addresses three key problems: mapping queries to vector representations of search intent, identifying nearest neighbor queries expressing equivalent or similar intent, and optimizing for user or business objectives.
arXiv Detail & Related papers (2023-08-07T18:40:13Z) - FineDiving: A Fine-grained Dataset for Procedure-aware Action Quality Assessment [93.09267863425492]
We argue that understanding both high-level semantics and internal temporal structures of actions in competitive sports videos is the key to making predictions accurate and interpretable.
We construct a new fine-grained dataset, called FineDiving, developed on diverse diving events with detailed annotations on action procedures.
arXiv Detail & Related papers (2022-04-07T17:59:32Z) - Utterance Rewriting with Contrastive Learning in Multi-turn Dialogue [22.103162555263143]
We introduce contrastive learning and multi-task learning to jointly model the problem.
Our proposed model achieves state-of-the-art performance on several public datasets.
arXiv Detail & Related papers (2022-03-22T10:13:27Z) - Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential Recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data-sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for Sequential Recommendation (CoSeRec).
arXiv Detail & Related papers (2021-08-14T07:15:25Z) - Improving Attention Mechanism with Query-Value Interaction [92.67156911466397]
We propose a query-value interaction function which can learn query-aware attention values.
Our approach can consistently improve the performance of many attention-based models.
arXiv Detail & Related papers (2020-10-08T05:12:52Z) - Learning an Effective Context-Response Matching Model with Self-Supervised Tasks for Retrieval-based Dialogues [88.73739515457116]
We introduce four self-supervised tasks including next session prediction, utterance restoration, incoherence detection and consistency discrimination.
We jointly train the PLM-based response selection model with these auxiliary tasks in a multi-task manner.
Experiment results indicate that the proposed auxiliary self-supervised tasks bring significant improvement for multi-turn response selection.
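The paper above trains the response-selection model jointly with auxiliary self-supervised tasks in a multi-task manner. The standard form of such a joint objective is the main loss plus a weighted sum of auxiliary losses; the sketch below shows that combination with illustrative weights (the specific weighting is an assumption, not taken from the paper):

```python
def multitask_loss(main_loss, aux_losses, weights):
    """Joint multi-task objective: the main (response-selection) loss
    plus a weighted sum of auxiliary self-supervised task losses."""
    return main_loss + sum(w * l for w, l in zip(weights, aux_losses))

# Four auxiliary tasks (e.g., next session prediction, utterance
# restoration, incoherence detection, consistency discrimination),
# each equally weighted here for illustration.
total = multitask_loss(1.0, [0.5, 0.25, 0.125, 0.125], [1.0, 1.0, 1.0, 1.0])
print(total)  # 2.0
```

In practice the weights are hyperparameters tuned so that the auxiliary tasks regularize the shared encoder without dominating the main objective.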
arXiv Detail & Related papers (2020-09-14T08:44:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.