High Quality Related Search Query Suggestions using Deep Reinforcement
Learning
- URL: http://arxiv.org/abs/2108.04452v1
- Date: Tue, 10 Aug 2021 05:22:32 GMT
- Title: High Quality Related Search Query Suggestions using Deep Reinforcement
Learning
- Authors: Praveen Kumar Bodigutla
- Abstract summary: "High Quality Related Search Query Suggestions" task aims at recommending search queries which are real, accurate, diverse, relevant and engaging.
We train a Deep Reinforcement Learning model to predict the query a user would enter next.
The reward signal is composed of long-term session-based user feedback, syntactic relatedness and estimated naturalness of generated query.
- Score: 0.15229257192293202
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: "High Quality Related Search Query Suggestions" task aims at recommending
search queries which are real, accurate, diverse, relevant and engaging.
Obtaining large amounts of query-quality human annotations is expensive. Prior
work on supervised query suggestion models suffered from selection and exposure
bias, and relied on sparse and noisy immediate user-feedback (e.g., clicks),
leading to low quality suggestions. Reinforcement Learning techniques employed
to reformulate a query using terms from search results, have limited
scalability to large-scale industry applications. To recommend high quality
related search queries, we train a Deep Reinforcement Learning model to predict
the query a user would enter next. The reward signal is composed of long-term
session-based user feedback, syntactic relatedness and estimated naturalness of
generated query. Over the baseline supervised model, our proposed approach
achieves a significant relative improvement in terms of recommendation
diversity (3%), down-stream user-engagement (4.2%) and per-sentence word
repetitions (82%).
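As a rough illustration of how such a multi-component reward could be combined, the sketch below mixes hypothetical session-feedback, syntactic-relatedness and naturalness scores with assumed weights; it is not the paper's published formulation.

```python
# Hypothetical sketch of a composite reward for a next-query generation policy.
# The component scorers and weights below are illustrative assumptions, not the
# paper's implementation.

def composite_reward(generated_query: str,
                     session_feedback: float,      # long-term session-based feedback, e.g. in [0, 1]
                     syntactic_relatedness: float, # similarity with the source query, in [0, 1]
                     naturalness: float,           # estimated probability the query is human-like, in [0, 1]
                     weights=(0.5, 0.3, 0.2)) -> float:
    """Weighted combination of the three reward components."""
    w_feedback, w_related, w_natural = weights
    return (w_feedback * session_feedback
            + w_related * syntactic_relatedness
            + w_natural * naturalness)

# Example: a generated suggestion that users engaged with, is related, and reads naturally.
r = composite_reward("red running shoes for women",
                     session_feedback=0.8,
                     syntactic_relatedness=0.6,
                     naturalness=0.9)
print(round(r, 3))  # 0.76
```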
Related papers
- Hierarchical Reinforcement Learning for Temporal Abstraction of Listwise Recommendation [51.06031200728449]
We propose a novel framework called mccHRL to provide different levels of temporal abstraction on listwise recommendation.
Within the hierarchical framework, the high-level agent studies the evolution of user perception, while the low-level agent produces the item selection policy.
Experiments show significant performance improvements for our method compared with several well-known baselines.
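A toy sketch of the two-level idea, with an invented high-level "perception" summary and a low-level item selector; all class and method names are assumptions, not mccHRL's actual interfaces.

```python
import random

# Illustrative two-level agent in the spirit of a hierarchical listwise
# recommender: a high-level agent summarizes session-level user perception,
# and a low-level agent picks items conditioned on that summary.

class HighLevelAgent:
    def perceive(self, session_history):
        # Toy "user perception": average rating of items seen so far.
        if not session_history:
            return 0.0
        return sum(session_history) / len(session_history)

class LowLevelAgent:
    def select_items(self, perception, candidates, k=3):
        # Prefer candidates whose score is closest to the perceived preference.
        return sorted(candidates, key=lambda c: abs(c["score"] - perception))[:k]

high, low = HighLevelAgent(), LowLevelAgent()
history = [0.2, 0.6, 0.7]
candidates = [{"id": i, "score": random.random()} for i in range(10)]
perception = high.perceive(history)
print(low.select_items(perception, candidates))
```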
arXiv Detail & Related papers (2024-09-11T17:01:06Z)
- Multimodal Reranking for Knowledge-Intensive Visual Question Answering [77.24401833951096]
We introduce a multi-modal reranker to improve the ranking quality of knowledge candidates for answer generation.
Experiments on OK-VQA and A-OKVQA show that the multi-modal reranker trained with distant supervision provides consistent improvements.
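A minimal sketch of reranking knowledge candidates by fusing two relevance signals, assuming precomputed question-side and image-side scores; the learned multi-modal reranker in the paper is not reproduced here.

```python
# Toy reranking of retrieved knowledge candidates by a fused relevance score.
# The scoring inputs are stand-ins, not the paper's learned reranker.

def rerank(candidates, question_relevance, image_relevance, alpha=0.5):
    """Sort candidates by a weighted sum of question- and image-side relevance."""
    scored = [
        (alpha * question_relevance[c] + (1 - alpha) * image_relevance[c], c)
        for c in candidates
    ]
    return [c for _, c in sorted(scored, reverse=True)]

candidates = ["passage_a", "passage_b", "passage_c"]
q_rel = {"passage_a": 0.9, "passage_b": 0.4, "passage_c": 0.7}
i_rel = {"passage_a": 0.2, "passage_b": 0.8, "passage_c": 0.6}
print(rerank(candidates, q_rel, i_rel))  # ['passage_c', 'passage_b', 'passage_a']
```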
arXiv Detail & Related papers (2024-07-17T02:58:52Z)
- CLARINET: Augmenting Language Models to Ask Clarification Questions for Retrieval [52.134133938779776]
We present CLARINET, a system that asks informative clarification questions by choosing questions whose answers would maximize certainty in the correct candidate.
Our approach works by augmenting a large language model (LLM) to condition on a retrieval distribution, finetuning end-to-end to generate the question that would have maximized the rank of the true candidate at each turn.
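The question-selection idea can be illustrated with a generic expected-information-gain criterion, as in the sketch below; the answer model and candidate distribution are invented, and CLARINET's LLM-conditioned retrieval setup is not reproduced.

```python
import math

# Illustrative selection of a clarification question by expected reduction in
# uncertainty over retrieval candidates. This is a generic sketch of "ask the
# question that most increases certainty", not CLARINET's implementation.

def entropy(p):
    return -sum(x * math.log(x) for x in p if x > 0)

def expected_entropy_after(question, answer_model):
    # answer_model[question] -> list of (answer_prob, posterior_over_candidates)
    return sum(a_prob * entropy(post) for a_prob, post in answer_model[question])

def best_question(questions, answer_model):
    return min(questions, key=lambda q: expected_entropy_after(q, answer_model))

answer_model = {
    "Is it about hardware?": [(0.5, [0.8, 0.1, 0.1]), (0.5, [0.0, 0.5, 0.5])],
    "Is it recent?":         [(0.5, [0.4, 0.3, 0.3]), (0.5, [0.4, 0.3, 0.3])],
}
print(best_question(list(answer_model), answer_model))  # the more informative question
```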
arXiv Detail & Related papers (2024-04-28T18:21:31Z)
- Learning to Retrieve for Job Matching [22.007634436648427]
We discuss applying learning-to-retrieve technology to enhance LinkedIn's job search and recommendation systems.
We leverage confirmed hire data to construct a graph that evaluates a seeker's qualification for a job, and utilize learned links for retrieval.
In addition to a solution based on a conventional inverted index, we developed an on-GPU solution capable of supporting both KNN and term matching efficiently.
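A toy hybrid-retrieval sketch along these lines, combining simple term overlap with a nearest-neighbour similarity; the index structures, scores and weights are assumptions and do not reflect the production system described in the paper.

```python
# Toy hybrid retrieval combining exact term matching with nearest-neighbour
# search over dense vectors.

def term_match_score(query_terms, doc_terms):
    return len(set(query_terms) & set(doc_terms))

def knn_score(query_vec, doc_vec):
    # Negative squared Euclidean distance as a similarity.
    return -sum((q - d) ** 2 for q, d in zip(query_vec, doc_vec))

def hybrid_retrieve(query_terms, query_vec, corpus, k=2, alpha=0.5):
    scored = []
    for doc in corpus:
        s = (alpha * term_match_score(query_terms, doc["terms"])
             + (1 - alpha) * knn_score(query_vec, doc["vec"]))
        scored.append((s, doc["id"]))
    return [doc_id for _, doc_id in sorted(scored, reverse=True)[:k]]

corpus = [
    {"id": "job1", "terms": ["python", "ml"], "vec": [0.9, 0.1]},
    {"id": "job2", "terms": ["java", "backend"], "vec": [0.2, 0.8]},
    {"id": "job3", "terms": ["python", "backend"], "vec": [0.5, 0.5]},
]
print(hybrid_retrieve(["python", "ml"], [0.8, 0.2], corpus))  # ['job1', 'job3']
```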
arXiv Detail & Related papers (2024-02-21T00:05:25Z)
- A Deep Reinforcement Learning Approach for Interactive Search with Sentence-level Feedback [12.712416630402119]
Interactive search can provide a better experience by incorporating interaction feedback from the users.
Existing state-of-the-art (SOTA) systems use reinforcement learning (RL) models to incorporate the interactions.
Yet such feedback requires extensive RL action space exploration and large amounts of annotated data.
This work proposes a new deep Q-learning (DQ) approach, DQrank.
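The generic Q-learning update that such approaches build on can be sketched as below, with user feedback on a shown result serving as the reward; DQrank's deep, BERT-based state and action representations are not modelled here.

```python
import random
from collections import defaultdict

# Generic Q-learning loop with user feedback acting as the reward signal.
# Tabular and toy-sized; only the update rule carries over to deep variants.

Q = defaultdict(float)          # Q[(state, action)]
alpha, gamma, epsilon = 0.1, 0.9, 0.2

def choose_action(state, actions):
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state, next_actions):
    best_next = max(Q[(next_state, a)] for a in next_actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One simulated interaction round: show a document, observe feedback as reward.
actions = ["doc_a", "doc_b", "doc_c"]
state, next_state = "query:laptops", "query:laptops+clicked_doc_a"
a = choose_action(state, actions)
feedback_reward = 1.0 if a == "doc_a" else 0.0   # pretend the user liked doc_a
update(state, a, feedback_reward, next_state, actions)
print(a, Q[(state, a)])
```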
arXiv Detail & Related papers (2023-10-03T18:45:21Z)
- Beyond Semantics: Learning a Behavior Augmented Relevance Model with Self-supervised Learning [25.356999988217325]
Relevance modeling aims to locate desirable items for corresponding queries.
Auxiliary query-item interactions extracted from user historical behavior data can provide hints that further reveal users' search intents.
Our model builds multi-level co-attention for distilling coarse-grained and fine-grained semantic representations from both neighbor and target views.
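A toy version of attention-weighted pooling between a query vector and behaviour-derived neighbour vectors is sketched below; the multi-level, learned co-attention in the paper is considerably richer than this.

```python
import math

# Toy attention between a query representation and representations derived
# from user-behavior neighbours, with fixed example vectors.

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attend(query_vec, neighbour_vecs):
    # Score each neighbour by dot product with the query, then pool.
    scores = [sum(q * n for q, n in zip(query_vec, nv)) for nv in neighbour_vecs]
    weights = softmax(scores)
    dim = len(query_vec)
    return [sum(w * nv[i] for w, nv in zip(weights, neighbour_vecs)) for i in range(dim)]

query_vec = [0.9, 0.1, 0.0]
behaviour_neighbours = [[0.8, 0.2, 0.0], [0.1, 0.1, 0.9]]
print([round(x, 3) for x in attend(query_vec, behaviour_neighbours)])
```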
arXiv Detail & Related papers (2023-08-10T06:52:53Z)
- Improving Sequential Query Recommendation with Immediate User Feedback [6.925738064847176]
We propose an algorithm for next query recommendation in interactive data exploration settings.
We conduct a large-scale experimental study using log files from a popular online literature discovery service.
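One generic way to fold immediate feedback into a recommender online is a bandit-style update, sketched below; this only illustrates the feedback loop and is not the paper's algorithm.

```python
import random
from collections import defaultdict

# Minimal bandit-style loop: recommend a next query, observe immediate
# feedback (e.g. whether the user ran it), and update the estimate.

counts = defaultdict(int)
values = defaultdict(float)   # running mean reward per candidate query

def recommend(candidates, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda q: values[q])

def record_feedback(query, reward):
    counts[query] += 1
    values[query] += (reward - values[query]) / counts[query]

candidates = ["papers on RL", "papers on bandits", "papers on ranking"]
for _ in range(100):
    q = recommend(candidates)
    reward = 1.0 if q == "papers on bandits" else 0.0   # simulated user preference
    record_feedback(q, reward)
print(max(candidates, key=lambda q: values[q]))
```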
arXiv Detail & Related papers (2022-05-12T18:19:24Z)
- Counterfactual Learning To Rank for Utility-Maximizing Query Autocompletion [40.31426350180036]
We propose a new approach that explicitly optimizes the query suggestions for downstream retrieval performance.
We formulate this as a problem of ranking a set of rankings, where each query suggestion is represented by the downstream item ranking it produces.
We then present a learning method that ranks query suggestions by the quality of their item rankings.
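The "ranking a set of rankings" idea can be illustrated by scoring each suggestion with the DCG of the item ranking it produces, as in the sketch below; the paper's counterfactual learning from logged feedback is not reproduced, and the relevance numbers are invented.

```python
import math

# Each query suggestion is represented by the downstream item ranking it would
# produce; suggestions are ordered by the quality (DCG) of that ranking.

def dcg(relevances):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def rank_suggestions(suggestion_to_item_relevances):
    return sorted(suggestion_to_item_relevances,
                  key=lambda s: dcg(suggestion_to_item_relevances[s]),
                  reverse=True)

downstream = {
    "running shoes":        [3, 2, 0],   # relevance of items retrieved for each completion
    "running shoes womens": [3, 3, 2],
    "running short":        [1, 0, 0],
}
print(rank_suggestions(downstream))  # best downstream ranking first
```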
arXiv Detail & Related papers (2022-04-22T21:40:51Z)
- Choosing the Best of Both Worlds: Diverse and Novel Recommendations through Multi-Objective Reinforcement Learning [68.45370492516531]
We introduce Scalarized Multi-Objective Reinforcement Learning (SMORL) for the Recommender Systems (RS) setting.
The SMORL agent augments standard recommendation models with additional RL layers that push it to simultaneously satisfy three principal objectives: accuracy, diversity, and novelty of recommendations.
Our experimental results on two real-world datasets reveal a substantial increase in aggregate diversity, a moderate increase in accuracy, reduced repetitiveness of recommendations, and demonstrate the importance of reinforcing diversity and novelty as complementary objectives.
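The scalarization step at the heart of such a setup can be sketched as a weighted sum of per-objective rewards; the weights and reward definitions below are illustrative assumptions, not SMORL's.

```python
# Scalarizing multiple recommendation objectives into one reward signal.
# Weights and reward semantics are assumptions for the sketch.

def scalarized_reward(accuracy_r: float,
                      diversity_r: float,
                      novelty_r: float,
                      weights=(1.0, 0.5, 0.5)) -> float:
    w_acc, w_div, w_nov = weights
    return w_acc * accuracy_r + w_div * diversity_r + w_nov * novelty_r

# Example step: the clicked item was relevant, fairly different from the rest
# of the list, and previously unseen by the user.
print(scalarized_reward(accuracy_r=1.0, diversity_r=0.4, novelty_r=1.0))  # 1.7
```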
arXiv Detail & Related papers (2021-10-28T13:22:45Z)
- Information Directed Reward Learning for Reinforcement Learning [64.33774245655401]
We learn a model of the reward function that allows standard RL algorithms to achieve high expected return with as few expert queries as possible.
In contrast to prior active reward learning methods designed for specific types of queries, IDRL naturally accommodates different query types.
We support our findings with extensive evaluations in multiple environments and with different types of queries.
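A much-simplified illustration of informative query selection is uncertainty sampling over an ensemble reward model, sketched below; IDRL's actual criterion (information gain about the returns of plausibly optimal policies) is more specific than this.

```python
import statistics

# Ask the expert about the state whose predicted reward is most uncertain
# under an ensemble reward model. A simplified stand-in for information-
# directed query selection.

def pick_query(states, ensemble_predictions):
    # ensemble_predictions[state] -> reward predictions from ensemble members
    return max(states, key=lambda s: statistics.pvariance(ensemble_predictions[s]))

states = ["s1", "s2", "s3"]
ensemble_predictions = {
    "s1": [0.1, 0.1, 0.2],
    "s2": [0.9, 0.1, 0.5],   # members disagree the most here
    "s3": [0.4, 0.5, 0.4],
}
print(pick_query(states, ensemble_predictions))  # 's2'
```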
arXiv Detail & Related papers (2021-02-24T18:46:42Z)
- Session-Aware Query Auto-completion using Extreme Multi-label Ranking [61.753713147852125]
We take the novel approach of modeling session-aware query auto-completion as an eXtreme Multi-label Ranking (XMR) problem.
We adapt a popular XMR algorithm for this purpose by proposing several modifications to the key steps in the algorithm.
Our approach meets the stringent latency requirements for auto-complete systems while leveraging session information in making suggestions.
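The XMR framing can be illustrated by treating each candidate completion as a label and scoring labels against prefix-and-session features, as in the brute-force sketch below; production XMR methods rely on tree-structured label indexing to meet latency budgets.

```python
# Toy view of query auto-completion as extreme multi-label ranking: each
# candidate completion is a "label", scored against features of the current
# prefix and session. Feature sets and labels here are invented examples.

def featurize(prefix, session_queries):
    feats = set(prefix.split())
    for q in session_queries:
        feats.update(q.split())
    return feats

def rank_completions(prefix, session_queries, label_features, k=2):
    feats = featurize(prefix, session_queries)
    scored = [(len(feats & label_features[label]), label) for label in label_features]
    return [label for _, label in sorted(scored, reverse=True)[:k]]

label_features = {
    "cheap flights to tokyo": {"cheap", "flights", "tokyo", "travel"},
    "cheap hotels in tokyo":  {"cheap", "hotels", "tokyo", "travel"},
    "flight status":          {"flight", "status"},
}
print(rank_completions("cheap fl", ["tokyo travel tips"], label_features))
```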
arXiv Detail & Related papers (2020-12-09T17:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.