Deep Pairwise Learning To Rank For Search Autocomplete
- URL: http://arxiv.org/abs/2108.04976v1
- Date: Wed, 11 Aug 2021 00:33:18 GMT
- Title: Deep Pairwise Learning To Rank For Search Autocomplete
- Authors: Kai Yuan, Da Kuang
- Abstract summary: We propose a novel context-aware neural-network-based pairwise ranker (DeepPLTR) to improve Autocomplete ranking.
Compared to the LambdaMART ranker, DeepPLTR shows a +3.90% Mean Reciprocal Rank (MRR) lift in offline evaluation, and yielded a +0.06% (p < 0.1) Gross Merchandise Value (GMV) lift in an online A/B experiment at Amazon.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autocomplete (a.k.a. "Query Auto-Completion", "AC") suggests full queries
based on a prefix typed by the customer, and has long been a core feature of
commercial search engines. In this paper, we propose a novel context-aware
neural-network-based pairwise ranker (DeepPLTR) to improve AC ranking. DeepPLTR
leverages contextual and behavioral features to rank queries by minimizing a
pairwise loss, based on a fully-connected neural network structure. Compared to
the LambdaMART ranker, DeepPLTR shows a +3.90% Mean Reciprocal Rank (MRR) lift in
offline evaluation, and yielded a +0.06% (p < 0.1) Gross Merchandise Value (GMV)
lift in an online A/B experiment at Amazon.
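The abstract gives enough to sketch the core training idea: a fully-connected scorer over each suggestion's contextual and behavioral features, trained by minimizing a pairwise loss and evaluated with MRR. Below is a minimal illustrative sketch in PyTorch, assuming a RankNet-style logistic pairwise loss; the abstract does not specify DeepPLTR's exact layer sizes, features, or loss formulation, so all names and dimensions here are hypothetical.

```python
# Minimal sketch of a pairwise neural ranker, assuming a RankNet-style
# logistic pairwise loss; DeepPLTR's actual architecture, features, and
# loss are not given in the abstract, so all sizes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PairwiseRanker(nn.Module):
    """Fully-connected scorer: maps one suggestion's contextual and
    behavioral feature vector to a single relevance score."""
    def __init__(self, num_features: int, hidden: int = 64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.mlp(x).squeeze(-1)

def pairwise_loss(s_pos: torch.Tensor, s_neg: torch.Tensor) -> torch.Tensor:
    # log(1 + exp(s_neg - s_pos)): penalizes pairs where the preferred
    # (e.g. clicked) suggestion is not scored above the other one.
    return F.softplus(s_neg - s_pos).mean()

def mean_reciprocal_rank(first_relevant_ranks: list[int]) -> float:
    # MRR over prefixes: average of 1/rank of the first relevant
    # suggestion (ranks are 1-indexed).
    return sum(1.0 / r for r in first_relevant_ranks) / len(first_relevant_ranks)

# Toy usage: 32 (preferred, other) suggestion pairs with 16 features each.
model = PairwiseRanker(num_features=16)
pos, neg = torch.randn(32, 16), torch.randn(32, 16)
loss = pairwise_loss(model(pos), model(neg))
loss.backward()
print(float(loss), mean_reciprocal_rank([1, 2, 4]))  # MRR ~ 0.583
```

A pairwise setup like this needs only preference pairs (for instance, the clicked suggestion versus a skipped one for the same prefix) rather than absolute relevance labels, which makes it a natural fit for training on click logs.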
Related papers
- LINKAGE: Listwise Ranking among Varied-Quality References for Non-Factoid QA Evaluation via LLMs [61.57691505683534]
Non-Factoid (NF) Question Answering (QA) is challenging to evaluate because of its diverse potential answers and the lack of an objective evaluation criterion.
Large Language Models (LLMs) have been used for NFQA evaluation due to their compelling performance on various NLP tasks.
We propose a novel listwise NFQA evaluation approach that utilizes LLMs to rank candidate answers within a list of reference answers sorted by descending quality.
arXiv Detail & Related papers (2024-09-23T06:42:21Z)
- Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation [65.16137964758612]
We explore the use of long-context capabilities in large language models to create synthetic reading comprehension data from entire books.
Our objective is to test the capabilities of LLMs to analyze, understand, and reason over problems that require a detailed comprehension of long spans of text.
arXiv Detail & Related papers (2024-05-31T20:15:10Z)
- Sequential Decision-Making for Inline Text Autocomplete [14.83046358936405]
We study the problem of improving inline autocomplete suggestions in text entry systems.
We use reinforcement learning to learn suggestion policies through repeated interactions with a target user.
arXiv Detail & Related papers (2024-03-21T22:33:16Z)
- Seasonality Based Reranking of E-commerce Autocomplete Using Natural Language Queries [15.37457156804212]
Query autocomplete (QAC), also known as typeahead, suggests a list of complete queries as the user types a prefix in the search box.
One of the goals of typeahead is to suggest relevant queries that are seasonally important to users.
We propose a neural network based natural language processing (NLP) algorithm to incorporate seasonality as a signal.
arXiv Detail & Related papers (2023-08-03T21:14:25Z)
- Unified Functional Hashing in Automatic Machine Learning [58.77232199682271]
We show that large efficiency gains can be obtained by employing a fast unified functional hash.
Our hash is "functional" in that it identifies equivalent candidates even if they are represented or coded differently.
We show dramatic improvements on multiple AutoML domains, including neural architecture search and algorithm discovery.
arXiv Detail & Related papers (2023-02-10T18:50:37Z)
- RankDNN: Learning to Rank for Few-shot Learning [70.49494297554537]
This paper introduces a new few-shot learning pipeline that casts relevance ranking for image retrieval as binary ranking relation classification.
It provides a new perspective on few-shot learning and is complementary to state-of-the-art methods.
arXiv Detail & Related papers (2022-11-28T13:59:31Z)
- Open-Set Automatic Target Recognition [52.27048031302509]
Automatic Target Recognition (ATR) is a category of computer vision algorithms that attempt to recognize targets in data obtained from different sensors.
Existing ATR algorithms are developed for the traditional closed-set setting, where training and testing share the same class distribution.
We propose an Open-set Automatic Target Recognition framework that enables open-set recognition capability for ATR algorithms.
arXiv Detail & Related papers (2022-11-10T21:28:24Z)
- DAAS: Differentiable Architecture and Augmentation Policy Search [107.53318939844422]
This work considers the possible coupling between neural architectures and data augmentation and proposes an effective algorithm that jointly searches for them.
Our approach achieves 97.91% accuracy on CIFAR-10 and 76.6% Top-1 accuracy on the ImageNet dataset, showing the outstanding performance of our search algorithm.
arXiv Detail & Related papers (2021-09-30T17:15:17Z)
- APRF-Net: Attentive Pseudo-Relevance Feedback Network for Query Categorization [12.634704014206294]
We propose a novel deep neural model named Attentive Pseudo-Relevance Feedback Network (APRF-Net) to enhance the representation of rare queries for query categorization.
Our results show that APRF-Net significantly improves query categorization, by 5.9% on the $F1@1$ score over the baselines, rising to an 8.2% improvement for rare queries.
arXiv Detail & Related papers (2021-04-23T02:34:08Z)
- AutoLR: Layer-wise Pruning and Auto-tuning of Learning Rates in Fine-tuning of Deep Networks [13.761920032156082]
Existing fine-tuning methods use a single learning rate over all layers.
We propose an algorithm that improves fine-tuning performance and reduces network complexity.
arXiv Detail & Related papers (2020-02-14T14:24:40Z)