Exploring Query Understanding for Amazon Product Search
- URL: http://arxiv.org/abs/2408.02215v1
- Date: Mon, 05 Aug 2024 03:33:11 GMT
- Title: Exploring Query Understanding for Amazon Product Search
- Authors: Chen Luo, Xianfeng Tang, Hanqing Lu, Yaochen Xie, Hui Liu, Zhenwei Dai, Limeng Cui, Ashutosh Joshi, Sreyashi Nag, Yang Li, Zhen Li, Rahul Goutam, Jiliang Tang, Haiyang Zhang, Qi He
- Abstract summary: We study how query understanding-based ranking features influence the ranking process.
We propose a query understanding-based multi-task learning framework for ranking.
We present our studies and investigations using the real-world system on Amazon Search.
- Score: 62.53282527112405
- Abstract: Online shopping platforms, such as Amazon, offer services to billions of people worldwide. Unlike web search or other search engines, product search engines have unique characteristics, primarily short queries that are mostly combinations of product attributes, and a structured product search space. The uniqueness of product search underscores the crucial importance of the query understanding component. However, there are limited studies exploring this impact within real-world product search engines. In this work, we aim to bridge this gap by conducting a comprehensive study and sharing our year-long journey investigating how the query understanding service impacts Amazon Product Search. First, we explore how query understanding-based ranking features influence the ranking process. Next, we delve into how the query understanding system contributes to understanding the performance of a ranking model. Building on the insights gained from our study on the evaluation of the query understanding-based ranking model, we propose a query understanding-based multi-task learning framework for ranking. We present our studies and investigations using the real-world system on Amazon Search.
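The abstract gives no implementation details of the proposed framework, but a query understanding-based multi-task ranking model is typically realized as a shared encoder with one head for the ranking objective and auxiliary heads for query understanding tasks (e.g., query intent or product-type prediction). The PyTorch snippet below is a minimal sketch under those assumptions; every module name, dimension, and auxiliary task is hypothetical rather than taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskRanker(nn.Module):
    """Shared query/product encoder with a ranking head plus auxiliary
    query-understanding heads. All dimensions, module names, and auxiliary
    tasks are hypothetical; this is not the paper's actual model."""

    def __init__(self, emb_dim=128, hidden=256, n_intents=16, n_product_types=64):
        super().__init__()
        self.encoder = nn.Sequential(            # shared representation
            nn.Linear(emb_dim * 2, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        self.rank_head = nn.Linear(hidden, 1)                 # main task: ranking score
        self.intent_head = nn.Linear(hidden, n_intents)       # auxiliary: query intent
        self.ptype_head = nn.Linear(hidden, n_product_types)  # auxiliary: product type

    def forward(self, query_emb, product_emb):
        h = self.encoder(torch.cat([query_emb, product_emb], dim=-1))
        return self.rank_head(h).squeeze(-1), self.intent_head(h), self.ptype_head(h)

def multitask_loss(score, intent_logits, ptype_logits,
                   click_label, intent_label, ptype_label, aux_weight=0.3):
    """Ranking loss plus down-weighted query-understanding losses."""
    rank_loss = F.binary_cross_entropy_with_logits(score, click_label.float())
    aux_loss = (F.cross_entropy(intent_logits, intent_label)
                + F.cross_entropy(ptype_logits, ptype_label))
    return rank_loss + aux_weight * aux_loss
```

Sharing the encoder lets the ranking head benefit from the supervision of the query-understanding tasks, which is the general intuition behind multi-task ranking setups of this kind.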
Related papers
- When Search Engine Services meet Large Language Models: Visions and Challenges [53.32948540004658]
This paper conducts an in-depth examination of how integrating Large Language Models with search engines can mutually benefit both technologies.
We focus on two main areas: using search engines to improve LLMs (Search4LLM) and enhancing search engine functions using LLMs (LLM4Search).
arXiv Detail & Related papers (2024-06-28T03:52:13Z)
- OmniSearchSage: Multi-Task Multi-Entity Embeddings for Pinterest Search [2.917688415599187]
We present OmniSearchSage, a versatile and scalable system for understanding search queries, pins, and products for Pinterest search.
We jointly learn a unified query embedding coupled with pin and product embeddings, leading to improvements of >8% in relevance, >7% in engagement, and >5% in ads CTR.
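The summary above says only that a unified query embedding is learned jointly with pin and product embeddings. A common recipe for training such multi-entity embeddings is an in-batch contrastive (softmax) objective over query-entity pairs; the sketch below illustrates that generic recipe and is not Pinterest's actual training code. The temperature value and the idea of summing per-entity losses are assumptions.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(query_emb, entity_emb, temperature=0.07):
    """Each query's positive entity sits at the same batch index; every other
    entity in the batch serves as a negative. (Generic recipe, not the
    OmniSearchSage objective.)"""
    q = F.normalize(query_emb, dim=-1)
    e = F.normalize(entity_emb, dim=-1)
    logits = q @ e.t() / temperature                    # [batch, batch] similarities
    labels = torch.arange(q.size(0), device=q.device)   # diagonal entries are positives
    return F.cross_entropy(logits, labels)

# One query tower can be trained jointly against several entity towers, e.g.
# loss = in_batch_contrastive_loss(q, pin_emb) + in_batch_contrastive_loss(q, product_emb)
```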
arXiv Detail & Related papers (2024-04-25T00:10:25Z)
- STaRK: Benchmarking LLM Retrieval on Textual and Relational Knowledge Bases [93.96463520716759]
We develop STaRK, a large-scale semi-structured retrieval benchmark on Textual and Relational Knowledge Bases.
Our benchmark covers three domains: product search, academic paper search, and queries in precision medicine.
We design a novel pipeline to synthesize realistic user queries that integrate diverse relational information and complex textual properties.
arXiv Detail & Related papers (2024-04-19T22:54:54Z)
- Que2Engage: Embedding-based Retrieval for Relevant and Engaging Products at Facebook Marketplace [15.054431410052851]
We present Que2Engage, a search EBR (embedding-based retrieval) system built to bridge the gap between retrieval and ranking for end-to-end optimization.
We show the effectiveness of our approach via a multitask evaluation framework and thorough baseline comparisons and ablation studies.
arXiv Detail & Related papers (2023-02-21T23:10:16Z)
- Graph Enhanced BERT for Query Understanding [55.90334539898102]
Query understanding plays a key role in exploring users' search intents and helping users locate their most desired information.
In recent years, pre-trained language models (PLMs) have advanced various natural language processing tasks.
We propose a novel graph-enhanced pre-training framework, GE-BERT, which can leverage both query content and the query graph.
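GE-BERT is described only as leveraging both query content and the query graph. A common way to combine the two signals is to fuse a text encoder's query representation with an aggregation over its neighbors in a query graph (e.g., co-click queries). The toy sketch below shows that fusion pattern, with a plain linear projection standing in for a BERT encoder; it illustrates the general idea and is not the GE-BERT architecture.

```python
import torch
import torch.nn as nn

class GraphEnhancedQueryEncoder(nn.Module):
    """Fuses a query's text representation with messages aggregated from its
    query-graph neighbors. Purely illustrative; not the GE-BERT model."""

    def __init__(self, text_dim=128, hidden=128):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)   # stand-in for a BERT encoder
        self.graph_proj = nn.Linear(hidden, hidden)    # one mean-aggregation graph layer

    def forward(self, query_text_emb, adjacency):
        # query_text_emb: [num_queries, text_dim] precomputed text embeddings
        # adjacency:      [num_queries, num_queries] row-normalized query-graph matrix
        h = torch.relu(self.text_proj(query_text_emb))
        neighbor_msg = adjacency @ h                    # aggregate neighbor representations
        return h + torch.relu(self.graph_proj(neighbor_msg))  # fuse text and graph signals
```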
arXiv Detail & Related papers (2022-04-03T16:50:30Z)
- Exposing Query Identification for Search Transparency [69.06545074617685]
We explore the feasibility of approximate exposing query identification (EQI) as a retrieval task by reversing the role of queries and documents in two classes of search systems.
We derive an evaluation metric to measure the quality of a ranking of exposing queries, and conduct an empirical analysis focusing on various practical aspects of approximate EQI.
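To make the task concrete: exposing query identification reverses the usual retrieval direction, asking which queries would cause a search system to expose (rank highly) a given document. The brute-force scan below only illustrates the task definition; the paper studies approximate methods, and `search_fn` is an assumed interface, not an API from the paper.

```python
def exposing_queries(doc_id, candidate_queries, search_fn, k=10):
    """Return (query, rank) pairs for candidate queries whose top-k results
    contain the given document. `search_fn(query)` is assumed to return a
    ranked list of document ids."""
    exposed = []
    for query in candidate_queries:
        top_k = search_fn(query)[:k]
        if doc_id in top_k:
            exposed.append((query, top_k.index(doc_id) + 1))
    # queries that surface the document at the best (lowest) rank come first
    return sorted(exposed, key=lambda pair: pair[1])
```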
arXiv Detail & Related papers (2021-10-14T20:19:27Z)
- Online Learning of Optimally Diverse Rankings [63.62764375279861]
We propose an algorithm, LDR, that efficiently learns the optimal list based only on users' feedback.
We show that after $T$ queries, the regret of LDR scales as $O((N-L)\log T)$, where $N$ is the total number of items.
arXiv Detail & Related papers (2021-09-13T12:13:20Z)
- Supporting search engines with knowledge and context [1.0152838128195467]
In the first part of this thesis, we study how to make structured knowledge more accessible to the user.
In the second part of this thesis, we study how to improve interactive knowledge gathering.
In the final part of this thesis, we focus on search engine support for professional writers in the news domain.
arXiv Detail & Related papers (2021-02-12T20:28:25Z)
This list is automatically generated from the titles and abstracts of the papers listed on this site.