Triangular Bidword Generation for Sponsored Search Auction
- URL: http://arxiv.org/abs/2101.11349v1
- Date: Wed, 27 Jan 2021 12:25:22 GMT
- Title: Triangular Bidword Generation for Sponsored Search Auction
- Authors: Zhenqiao Song, Jiaze Chen, Hao Zhou and Lei Li
- Abstract summary: We propose a triangular bidword generation model (TRIDENT), which takes the high-quality data of paired <query, advertisement> as a supervision signal.
Our proposed model is simple yet effective: by using bidword as the bridge between search query and advertisement, the generation of search query, advertisement and bidword can be jointly learned.
- Score: 15.260540147678396
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sponsored search auction is a crucial component of modern search engines. It
requires a set of candidate bidwords that advertisers can place bids on.
Existing methods generate bidwords from search queries or advertisement
content. However, they suffer from the data noise in <query, bidword> and
<advertisement, bidword> pairs. In this paper, we propose a triangular bidword
generation model (TRIDENT), which takes the high-quality data of paired <query,
advertisement> as a supervision signal to indirectly guide the bidword
generation process. Our proposed model is simple yet effective: by using
bidword as the bridge between search query and advertisement, the generation of
search query, advertisement and bidword can be jointly learned in the
triangular training framework. This alleviates the problem that the training
data of bidword may be noisy. Experimental results, including automatic and
human evaluations, show that our proposed TRIDENT can generate relevant and
diverse bidwords for both search queries and advertisements. Our evaluation on
online real data validates the effectiveness of the TRIDENT's generated
bidwords for product search.
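The triangular objective described in the abstract can be sketched as a joint loss in which the bidword bridges query and advertisement. The function and toy loss values below are illustrative assumptions, not the authors' implementation; in TRIDENT each term would be a seq2seq generation loss.

```python
# Conceptual sketch (not the authors' code) of a triangular objective:
# the bidword b bridges query q and advertisement a, so clean <q, a>
# supervision indirectly constrains the noisier bidword directions.
def triangular_loss(loss_q2b: float, loss_b2a: float, loss_a2q: float) -> float:
    """Sum the three directional generation losses that close the
    q -> b -> a -> q triangle; jointly minimizing them lets the
    high-quality <q, a> pairs regularize bidword generation."""
    return loss_q2b + loss_b2a + loss_a2q

# Toy values standing in for per-direction cross-entropy losses.
total = triangular_loss(0.9, 1.2, 0.7)
```

Minimizing the three directions jointly is what lets the clean <query, advertisement> pairs act as an indirect supervision signal for the noisy bidword data.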
Related papers
- Query-oriented Data Augmentation for Session Search [71.84678750612754]
We propose query-oriented data augmentation to enrich search logs and strengthen session modeling.
We generate supplemental training pairs by altering the most important part of a search context.
We develop several strategies to alter the current query, resulting in new training data with varying degrees of difficulty.
arXiv Detail & Related papers (2024-07-04T08:08:33Z)
- Large Search Model: Redefining Search Stack in the Era of LLMs [63.503320030117145]
We introduce a novel conceptual framework called the large search model, which redefines the conventional search stack by unifying search tasks with one large language model (LLM).
All tasks are formulated as autoregressive text generation problems, allowing for the customization of tasks through the use of natural language prompts.
This proposed framework capitalizes on the strong language understanding and reasoning capabilities of LLMs, offering the potential to enhance search result quality while simultaneously simplifying the existing cumbersome search stack.
arXiv Detail & Related papers (2023-10-23T05:52:09Z)
- Align before Search: Aligning Ads Image to Text for Accurate Cross-Modal Sponsored Search [27.42717207107]
Cross-Modal sponsored search displays multi-modal advertisements (ads) when consumers look for desired products by natural language queries in search engines.
The ability to align ads-specific information in both images and texts is crucial for accurate and flexible sponsored search.
We propose a simple alignment network for explicitly mapping fine-grained visual parts in ads images to the corresponding text.
arXiv Detail & Related papers (2023-09-28T03:43:57Z)
- SSP: Self-Supervised Post-training for Conversational Search [63.28684982954115]
We propose SSP, a new post-training paradigm with three self-supervised tasks to efficiently initialize the conversational search model.
To verify the effectiveness of our proposed method, we apply the conversational encoder post-trained with SSP to the conversational search task on two benchmark datasets: CAsT-19 and CAsT-20.
arXiv Detail & Related papers (2023-07-02T13:36:36Z)
- Multiview Identifiers Enhanced Generative Retrieval [78.38443356800848]
Generative retrieval generates identifier strings of passages as the retrieval target.
We propose a new type of identifier, synthetic identifiers, that are generated based on the content of a passage.
Our proposed approach performs the best in generative retrieval, demonstrating its effectiveness and robustness.
arXiv Detail & Related papers (2023-05-26T06:50:21Z)
- Deep Reinforcement Agent for Efficient Instant Search [14.086339486783018]
We propose to address the load issue by identifying tokens that are semantically more salient towards retrieving relevant documents.
We train a reinforcement agent that interacts directly with the search engine and learns to predict the word's importance.
A novel evaluation framework is presented to study the trade-off between the number of triggered searches and the system's performance.
arXiv Detail & Related papers (2022-03-17T22:47:15Z)
- Neural Extractive Search [53.15076679818303]
Domain experts often need to extract structured information from large corpora.
We advocate for a search paradigm called "extractive search", in which a search query is enriched with capture-slots.
We show how the recall can be improved using neural retrieval and alignment.
arXiv Detail & Related papers (2021-06-08T18:03:31Z)
- Diversity driven Query Rewriting in Search Advertising [1.5289756643078838]
Generative retrieval models have been shown to be effective at the task of generating such query rewrites.
We introduce CLOVER, a framework to generate both high-quality and diverse rewrites.
We empirically show the effectiveness of our proposed approach through offline experiments on search queries across geographies spanning three major languages.
arXiv Detail & Related papers (2021-06-07T17:30:45Z)
- ProphetNet-Ads: A Looking Ahead Strategy for Generative Retrieval Models in Sponsored Search Engine [123.65646903493614]
Generative retrieval models generate outputs token by token along a path of the target library's prefix tree (Trie).
We analyze these problems and propose a looking ahead strategy for generative retrieval models named ProphetNet-Ads.
Compared with the recently proposed Trie-based LSTM generative retrieval model, our single-model and integrated results improve recall by 15.58% and 18.8% respectively with beam size 5.
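Trie-constrained generation, as used by such generative retrieval models, can be sketched as follows. The function names and the toy phrase library are illustrative assumptions, not ProphetNet-Ads itself: at each decoding step, only tokens that extend a valid prefix in the target library may be emitted.

```python
# Hypothetical sketch of Trie-constrained decoding for generative
# retrieval: candidate next tokens are restricted to children of the
# current prefix node, so every generated sequence exists in the library.
def build_trie(phrases):
    """Build a nested-dict trie over whitespace-tokenized phrases;
    "<eos>" marks the end of a complete phrase."""
    root = {}
    for phrase in phrases:
        node = root
        for token in phrase.split():
            node = node.setdefault(token, {})
        node["<eos>"] = {}
    return root

def allowed_next(trie, prefix_tokens):
    """Return the sorted set of tokens a decoder may emit after
    the given prefix (assumes the prefix is valid in the trie)."""
    node = trie
    for tok in prefix_tokens:
        node = node[tok]
    return sorted(node.keys())

# Toy target library of bidword-like phrases.
trie = build_trie(["buy running shoes", "buy running socks"])
candidates = allowed_next(trie, ["buy", "running"])
```

A beam search over this trie would expand only these allowed candidates at each step; the "looking ahead" strategy in ProphetNet-Ads additionally scores future trie branches, which this sketch does not model.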
arXiv Detail & Related papers (2020-10-21T07:03:20Z)
- Query-Variant Advertisement Text Generation with Association Knowledge [21.18443320935013]
Traditional text generation methods tend to focus on high-frequency, general search needs.
We propose a query-variant advertisement text generation task that aims to generate candidate advertisement texts for different web search queries.
Our model can make use of various personalized needs in queries and generate query-variant advertisement texts.
arXiv Detail & Related papers (2020-04-14T12:04:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.