Query Abandonment Prediction with Recurrent Neural Models of Mouse Cursor Movements
- URL: http://arxiv.org/abs/2101.09066v1
- Date: Fri, 22 Jan 2021 11:57:04 GMT
- Title: Query Abandonment Prediction with Recurrent Neural Models of Mouse Cursor Movements
- Authors: Lukas Brückner, Ioannis Arapakis, and Luis A. Leiva
- Abstract summary: We show that mouse cursor movements make a valuable, low-cost behavioral signal that can discriminate good and bad abandonment.
Our results can help search providers to gauge user satisfaction for queries without clicks and ultimately contribute to a better understanding of search engine performance.
- Score: 10.088906689243768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most successful search queries do not result in a click if the user can
satisfy their information needs directly on the SERP. Modeling query
abandonment in the absence of click-through data is challenging because search
engines must rely on other behavioral signals to understand the underlying
search intent. We show that mouse cursor movements make a valuable, low-cost
behavioral signal that can discriminate good and bad abandonment. We model
mouse movements on SERPs using recurrent neural nets and explore several data
representations that do not rely on expensive hand-crafted features and do not
depend on a particular SERP structure. We also experiment with data resampling
and augmentation techniques that we adopt for sequential data. Our results can
help search providers to gauge user satisfaction for queries without clicks and
ultimately contribute to a better understanding of search engine performance.
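The abstract does not include an implementation, but the approach it describes maps naturally onto a small recurrent classifier. Below is a minimal sketch, assuming a PyTorch setup and an illustrative (x, y, dt) cursor-event encoding: an LSTM consumes a raw mouse movement sequence and emits a good-vs-bad abandonment logit, with a toy Gaussian-jitter function standing in for the paper's sequential resampling and augmentation techniques. All names, sizes, and the feature layout are assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' released code) of an RNN over raw cursor
# events for good/bad abandonment classification. The (x, y, dt) encoding,
# layer sizes, and jitter augmentation are illustrative assumptions.
import torch
import torch.nn as nn

class CursorRNN(nn.Module):
    def __init__(self, input_size=3, hidden_size=64, num_layers=1):
        super().__init__()
        self.rnn = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # binary: good vs. bad abandonment

    def forward(self, x):
        # x: (batch, seq_len, 3) sequences of (x, y, dt) cursor events
        _, (h_n, _) = self.rnn(x)
        return self.head(h_n[-1]).squeeze(-1)  # one logit per sequence

def jitter(batch, sigma=0.01):
    """Toy sequential augmentation: Gaussian noise on coordinates only."""
    noise = torch.zeros_like(batch)
    noise[..., :2] = sigma * torch.randn_like(batch[..., :2])
    return batch + noise

model = CursorRNN()
x = torch.rand(8, 120, 3)              # 8 sessions, 120 cursor events each
y = torch.randint(0, 2, (8,)).float()  # 1 = good abandonment (toy labels)
loss = nn.BCEWithLogitsLoss()(model(jitter(x)), y)
loss.backward()
```

The choice of event encoding (absolute coordinates vs. deltas, with or without timing) is one of the representation decisions the abstract alludes to; the paper explores several such representations rather than committing to this one.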
Related papers
- CLARINET: Augmenting Language Models to Ask Clarification Questions for Retrieval [52.134133938779776]
We present CLARINET, a system that asks informative clarification questions, choosing those whose answers would maximize certainty in the correct candidate.
Our approach works by augmenting a large language model (LLM) to condition on a retrieval distribution, finetuning end-to-end to generate the question that would have maximized the rank of the true candidate at each turn.
arXiv Detail & Related papers (2024-04-28T18:21:31Z)
- Zero-shot Retrieval: Augmenting Pre-trained Models with Search Engines [83.65380507372483]
Large pre-trained models can dramatically reduce the amount of task-specific data required to solve a problem, but they often fail to capture domain-specific nuances out of the box.
This paper shows how to leverage recent advances in NLP and multi-modal learning to augment a pre-trained model with search engine retrieval.
arXiv Detail & Related papers (2023-11-29T05:33:28Z)
- Behavior Retrieval: Few-Shot Imitation Learning by Querying Unlabeled Datasets [73.2096288987301]
We propose a simple approach that uses a small amount of downstream expert data to selectively query relevant behaviors from an offline, unlabeled dataset.
We observe that our method learns to query only the transitions relevant to the task, filtering out sub-optimal or task-irrelevant data.
Our simple querying approach outperforms more complex goal-conditioned methods by 20% across simulated and real robotic manipulation tasks from images.
arXiv Detail & Related papers (2023-04-18T05:42:53Z)
- Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z)
- PROMISSING: Pruning Missing Values in Neural Networks [0.0]
We propose a simple and intuitive yet effective method for pruning missing values (PROMISSING) during learning and inference steps in neural networks.
Our experiments show that PROMISSING results in similar prediction performance compared to various imputation techniques.
arXiv Detail & Related papers (2022-06-03T15:37:27Z)
- Graph Enhanced BERT for Query Understanding [55.90334539898102]
Query understanding plays a key role in exploring users' search intents and helping users locate the information they most desire.
In recent years, pre-trained language models (PLMs) have advanced various natural language processing tasks.
We propose a novel graph-enhanced pre-training framework, GE-BERT, which can leverage both query content and the query graph.
arXiv Detail & Related papers (2022-04-03T16:50:30Z)
- Mining Implicit Relevance Feedback from User Behavior for Web Question Answering [92.45607094299181]
We conduct the first study exploring the correlation between user behavior and passage relevance.
Our approach significantly improves the accuracy of passage ranking without extra human labeled data.
In practice, this work has proved effective to substantially reduce the human labeling cost for the QA service in a global commercial search engine.
arXiv Detail & Related papers (2020-06-13T07:02:08Z)
- Learning Efficient Representations of Mouse Movements to Predict User Attention [12.259552039796027]
We investigate different representations of mouse cursor movements, including time series, heatmaps, and trajectory-based images.
We build and contrast both recurrent and convolutional neural networks that can predict user attention to direct displays.
Our models are trained over raw mouse cursor data and achieve competitive performance (a toy rasterization of such trajectories is sketched after this list).
arXiv Detail & Related papers (2020-05-30T09:52:26Z)
- Eliminating Search Intent Bias in Learning to Rank [0.32228025627337864]
We study how differences in user search intent can influence click activities and find that there exists a bias between user search intent and document relevance.
We propose a search intent bias hypothesis that can be applied to most existing click models to improve their ability to learn unbiased relevance.
arXiv Detail & Related papers (2020-02-08T17:07:37Z)
- Modeling Information Need of Users in Search Sessions [5.172625611483604]
We propose a sequence-to-sequence based neural architecture that leverages the set of past queries issued by users.
First, we employ our model to predict which words in the current query are important and will be retained in the next query.
We show that our intuitive strategy of capturing information need can yield superior performance at these tasks on two large real-world search log datasets.
arXiv Detail & Related papers (2020-01-03T15:25:45Z)
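As a companion to the representation discussion above, here is a minimal sketch, independent of any of these papers' released code, of how a raw cursor time series can be rasterized into the kind of trajectory-based image mentioned in "Learning Efficient Representations of Mouse Movements to Predict User Attention". The grid size and normalization are illustrative assumptions.

```python
# Toy rasterization of an (N, 2) cursor coordinate sequence into a small
# dwell-count image suitable for a CNN. Grid size and normalization are
# illustrative choices, not taken from any of the papers above.
import numpy as np

def trajectory_image(xy, size=32):
    """Rasterize an (N, 2) array of cursor coordinates into a size x size grid."""
    xy = np.asarray(xy, dtype=float)
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    span = np.where(maxs > mins, maxs - mins, 1.0)   # avoid divide-by-zero
    cells = ((xy - mins) / span * (size - 1)).astype(int)
    img = np.zeros((size, size), dtype=np.float32)
    for col, row in cells:                           # accumulate dwell counts
        img[row, col] += 1.0
    return img / img.max()                           # normalize to [0, 1]

# Example: a short diagonal movement with a pause near the end.
coords = [(10, 10), (20, 18), (30, 30), (31, 31), (31, 31), (32, 31)]
print(trajectory_image(coords).shape)  # (32, 32)
```

Because dwell counts accumulate, pauses show up as brighter cells, which is one simple way a pixel representation can preserve the timing cues that distinguish attentive from cursory SERP examination.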
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.