Modeling Information Need of Users in Search Sessions
- URL: http://arxiv.org/abs/2001.00861v1
- Date: Fri, 3 Jan 2020 15:25:45 GMT
- Title: Modeling Information Need of Users in Search Sessions
- Authors: Kishaloy Halder, Heng-Tze Cheng, Ellie Ka In Chio, Georgios Roumpos,
Tao Wu, Ritesh Agarwal
- Abstract summary: We propose a sequence-to-sequence based neural architecture that leverages the set of past queries issued by users.
Firstly, we employ our model for predicting the words in the current query that are important and would be retained in the next query.
We show that our intuitive strategy of capturing information need can yield superior performance on these tasks on two large real-world search log datasets.
- Score: 5.172625611483604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Users issue queries to search engines and try to find the desired
information in the results produced. They repeat this process if their
information need is not met on the first attempt. It is crucial to identify the
important words in a query that depict the actual information need of the user
and will determine the course of a search session. To this end, we propose a
sequence-to-sequence based neural architecture that leverages the set of past
queries issued by users, and results that were explored by them. Firstly, we
employ our model for predicting the words in the current query that are
important and would be retained in the next query. Additionally, as a
downstream application of our model, we evaluate it on the widely popular task
of next query suggestion. We show that our intuitive strategy of capturing
information need can yield superior performance on these tasks on two large
real-world search log datasets.
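As a concrete starting point for the term-retention task described in the abstract, here is a minimal sketch, not the authors' exact architecture: one encoder summarizes the past queries of the session, another contextualizes the current query, and a per-token classifier scores each current-query word for retention in the next query. The module choices (LSTMs), dimensions, and names are illustrative assumptions.

```python
# Minimal sketch of session-conditioned term-retention scoring (assumptions:
# LSTM encoders, concatenated past queries; not the paper's exact model).
import torch
import torch.nn as nn

class TermRetentionModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.session_encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.query_encoder = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(2 * hidden_dim, 1)

    def forward(self, session_tokens, query_tokens):
        # session_tokens: (batch, session_len) -- past queries, concatenated
        # query_tokens:   (batch, query_len)   -- current query
        _, (session_state, _) = self.session_encoder(self.embed(session_tokens))
        query_states, _ = self.query_encoder(self.embed(query_tokens))
        # Broadcast the session summary to every position of the current query.
        context = session_state[-1].unsqueeze(1).expand(-1, query_states.size(1), -1)
        logits = self.classifier(torch.cat([query_states, context], dim=-1))
        return logits.squeeze(-1)  # (batch, query_len) retention logits

model = TermRetentionModel(vocab_size=50_000)
retention = torch.sigmoid(model(torch.randint(1, 50_000, (2, 12)),
                                torch.randint(1, 50_000, (2, 5))))
```

Training such a sketch would pair the per-token logits with binary labels (word kept vs. dropped in the next logged query) under a binary cross-entropy loss; the paper's model additionally leverages the results users explored, which is omitted here.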
Related papers
- QueryBuilder: Human-in-the-Loop Query Development for Information Retrieval [12.543590253664492]
We present a novel, interactive system called QueryBuilder.
It allows a novice, English-speaking user to create queries with a small amount of effort.
It rapidly develops cross-lingual information retrieval queries corresponding to the user's information needs.
arXiv Detail & Related papers (2024-09-07T00:46:58Z)
- Query-oriented Data Augmentation for Session Search [71.84678750612754]
We propose query-oriented data augmentation to enrich search logs and empower the modeling.
We generate supplemental training pairs by altering the most important part of a search context.
We develop several strategies to alter the current query, resulting in new training data with varying degrees of difficulty.
arXiv Detail & Related papers (2024-07-04T08:08:33Z)
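As a rough illustration of the augmentation idea in the entry above, the toy function below creates supplemental training pairs by perturbing the part of the current query deemed most important. Approximating importance with IDF and using drop/mask perturbations are assumptions for illustration; the paper develops several alteration strategies of varying difficulty.

```python
# Toy query-oriented augmentation (assumption: term importance ~ IDF over the log;
# the paper's strategies for altering the search context are more refined).
import math
from collections import Counter

def idf_table(query_log):
    """IDF over a query log, treating each query as one document."""
    df = Counter(term for q in query_log for term in set(q.split()))
    n = len(query_log)
    return {term: math.log(n / count) for term, count in df.items()}

def augment(session, current_query, idf, mask_token="[MASK]"):
    """Return extra (session, altered_query) pairs that perturb the highest-IDF term."""
    terms = current_query.split()
    key = max(terms, key=lambda t: idf.get(t, 0.0))
    dropped = " ".join(t for t in terms if t != key)
    masked = " ".join(mask_token if t == key else t for t in terms)
    return [(session, dropped), (session, masked)]

log = ["cheap flights paris", "flights paris december", "paris hotels december"]
pairs = augment(log[:2], "paris hotels december", idf_table(log))
```

Pairs like these can then be mixed into the original search log to give the session-search model harder or easier variants of the same context.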
- Database-Augmented Query Representation for Information Retrieval [59.57065228857247]
We present a novel retrieval framework called Database-Augmented Query representation (DAQu).
DAQu augments the original query with various (query-related) metadata across multiple tables.
We validate DAQu in diverse retrieval scenarios that can incorporate metadata from the relational database.
arXiv Detail & Related papers (2024-06-23T05:02:21Z)
- End-to-end Knowledge Retrieval with Multi-modal Queries [50.01264794081951]
ReMuQ requires a system to retrieve knowledge from a large corpus by integrating contents from both text and image queries.
We introduce a retriever model, ReViz, that can directly process input text and images to retrieve relevant knowledge in an end-to-end fashion.
We demonstrate superior performance in retrieval on two datasets under zero-shot settings.
arXiv Detail & Related papers (2023-06-01T08:04:12Z)
- Recommender Systems with Generative Retrieval [58.454606442670034]
We propose a novel generative retrieval approach, where the retrieval model autoregressively decodes the identifiers of the target candidates.
To that end, we create semantically meaningful tuples of codewords to serve as a Semantic ID for each item.
We show that recommender systems trained with the proposed paradigm significantly outperform the current SOTA models on various datasets.
arXiv Detail & Related papers (2023-05-08T21:48:17Z)
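A hedged sketch of the Semantic ID construction mentioned above: item embeddings are quantized level by level so that every item receives a short tuple of codewords, which a generative retrieval model can then decode autoregressively instead of predicting an atomic item ID. Residual k-means is used here as a stand-in for the paper's learned quantizer, and all sizes are illustrative assumptions.

```python
# Residual k-means as a stand-in quantizer for Semantic IDs (assumption: the
# paper learns its quantizer; codebook sizes and depths here are illustrative).
import numpy as np
from sklearn.cluster import KMeans

def semantic_ids(item_embeddings, levels=3, codebook_size=256, seed=0):
    """Assign each item a tuple of codewords by clustering residuals level by level."""
    residual = item_embeddings.copy()
    codes = []
    for _ in range(levels):
        km = KMeans(n_clusters=codebook_size, n_init=10, random_state=seed).fit(residual)
        codes.append(km.labels_)
        residual = residual - km.cluster_centers_[km.labels_]
    return np.stack(codes, axis=1)  # (num_items, levels): one codeword tuple per item

items = np.random.default_rng(0).normal(size=(5_000, 64)).astype(np.float32)
ids = semantic_ids(items, levels=3, codebook_size=16)
```

A sequence model trained on user histories can then generate the next item's codeword tuple token by token, which is what makes the retrieval generative.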
- Graph Enhanced BERT for Query Understanding [55.90334539898102]
Query understanding plays a key role in exploring users' search intents and helping users locate their desired information.
In recent years, pre-trained language models (PLMs) have advanced various natural language processing tasks.
We propose a novel graph-enhanced pre-training framework, GE-BERT, which can leverage both query content and the query graph.
arXiv Detail & Related papers (2022-04-03T16:50:30Z)
- Session-Aware Query Auto-completion using Extreme Multi-label Ranking [61.753713147852125]
We take the novel approach of modeling session-aware query auto-completion as an eXtreme Multi-label Ranking (XMR) problem.
We adapt a popular XMR algorithm for this purpose by proposing several modifications to the key steps in the algorithm.
Our approach meets the stringent latency requirements for auto-complete systems while leveraging session information in making suggestions.
arXiv Detail & Related papers (2020-12-09T17:56:22Z)
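To make the XMR framing above concrete, the sketch below treats every candidate full query as a label and ranks labels for a context made of the typed prefix plus the previous query of the session. The one-vs-rest linear model is a deliberately simplified stand-in for the adapted XMR algorithm, and the tiny sessions and the "[SEP]" separator are invented for illustration.

```python
# Simplified stand-in for query auto-completion as extreme multi-label ranking
# (assumption: one-vs-rest logistic regression replaces the adapted XMR solver).
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# (typed prefix, previous query in the session, next full queries)
sessions = [
    ("flights to par", "cheap flights",    ["flights to paris"]),
    ("hotel in par",   "flights to paris", ["hotel in paris"]),
    ("hotel in ro",    "flights to rome",  ["hotel in rome"]),
    ("flights to ro",  "cheap flights",    ["flights to rome"]),
]
contexts = [f"{prefix} [SEP] {prev}" for prefix, prev, _ in sessions]

vectorizer = HashingVectorizer(analyzer="char_wb", ngram_range=(2, 4), n_features=2**16)
binarizer = MultiLabelBinarizer()
X = vectorizer.transform(contexts)
Y = binarizer.fit_transform([labels for _, _, labels in sessions])

ranker = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
scores = ranker.decision_function(vectorizer.transform(["flights to p [SEP] cheap flights"]))
top = sorted(zip(binarizer.classes_, scores[0]), key=lambda pair: -pair[1])[:3]
```

At production scale the label space contains millions of candidate queries, which is where a real XMR algorithm (rather than this brute-force one-vs-rest loop) becomes necessary to meet the latency requirements mentioned above.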
- Deep Search Query Intent Understanding [17.79430887321982]
This paper aims to provide a comprehensive learning framework for modeling query intent under different stages of a search.
We focus on the design of 1) character-level models that predict users' intents on the fly as they type in typeahead search, and 2) word-level models that accurately predict intent for complete queries.
arXiv Detail & Related papers (2020-08-15T18:19:56Z)
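The entry above highlights character-level models for predicting intent on the fly as users type; here is a minimal sketch of that setting under assumed choices (a GRU encoder, raw ASCII code points as the character vocabulary, and a placeholder intent taxonomy), not the paper's actual models.

```python
# Minimal character-level typeahead intent classifier (assumptions: GRU encoder,
# ASCII vocabulary, 8 placeholder intent classes).
import torch
import torch.nn as nn

class CharIntentModel(nn.Module):
    """Predicts a search-intent class from the characters typed so far."""
    def __init__(self, num_chars=128, num_intents=8, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(num_chars, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_intents)

    def forward(self, char_ids):           # (batch, chars_typed_so_far)
        _, state = self.encoder(self.embed(char_ids))
        return self.head(state[-1])         # (batch, num_intents) intent logits

model = CharIntentModel()
partial_query = torch.tensor([[ord(c) for c in "softwar"]])  # user is mid-word
intent_probs = torch.softmax(model(partial_query), dim=-1)
```

The word-level models for complete queries mentioned in the entry would swap the character vocabulary for a word or subword one and rescore once the query is submitted.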
- Session-based Suggestion of Topics for Geographic Exploratory Search [5.482532589225552]
We develop a session-based suggestion model that proposes concepts as a "you might also be interested in" function.
Our model can be applied to incrementally generate suggestions in interactive search.
arXiv Detail & Related papers (2020-03-25T10:46:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.