Incorporating Relevance Feedback for Information-Seeking Retrieval using
Few-Shot Document Re-Ranking
- URL: http://arxiv.org/abs/2210.10695v1
- Date: Wed, 19 Oct 2022 16:19:37 GMT
- Title: Incorporating Relevance Feedback for Information-Seeking Retrieval using
Few-Shot Document Re-Ranking
- Authors: Tim Baumgärtner, Leonardo F. R. Ribeiro, Nils Reimers, Iryna Gurevych
- Abstract summary: We introduce a kNN approach that re-ranks documents based on their similarity with the query and the documents the user considers relevant.
To evaluate our different integration strategies, we transform four existing information retrieval datasets into the relevance feedback scenario.
- Score: 56.80065604034095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pairing a lexical retriever with a neural re-ranking model has achieved
state-of-the-art performance on large-scale information retrieval datasets.
This pipeline covers scenarios such as question answering or navigational queries;
however, in information-seeking scenarios, users often indicate whether a document
is relevant to their query in the form of clicks or explicit
feedback. Therefore, in this work, we explore how relevance feedback can be
directly integrated into neural re-ranking models by adopting few-shot and
parameter-efficient learning techniques. Specifically, we introduce a kNN
approach that re-ranks documents based on their similarity with the query and
the documents the user considers relevant. Further, we explore Cross-Encoder
models that we pre-train using meta-learning and subsequently fine-tune for
each query, training only on the feedback documents. To evaluate our different
integration strategies, we transform four existing information retrieval
datasets into the relevance feedback scenario. Extensive experiments
demonstrate that integrating relevance feedback directly into neural re-ranking
models improves their performance, and that fusing lexical ranking with our best-performing
neural re-ranker outperforms all other methods by 5.2 nDCG@20.
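To make the kNN re-ranking idea concrete, below is a minimal sketch that scores each candidate document by combining its embedding similarity to the query with its mean similarity to the documents the user marked as relevant. The encoder checkpoint, the weight alpha, and the rerank_with_feedback helper are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of kNN-style re-ranking with relevance feedback.
# The model name, the weight `alpha`, and the scoring formula are
# illustrative assumptions, not the paper's exact method.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any dense encoder could be used

def rerank_with_feedback(query, candidates, feedback_docs, alpha=0.5):
    """Score candidates by similarity to the query and to the user's
    relevant (feedback) documents, then sort in descending order."""
    q_emb = model.encode(query, convert_to_tensor=True)
    c_emb = model.encode(candidates, convert_to_tensor=True)
    f_emb = model.encode(feedback_docs, convert_to_tensor=True)

    sim_query = util.cos_sim(c_emb, q_emb).squeeze(-1)     # (num_candidates,)
    sim_feedback = util.cos_sim(c_emb, f_emb).mean(dim=1)  # mean over feedback docs

    scores = alpha * sim_query + (1 - alpha) * sim_feedback
    return sorted(zip(candidates, scores.tolist()),
                  key=lambda x: x[1], reverse=True)

# Toy usage example
ranked = rerank_with_feedback(
    query="effects of caffeine on sleep",
    candidates=["Caffeine delays sleep onset.",
                "Coffee production in Brazil.",
                "Melatonin regulates the sleep cycle."],
    feedback_docs=["Caffeine reduces total sleep time in adults."],
)
print(ranked[0][0])
```

For the fusion with lexical ranking reported in the abstract, a weighted sum of BM25 and neural scores (or reciprocal rank fusion) would be one straightforward way to combine the two rankings; the exact fusion strategy used in the paper is not detailed in this summary.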
Related papers
- Coarse-Tuning for Ad-hoc Document Retrieval Using Pre-trained Language Models [1.7126893619099555]
Fine-tuning in information retrieval systems requires learning query representations and query-document relations.
This study introduces coarse-tuning as an intermediate learning stage that bridges pre-training and fine-tuning.
We propose Query-Document Pair Prediction (QDPP) for coarse-tuning, which predicts the appropriateness of query-document pairs.
arXiv Detail & Related papers (2024-03-25T16:32:50Z)
- Noisy Self-Training with Synthetic Queries for Dense Retrieval [49.49928764695172]
We introduce a novel noisy self-training framework combined with synthetic queries.
Experimental results show that our method improves consistently over existing methods.
Our method is data efficient and outperforms competitive baselines.
arXiv Detail & Related papers (2023-11-27T06:19:50Z)
- Zero-shot Composed Text-Image Retrieval [72.43790281036584]
We consider the problem of composed image retrieval (CIR).
It aims to train a model that can fuse multi-modal information, e.g., text and images, to accurately retrieve images that match the query, extending the user's expression ability.
arXiv Detail & Related papers (2023-06-12T17:56:01Z)
- Unified Embedding Based Personalized Retrieval in Etsy Search [0.206242362470764]
We propose learning a unified embedding model incorporating graph, transformer and term-based embeddings end to end.
Our personalized retrieval model significantly improves the overall search experience, as measured by a 5.58% increase in search purchase rate and a 2.63% increase in site-wide conversion rate.
arXiv Detail & Related papers (2023-06-07T23:24:50Z)
- Improving Sequential Query Recommendation with Immediate User Feedback [6.925738064847176]
We propose an algorithm for next query recommendation in interactive data exploration settings.
We conduct a large-scale experimental study using log files from a popular online literature discovery service.
arXiv Detail & Related papers (2022-05-12T18:19:24Z)
- Integrating Semantics and Neighborhood Information with Graph-Driven Generative Models for Document Retrieval [51.823187647843945]
In this paper, we encode the neighborhood information with a graph-induced Gaussian distribution, and propose to integrate the two types of information with a graph-driven generative model.
Under the approximation, we prove that the training objective can be decomposed into terms involving only singleton or pairwise documents, enabling the model to be trained as efficiently as uncorrelated ones.
arXiv Detail & Related papers (2021-05-27T11:29:03Z)
- Monocular Depth Estimation via Listwise Ranking using the Plackett-Luce Model [15.472533971305367]
In many real-world applications, the relative depth of objects in an image is crucial for scene understanding.
Recent approaches mainly tackle the problem of depth prediction in monocular images by treating the problem as a regression task.
Yet, ranking methods suggest themselves as a natural alternative to regression, and indeed, ranking approaches leveraging pairwise comparisons have shown promising performance on this problem.
arXiv Detail & Related papers (2020-10-25T13:40:10Z)
- S^3-Rec: Self-Supervised Learning for Sequential Recommendation with Mutual Information Maximization [104.87483578308526]
We propose the model S3-Rec, which stands for Self-Supervised learning for Sequential Recommendation.
For our task, we devise four auxiliary self-supervised objectives to learn the correlations among attribute, item, subsequence, and sequence.
Extensive experiments conducted on six real-world datasets demonstrate the superiority of our proposed method over existing state-of-the-art methods.
arXiv Detail & Related papers (2020-08-18T11:44:10Z)
- Relation-Guided Representation Learning [53.60351496449232]
We propose a new representation learning method that explicitly models and leverages sample relations.
Our framework well preserves the relations between samples.
By seeking to embed samples into a subspace, we show that our method can address the large-scale and out-of-sample problems.
arXiv Detail & Related papers (2020-07-11T10:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.