On the Robustness of Generative Retrieval Models: An Out-of-Distribution Perspective
- URL: http://arxiv.org/abs/2306.12756v1
- Date: Thu, 22 Jun 2023 09:18:52 GMT
- Title: On the Robustness of Generative Retrieval Models: An Out-of-Distribution Perspective
- Authors: Yu-An Liu, Ruqing Zhang, Jiafeng Guo, Wei Chen, Xueqi Cheng
- Abstract summary: We study the out-of-distribution (OOD) robustness of generative retrieval models in comparison with dense retrieval models.
The empirical results indicate that the OOD robustness of generative retrieval models requires enhancement.
- Score: 65.16259505602807
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, we have witnessed generative retrieval gaining increasing
attention in the information retrieval (IR) field; it retrieves documents by
directly generating their identifiers. So far, much effort has been devoted to
developing effective generative retrieval models, while less attention has
been paid to the robustness perspective. When a new retrieval paradigm enters
real-world applications, it is also critical to measure its
out-of-distribution (OOD) generalization, i.e., how generative retrieval
models generalize to new distributions. To answer this question, we first
define OOD robustness from three perspectives in retrieval problems: 1) query
variations; 2) unforeseen query types; and 3) unforeseen tasks. Based on this
taxonomy, we conduct empirical studies to analyze the OOD robustness of
several representative generative retrieval models in comparison with dense
retrieval models. The empirical results indicate that the OOD robustness of
generative retrieval models requires enhancement. We hope that studying the
OOD robustness of generative retrieval models will benefit the IR community.
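To make the "generating identifiers" paradigm concrete, here is a minimal
sketch of generative retrieval: a seq2seq model decodes a document identifier
(docid) token by token, with decoding constrained to valid docids via a prefix
trie. The t5-small checkpoint and the toy docid list are placeholder
assumptions, not the models or identifier schemes evaluated in the paper.

```python
# Minimal generative-retrieval sketch: constrained docid decoding.
# Assumptions: t5-small as a stand-in retriever, three toy docids.
from transformers import T5Tokenizer, T5ForConditionalGeneration

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Toy docid space; real systems use atomic, naive, or semantic docids.
docids = ["doc-101", "doc-102", "doc-205"]

# Prefix trie: map each partial docid token sequence to the token ids
# that may legally follow it.
trie = {}
for docid in docids:
    ids = tokenizer(docid, add_special_tokens=False).input_ids
    ids = [model.config.decoder_start_token_id] + ids + [tokenizer.eos_token_id]
    for i in range(1, len(ids)):
        trie.setdefault(tuple(ids[:i]), set()).add(ids[i])

def allowed_tokens(batch_id, prefix_ids):
    # Restrict each decoding step to continuations inside the trie.
    return list(trie.get(tuple(prefix_ids.tolist()), {tokenizer.eos_token_id}))

inputs = tokenizer("query: robustness of retrieval models", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=16,
    num_beams=3,
    num_return_sequences=3,
    prefix_allowed_tokens_fn=allowed_tokens,
)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```

Under this setup, beam search can only emit strings from the docid list, so
the top beams double as a ranked retrieval result.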
Related papers
- Bridging Search and Recommendation in Generative Retrieval: Does One Task Help the Other? [9.215695600542249]
Generative retrieval for search and recommendation is a promising paradigm for retrieving items.
These generative systems can play a crucial role in centralizing a variety of Information Retrieval (IR) tasks in a single model.
This paper investigates whether and when such a unified approach can outperform task-specific models in the IR tasks of search and recommendation.
arXiv Detail & Related papers (2024-10-22T08:49:43Z)
- Robust Neural Information Retrieval: An Adversarial and Out-of-distribution Perspective [111.58315434849047]
The robustness of neural information retrieval (IR) models has garnered significant attention.
We view the robustness of IR as a multifaceted concept, encompassing resilience to adversarial attacks, out-of-distribution (OOD) scenarios, and performance variance.
We provide an in-depth discussion of existing methods, datasets, and evaluation metrics, shedding light on challenges and future directions in the era of large language models.
arXiv Detail & Related papers (2024-07-09T16:07:01Z)
- Think-then-Act: A Dual-Angle Evaluated Retrieval-Augmented Generation [3.2134014920850364]
Large language models (LLMs) often face challenges such as temporal misalignment and generating hallucinatory content.
We propose a dual-angle evaluated retrieval-augmented generation framework, 'Think-then-Act'.
arXiv Detail & Related papers (2024-06-18T20:51:34Z)
- Optimizing OOD Detection in Molecular Graphs: A Novel Approach with Diffusion Models [71.39421638547164]
We propose to detect OOD molecules with an auxiliary diffusion-model-based framework that compares similarities between input molecules and their reconstructed graphs.
Because of the generative bias towards reconstructing in-distribution (ID) training samples, the similarity scores of OOD molecules are much lower, which facilitates detection.
Our research pioneers an approach of Prototypical Graph Reconstruction for Molecular OOD Detection, dubbed PGR-MOOD, which hinges on three innovations.
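A minimal sketch of that reconstruction-similarity scoring idea, with a stub
reconstruct() standing in for the paper's diffusion model over molecular
graphs and cosine similarity standing in for its graph similarity measures:

```python
# Reconstruction-based OOD scoring sketch: a generative model trained on
# in-distribution (ID) data reconstructs ID inputs faithfully but OOD
# inputs poorly, so low input-reconstruction similarity flags OOD.
import numpy as np

def reconstruct(x: np.ndarray) -> np.ndarray:
    # Placeholder: a real system would run the diffusion model here.
    return x + np.random.normal(scale=0.05, size=x.shape)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity over feature vectors (illustrative choice).
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def ood_score(x: np.ndarray) -> float:
    # Lower similarity between input and reconstruction => more OOD.
    return -similarity(x, reconstruct(x))

threshold = -0.9  # calibrated on held-out ID data in practice
x = np.random.randn(64)  # stand-in molecular feature vector
print("OOD" if ood_score(x) > threshold else "ID")
```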
arXiv Detail & Related papers (2024-04-24T03:25:53Z)
- Model Reprogramming Outperforms Fine-tuning on Out-of-distribution Data in Text-Image Encoders [56.47577824219207]
In this paper, we unveil the hidden costs associated with intrusive fine-tuning techniques.
We introduce a new model reprogramming approach for fine-tuning, which we name Reprogrammer.
Our empirical evidence reveals that Reprogrammer is less intrusive and yields superior downstream models.
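As a rough illustration of model reprogramming in general (not the paper's
Reprogrammer specifically), the sketch below trains only a small input
perturbation and an output label mapping while the pretrained backbone stays
frozen; all modules and shapes are illustrative stand-ins.

```python
# Model-reprogramming sketch: adapt a frozen pretrained model to a new
# task by learning an input shift and an output label mapping only.
import torch
import torch.nn as nn

class Reprogram(nn.Module):
    def __init__(self, backbone: nn.Module, in_dim: int, n_old: int, n_new: int):
        super().__init__()
        self.backbone = backbone
        for p in self.backbone.parameters():
            p.requires_grad = False  # backbone weights are never updated
        self.delta = nn.Parameter(torch.zeros(in_dim))  # trainable input shift
        self.label_map = nn.Linear(n_old, n_new, bias=False)  # output mapping

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.label_map(self.backbone(x + self.delta))

backbone = nn.Linear(32, 10)  # stand-in for a pretrained text-image encoder
model = Reprogram(backbone, in_dim=32, n_old=10, n_new=5)
print(model(torch.randn(4, 32)).shape)  # torch.Size([4, 5])
```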
arXiv Detail & Related papers (2024-03-16T04:19:48Z)
- A Survey on Evaluation of Out-of-Distribution Generalization [41.39827887375374]
Out-of-Distribution (OOD) generalization is a complex and fundamental problem.
This paper serves as the first effort to conduct a comprehensive review of OOD evaluation.
We categorize existing research into three paradigms: OOD performance testing, OOD performance prediction, and OOD intrinsic property characterization.
arXiv Detail & Related papers (2024-03-04T09:30:35Z)
- Pseudo-OOD training for robust language models [78.15712542481859]
OOD detection is a key component of a reliable machine-learning model for any industry-scale application.
We propose POORE (POsthoc pseudo-Ood REgularization), which generates pseudo-OOD samples using in-distribution (IND) data.
We extensively evaluate our framework on three real-world dialogue systems, achieving new state-of-the-art in OOD detection.
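A minimal sketch in the spirit of that idea: synthesize pseudo-OOD inputs by
corrupting IND text, then penalize confident predictions on them. The
token-shuffle corruption and the uniform-confidence (maximum-entropy) penalty
are illustrative assumptions, not the paper's exact recipe.

```python
# Pseudo-OOD regularization sketch: corrupt IND sentences into
# pseudo-OOD samples and push the model toward uncertainty on them.
import random
import torch
import torch.nn.functional as F

def make_pseudo_ood(sentence: str) -> str:
    # Corrupt an IND sentence by shuffling its tokens.
    tokens = sentence.split()
    random.shuffle(tokens)
    return " ".join(tokens)

def ood_regularizer(logits: torch.Tensor) -> torch.Tensor:
    # Cross-entropy against the uniform distribution: minimized when the
    # model assigns equal probability to every class ("I don't know").
    return -F.log_softmax(logits, dim=-1).mean()

print(make_pseudo_ood("book a table for two at seven"))
logits = torch.randn(1, 10)            # stand-in classifier outputs
print(ood_regularizer(logits).item())  # added to the training loss
```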
arXiv Detail & Related papers (2022-10-17T14:32:02Z)
- Are Sample-Efficient NLP Models More Robust? [90.54786862811183]
We investigate the relationship between sample efficiency (the amount of data needed to reach a given ID accuracy) and robustness (how models fare on OOD evaluation).
We find that higher sample efficiency is correlated with better average OOD robustness only for some modeling interventions and tasks, not others.
These results suggest that general-purpose methods for improving sample efficiency are unlikely to yield universal OOD robustness improvements, since such improvements are highly dataset- and task-dependent.
arXiv Detail & Related papers (2022-10-12T17:54:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.