I3: Intent-Introspective Retrieval Conditioned on Instructions
- URL: http://arxiv.org/abs/2308.10025v2
- Date: Thu, 25 Apr 2024 15:46:32 GMT
- Title: I3: Intent-Introspective Retrieval Conditioned on Instructions
- Authors: Kaihang Pan, Juncheng Li, Wenjie Wang, Hao Fei, Hongye Song, Wei Ji, Jun Lin, Xiaozhong Liu, Tat-Seng Chua, Siliang Tang
- Abstract summary: I3 is a unified retrieval system that performs Intent-Introspective retrieval across various tasks conditioned on Instructions without task-specific training.
I3 incorporates a pluggable introspector in a parameter-isolated manner to comprehend specific retrieval intents.
It utilizes extensive LLM-generated data to train I3 phase-by-phase, embodying two key designs: progressive structure pruning and drawback extrapolation-based data refinement.
- Score: 83.91776238599824
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies indicate that dense retrieval models struggle to perform well on a wide variety of retrieval tasks that lack dedicated training data, as different retrieval tasks often entail distinct search intents. To address this challenge, in this work we leverage instructions to flexibly describe retrieval intents and introduce I3, a unified retrieval system that performs Intent-Introspective retrieval across various tasks, conditioned on Instructions without any task-specific training. I3 innovatively incorporates a pluggable introspector in a parameter-isolated manner to comprehend specific retrieval intents by jointly reasoning over the input query and instruction, and seamlessly integrates the introspected intent into the original retrieval model for intent-aware retrieval. Furthermore, we propose progressively-pruned intent learning. It utilizes extensive LLM-generated data to train I3 phase-by-phase, embodying two key designs: progressive structure pruning and drawback extrapolation-based data refinement. Extensive experiments show that on the BEIR benchmark, I3 significantly outperforms baseline methods designed with task-specific retrievers, achieving state-of-the-art zero-shot performance without any task-specific tuning.
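As the abstract describes it, I3 plugs a parameter-isolated introspector into a frozen retrieval model: the introspector reasons jointly over the query and the instruction, and the resulting intent representation is fused back for intent-aware retrieval. Below is a minimal PyTorch sketch of that architecture; the module sizes, mean-pooling, and additive fusion are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MeanPoolEncoder(nn.Module):
    """Stand-in for the frozen query encoder of a pre-trained dense retriever."""
    def forward(self, token_embs: torch.Tensor) -> torch.Tensor:
        return token_embs.mean(dim=1)  # (batch, seq, dim) -> (batch, dim)

class PluggableIntrospector(nn.Module):
    """Hypothetical introspector: reads instruction + query tokens jointly
    and emits a pooled intent vector."""
    def __init__(self, dim: int = 768):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.proj = nn.Linear(dim, dim)

    def forward(self, inst_query_embs: torch.Tensor) -> torch.Tensor:
        hidden = self.encoder(inst_query_embs)   # (batch, seq, dim)
        return self.proj(hidden.mean(dim=1))     # (batch, dim)

class IntentAwareRetriever(nn.Module):
    """Parameter isolation: the base retriever stays frozen; only the plugged-in
    introspector trains. The intent is fused by simple addition (assumed)."""
    def __init__(self, base_encoder: nn.Module, dim: int = 768):
        super().__init__()
        self.base_encoder = base_encoder
        for p in self.base_encoder.parameters():
            p.requires_grad = False              # freeze the original model
        self.introspector = PluggableIntrospector(dim)

    def encode_query(self, query_embs, inst_query_embs):
        q = self.base_encoder(query_embs)            # original representation
        intent = self.introspector(inst_query_embs)  # introspected intent
        return q + intent                            # assumed additive fusion

# Toy usage with random "token embeddings".
dim = 768
retriever = IntentAwareRetriever(MeanPoolEncoder(), dim)
q_tokens = torch.randn(2, 16, dim)    # 2 queries
iq_tokens = torch.randn(2, 48, dim)   # instruction + query, concatenated
print(retriever.encode_query(q_tokens, iq_tokens).shape)  # torch.Size([2, 768])
```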
Related papers
- Instruct-ReID++: Towards Universal Purpose Instruction-Guided Person Re-identification [62.894790379098005]
We propose a novel instruct-ReID task that requires the model to retrieve images according to the given image or language instructions.
Instruct-ReID is the first exploration of a general ReID setting, where six existing ReID tasks can be viewed as special cases by assigning different instructions.
We propose a novel baseline model, IRM, with an adaptive triplet loss to handle various retrieval tasks within a unified framework.
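The entry above mentions an adaptive triplet loss for handling varied retrieval tasks in one framework; the sketch below is one plausible reading, in which the margin adapts per sample. The margin rule is an assumption, since IRM's exact formulation is not given here.

```python
import torch
import torch.nn.functional as F

def adaptive_triplet_loss(anchor, positive, negative, base_margin=0.3, scale=0.5):
    """Triplet loss with a per-sample margin (illustrative, not IRM's exact loss).

    The margin grows with the anchor-positive distance, so harder positives
    demand a larger separation from negatives."""
    d_ap = F.pairwise_distance(anchor, positive)   # (batch,)
    d_an = F.pairwise_distance(anchor, negative)   # (batch,)
    margin = base_margin + scale * d_ap.detach()   # adaptive; no grad through margin
    return F.relu(d_ap - d_an + margin).mean()

# Toy usage: 4 embeddings of dimension 128.
a, p, n = (torch.randn(4, 128) for _ in range(3))
print(adaptive_triplet_loss(a, p, n))
```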
arXiv Detail & Related papers (2024-05-28T03:35:46Z)
- M3: A Multi-Task Mixed-Objective Learning Framework for Open-Domain Multi-Hop Dense Sentence Retrieval [12.277521531556852]
M3 is a Multi-hop dense sentence retrieval system built upon a novel Multi-task Mixed-objective approach for dense text representation learning.
Our approach yields state-of-the-art performance on a large-scale open-domain fact verification benchmark dataset, FEVER.
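To make "multi-task mixed-objective" concrete, the sketch below combines an in-batch contrastive retrieval loss with an auxiliary classification loss under a weight alpha; the specific objectives and weighting are assumptions, not M3's published recipe.

```python
import torch
import torch.nn.functional as F

def mixed_objective(q_embs, s_embs, aux_logits, aux_labels, alpha=0.5):
    """Weighted sum of an in-batch contrastive loss and an auxiliary
    classification loss (an illustrative mixed objective, not M3's exact one)."""
    # Contrastive: each query's positive sentence sits at the same batch index.
    sims = q_embs @ s_embs.T                   # (batch, batch) similarity matrix
    targets = torch.arange(q_embs.size(0))
    contrastive = F.cross_entropy(sims, targets)
    auxiliary = F.cross_entropy(aux_logits, aux_labels)
    return contrastive + alpha * auxiliary

# Toy usage: 8 query/sentence pairs, 3 auxiliary classes.
q = F.normalize(torch.randn(8, 64), dim=-1)
s = F.normalize(torch.randn(8, 64), dim=-1)
logits, labels = torch.randn(8, 3), torch.randint(0, 3, (8,))
print(mixed_objective(q, s, logits, labels))
```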
arXiv Detail & Related papers (2024-03-21T01:52:07Z)
- RA-ISF: Learning to Answer and Understand from Retrieval Augmentation via Iterative Self-Feedback [19.28222902440827]
Large language models (LLMs) demonstrate exceptional performance in numerous tasks but still heavily rely on knowledge stored in their parameters.
Retrieval-augmented generation (RAG) methods address this issue by integrating external knowledge.
We propose Retrieval Augmented Iterative Self-Feedback (RA-ISF), a framework that iteratively decomposes tasks and processes them in three submodules to enhance the model's problem-solving capabilities.
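A minimal control-flow sketch of an iterative self-feedback loop in the spirit of RA-ISF: three stub submodules decide whether the model can answer directly, filter retrieved passages for relevance, and decompose hard questions into subquestions. All function names and the recursion scheme are placeholders; the paper's submodule definitions may differ.

```python
def knows_answer(question):
    """Submodule 1 (stub): can the model answer from parametric knowledge alone?"""
    return False

def relevant_passages(question):
    """Submodule 2 (stub): retrieve passages, keep only those judged relevant."""
    return []

def decompose(question):
    """Submodule 3 (stub): split a hard question into simpler subquestions."""
    return ["subquestion about " + question]

def answer(question, context=None):
    """Stub generator; a real system would call an LLM here."""
    return "answer(%s | context=%s)" % (question, context)

def ra_isf(question, depth=0, max_depth=2):
    if knows_answer(question):               # 1) trust parametric knowledge
        return answer(question)
    passages = relevant_passages(question)   # 2) else answer from relevant passages
    if passages:
        return answer(question, passages)
    if depth < max_depth:                    # 3) else decompose and recurse
        sub_answers = [ra_isf(q, depth + 1, max_depth) for q in decompose(question)]
        return answer(question, sub_answers)
    return answer(question)                  # fall back to a direct attempt

print(ra_isf("What year did the author of Dune die?"))
```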
arXiv Detail & Related papers (2024-03-11T16:01:05Z)
- INSTRUCTIR: A Benchmark for Instruction Following of Information Retrieval Models [32.16908034520376]
Retrievers often prioritize only the query information, without delving into the user's intended search context.
We propose a novel benchmark, INSTRUCTIR, specifically designed to evaluate instruction-following ability in information retrieval tasks.
We observe that retrievers fine-tuned to follow task-style instructions, such as INSTRUCTOR, can underperform compared to their non-instruction-tuned counterparts.
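That observation suggests a simple diagnostic: run the same retriever with and without the per-query instruction and compare hit rates. The sketch below does exactly that with a toy hash-based encoder; INSTRUCTIR's actual metrics and protocol are not reproduced here.

```python
import numpy as np

def toy_encode(texts, dim=64):
    """Toy stand-in encoder: hash each text to a pseudo-random unit vector."""
    out = []
    for t in texts:
        rng = np.random.default_rng(abs(hash(t)) % (2**32))
        v = rng.normal(size=dim)
        out.append(v / np.linalg.norm(v))
    return np.stack(out)

def hits_at_k(encode, texts, corpus_embs, gold_ids, k=5):
    """Fraction of queries whose gold document appears in the top-k results."""
    sims = encode(texts) @ corpus_embs.T              # (n_queries, n_docs)
    topk = np.argsort(-sims, axis=1)[:, :k]
    return float(np.mean([g in row for g, row in zip(gold_ids, topk)]))

# Compare the same retriever with and without per-query instructions.
corpus_embs = toy_encode([f"document {i}" for i in range(100)])
queries = ["best sourdough method", "python asyncio tutorial"]
instructions = ["Retrieve a beginner-friendly recipe.", "Retrieve an official doc page."]
gold = [3, 17]                                        # made-up gold doc ids

plain = hits_at_k(toy_encode, queries, corpus_embs, gold)
instructed = hits_at_k(toy_encode,
                       [f"{i} {q}" for i, q in zip(instructions, queries)],
                       corpus_embs, gold)
print({"plain_hit@5": plain, "instructed_hit@5": instructed})
```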
arXiv Detail & Related papers (2024-02-22T06:59:50Z)
- Unified Demonstration Retriever for In-Context Learning [56.06473069923567]
Unified Demonstration Retriever (UDR) is a single model that retrieves demonstrations for a wide range of tasks.
We propose a multi-task list-wise ranking training framework, with an iterative mining strategy to find high-quality candidates.
Experiments on 30+ tasks across 13 task families and multiple data domains show that UDR significantly outperforms baselines.
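As one concrete instance of a list-wise ranking objective, the sketch below uses a ListNet-style cross-entropy that matches the retriever's score distribution over a candidate list to a teacher distribution (e.g., LM feedback per candidate demonstration). Treating this as UDR's exact objective would be an assumption.

```python
import torch
import torch.nn.functional as F

def listwise_ranking_loss(retriever_scores, teacher_scores, tau=1.0):
    """ListNet-style loss: align the retriever's ranking distribution over a
    candidate list with the teacher's (illustrative stand-in for UDR's loss)."""
    p_teacher = F.softmax(teacher_scores / tau, dim=-1)   # target distribution
    log_p_model = F.log_softmax(retriever_scores, dim=-1)
    return -(p_teacher * log_p_model).sum(dim=-1).mean()

# Toy usage: 2 queries, each with a list of 10 candidate demonstrations.
model_scores = torch.randn(2, 10, requires_grad=True)
lm_feedback = torch.randn(2, 10)   # e.g., LM log-likelihood gain per candidate
print(listwise_ranking_loss(model_scores, lm_feedback))
```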
arXiv Detail & Related papers (2023-05-07T16:07:11Z)
- QAID: Question Answering Inspired Few-shot Intent Detection [5.516275800944541]
We reformulate intent detection as a question-answering retrieval task by treating utterances and intent names as questions and answers.
Our results on three few-shot intent detection benchmarks achieve state-of-the-art performance.
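The reformulation is simple to state in code: encode the utterance as a "question" and each intent name as a candidate "answer", then pick the best-scoring match. The sentence-transformers encoder below is an illustrative stand-in, not QAID's actual model or training pipeline.

```python
from sentence_transformers import SentenceTransformer, util

# Illustrative encoder choice; QAID's architecture and training differ.
model = SentenceTransformer("all-MiniLM-L6-v2")

intent_names = ["check balance", "transfer money", "report lost card"]
utterance = "my card disappeared somewhere yesterday"

# Treat the utterance as a question and intent names as candidate answers.
q = model.encode(utterance, convert_to_tensor=True)
a = model.encode(intent_names, convert_to_tensor=True)
scores = util.cos_sim(q, a)                 # (1, num_intents)
print(intent_names[int(scores.argmax())])   # expected: "report lost card"
```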
arXiv Detail & Related papers (2023-03-02T21:35:15Z)
- Task-aware Retrieval with Instructions [91.87694020194316]
We study the problem of retrieval with instructions, where users of a retrieval system explicitly describe their intent along with their queries.
We present TART, a multi-task retrieval system trained on the diverse retrieval tasks with instructions.
TART shows strong capabilities to adapt to a new task via instructions and advances the state of the art on two zero-shot retrieval benchmarks.
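At the interface level, retrieval with instructions can be as simple as instruction-prefixed query encoding with an ordinary bi-encoder; the prompt format in this tiny sketch is assumed, not TART's documented input format.

```python
def instructed_query(instruction, query):
    """Prefix the task instruction to the query before dense encoding
    (assumed prompt format, for illustration only)."""
    return f"{instruction} [SEP] {query}"

query = "tesla model 3 battery degradation"
tasks = [
    "Retrieve a news article that discusses this topic.",
    "Retrieve a forum thread where users debate this topic.",
]
# One retriever, two intents: only the instruction changes.
for inst in tasks:
    print(instructed_query(inst, query))
```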
arXiv Detail & Related papers (2022-11-16T23:13:22Z)
- CINS: Comprehensive Instruction for Few-shot Learning in Task-oriented Dialog Systems [56.302581679816775]
This paper proposes Comprehensive Instruction (CINS), which exploits pre-trained language models (PLMs) with task-specific instructions.
We design a schema (definition, constraint, prompt) of instructions and their customized realizations for three important downstream tasks in ToD.
Experiments are conducted on these ToD tasks in realistic few-shot learning scenarios with small validation data.
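The (definition, constraint, prompt) schema lends itself to a small template function; the field contents below are invented examples that show the shape of such an instruction, not CINS's actual realizations.

```python
def build_instruction(definition, constraint, prompt, utterance):
    """Assemble a CINS-style instruction from the (definition, constraint,
    prompt) schema described in the paper; wording here is illustrative."""
    return f"{definition} {constraint} {prompt} {utterance}"

# Hypothetical realization for an intent-classification ToD task.
text = build_instruction(
    definition="Intent detection is about identifying the purpose of a user query.",
    constraint="The intent must be one of: book_flight, cancel_flight, baggage_info.",
    prompt="The intent of the following utterance is:",
    utterance='"I need to change my checked bag allowance."',
)
print(text)
```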
arXiv Detail & Related papers (2021-09-10T03:23:06Z)
- Detecting Ongoing Events Using Contextual Word and Sentence Embeddings [110.83289076967895]
This paper introduces the Ongoing Event Detection (OED) task.
The goal is to detect ongoing event mentions only, as opposed to historical, future, hypothetical, or other forms of events that are neither fresh nor current.
Any application that needs to extract structured information about ongoing events from unstructured texts can take advantage of an OED system.
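Operationally, OED can be framed as binary classification over contextual embeddings of candidate event mentions. The linear probe below is an assumed formulation for illustration, not the paper's model.

```python
import torch
import torch.nn as nn

class OngoingEventClassifier(nn.Module):
    """Linear probe over contextual sentence embeddings: 1 = ongoing event,
    0 = historical/future/hypothetical (an assumed setup for illustration)."""
    def __init__(self, dim: int = 768):
        super().__init__()
        self.probe = nn.Linear(dim, 2)

    def forward(self, sentence_embs: torch.Tensor) -> torch.Tensor:
        return self.probe(sentence_embs)  # (batch, 2) logits

# Toy usage with random "contextual embeddings".
clf = OngoingEventClassifier()
logits = clf(torch.randn(4, 768))
print(logits.softmax(dim=-1))
```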
arXiv Detail & Related papers (2020-07-02T20:44:05Z)