Information Retrieval Meets Large Language Models: A Strategic Report
from Chinese IR Community
- URL: http://arxiv.org/abs/2307.09751v2
- Date: Thu, 27 Jul 2023 02:32:08 GMT
- Title: Information Retrieval Meets Large Language Models: A Strategic Report
from Chinese IR Community
- Authors: Qingyao Ai, Ting Bai, Zhao Cao, Yi Chang, Jiawei Chen, Zhumin Chen,
Zhiyong Cheng, Shoubin Dong, Zhicheng Dou, Fuli Feng, Shen Gao, Jiafeng Guo,
Xiangnan He, Yanyan Lan, Chenliang Li, Yiqun Liu, Ziyu Lyu, Weizhi Ma, Jun
Ma, Zhaochun Ren, Pengjie Ren, Zhiqiang Wang, Mingwen Wang, Ji-Rong Wen, Le
Wu, Xin Xin, Jun Xu, Dawei Yin, Peng Zhang, Fan Zhang, Weinan Zhang, Min
Zhang and Xiaofei Zhu
- Abstract summary: Large Language Models (LLMs) have demonstrated exceptional capabilities in text understanding, generation, and knowledge inference.
The synergy among IR models, LLMs, and humans forms a new technical paradigm that is more powerful for information seeking.
To thoroughly discuss the transformative impact of LLMs on IR research, the Chinese IR community conducted a strategic workshop in April 2023.
- Score: 180.28262433004113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The research field of Information Retrieval (IR) has evolved significantly,
expanding beyond traditional search to meet diverse user information needs.
Recently, Large Language Models (LLMs) have demonstrated exceptional
capabilities in text understanding, generation, and knowledge inference,
opening up exciting avenues for IR research. LLMs not only facilitate
generative retrieval but also offer improved solutions for user understanding,
model evaluation, and user-system interactions. More importantly, the
synergistic relationship among IR models, LLMs, and humans forms a new
technical paradigm that is more powerful for information seeking. IR models
provide real-time and relevant information, LLMs contribute internal knowledge,
and humans play a central role as demanders and evaluators of the reliability
of information services. Nevertheless, significant challenges exist, including
computational costs, credibility concerns, domain-specific limitations, and
ethical considerations. To thoroughly discuss the transformative impact of LLMs
on IR research, the Chinese IR community conducted a strategic workshop in
April 2023, yielding valuable insights. This paper provides a summary of the
workshop's outcomes, including the rethinking of IR's core values, the mutual
enhancement of LLMs and IR, the proposal of a novel IR technical paradigm, and
open challenges.
Related papers
- Robust Information Retrieval [77.87996131013546]
The robustness of information retrieval systems is increasingly attracting attention.
This tutorial aims to generate broader attention to robustness issues in IR, facilitate an understanding of the relevant literature, and lower the barrier to entry for interested researchers and practitioners.
arXiv Detail & Related papers (2024-06-13T07:44:21Z)
- A Survey on RAG Meeting LLMs: Towards Retrieval-Augmented Large Language Models [71.25225058845324]
Large Language Models (LLMs) have demonstrated revolutionary abilities in language understanding and generation.
Retrieval-Augmented Generation (RAG) can offer reliable and up-to-date external knowledge.
RA-LLMs have emerged to harness external and authoritative knowledge bases, rather than relying on the model's internal knowledge.
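The retrieve-then-generate loop behind RA-LLMs can be sketched as follows. This is an illustrative toy, not the survey's method: the lexical-overlap retriever stands in for a real dense retriever, and the assembled prompt would be passed to an actual LLM.

```python
import re

def retrieve(query, corpus, k=2):
    """Toy lexical retriever: rank passages by term overlap with the query."""
    q_terms = set(re.findall(r"\w+", query.lower()))
    overlap = lambda p: len(q_terms & set(re.findall(r"\w+", p.lower())))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def augment_prompt(query, passages):
    """Prepend retrieved evidence so the model grounds its answer in
    external, up-to-date knowledge rather than its parameters alone."""
    context = "\n".join(f"[{i}] {p}" for i, p in enumerate(passages, 1))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Paris is the capital of France.",
    "The Seine flows through Paris.",
    "Mount Fuji is the highest peak in Japan.",
]
query = "What is the capital of France?"
prompt = augment_prompt(query, retrieve(query, corpus, k=1))
```

The prompt now carries the most relevant passage as grounding context; in a real RA-LLM system it would be sent to the language model for generation.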
arXiv Detail & Related papers (2024-05-10T02:48:45Z)
- RAG and RAU: A Survey on Retrieval-Augmented Language Model in Natural Language Processing [0.2302001830524133]
This survey paper addresses the absence of a comprehensive overview of Retrieval-Augmented Language Models (RALMs).
The paper discusses the essential components of RALMs, including Retrievers, Language Models, and Augmentations.
RALMs demonstrate utility in a spectrum of tasks, from translation and dialogue systems to knowledge-intensive applications.
arXiv Detail & Related papers (2024-04-30T13:14:51Z)
- Self-Retrieval: End-to-End Information Retrieval with One Large Language Model [97.71181484082663]
We introduce Self-Retrieval, a novel end-to-end LLM-driven information retrieval architecture.
Self-Retrieval internalizes the retrieval corpus through self-supervised learning, transforms the retrieval process into sequential passage generation, and performs relevance assessment for reranking.
arXiv Detail & Related papers (2024-02-23T18:45:35Z)
- Synergistic Interplay between Search and Large Language Models for Information Retrieval [141.18083677333848]
InteR allows retrieval models (RMs) to expand the knowledge in queries using LLM-generated knowledge collections.
InteR achieves overall superior zero-shot retrieval performance compared to state-of-the-art methods.
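The expansion step described above can be sketched as below. This is a hedged toy, not InteR's implementation: `llm_generate` is a hypothetical stub for the LLM call, and the overlap-based search stands in for a real retrieval model.

```python
import re

def llm_generate(query):
    """Stand-in for an LLM drafting background knowledge (hypothetical stub)."""
    canned = {"deep learning": "Deep learning trains multi-layer neural networks."}
    return canned.get(query, "")

def expand_query(query):
    """Append LLM-drafted knowledge to the raw query before retrieval."""
    return (query + " " + llm_generate(query)).strip()

def rm_search(query, corpus):
    """Toy retrieval model: pick the passage with the largest term overlap."""
    q = set(re.findall(r"\w+", query.lower()))
    return max(corpus, key=lambda p: len(q & set(re.findall(r"\w+", p.lower()))))

corpus = [
    "Neural networks learn representations across many layers.",
    "Tea ceremonies follow a strict sequence of steps.",
]
# The bare query "deep learning" shares no terms with either passage;
# the LLM-drafted expansion supplies "neural" and "networks", so the
# retrieval model can now find the relevant passage.
best = rm_search(expand_query("deep learning"), corpus)
```

In the full InteR loop the retrieved passages would in turn feed back into the LLM's next knowledge draft, iterating between the two components.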
arXiv Detail & Related papers (2023-05-12T11:58:15Z)
- Search-in-the-Chain: Interactively Enhancing Large Language Models with Search for Knowledge-intensive Tasks [121.74957524305283]
This paper proposes a novel framework named Search-in-the-Chain (SearChain) for the interaction between Information Retrieval (IR) and Large Language Models (LLMs).
Experiments show that SearChain outperforms state-of-the-art baselines on complex knowledge-intensive tasks.
arXiv Detail & Related papers (2023-04-28T10:15:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.