Gen-IR @ SIGIR 2023: The First Workshop on Generative Information
Retrieval
- URL: http://arxiv.org/abs/2306.02887v2
- Date: Tue, 13 Jun 2023 15:20:13 GMT
- Authors: Gabriel Bénédict, Ruqing Zhang, Donald Metzler
- Abstract summary: The workshop focuses on Generative IR techniques such as document retrieval and direct Grounded Answer Generation.
The format is interactive, with roundtable and keynote sessions, and avoids the one-sided dialogue of a mini-conference.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative information retrieval (IR) has experienced substantial growth
across multiple research communities (e.g., information retrieval, computer
vision, natural language processing, and machine learning), and has been highly
visible in the popular press. Theoretical, empirical, and actual user-facing
products have been released that retrieve documents (via generation) or
directly generate answers given an input request. We would like to investigate
whether end-to-end generative models are just another trend or, as some claim,
a paradigm change for IR. This necessitates new metrics, theoretical grounding,
evaluation methods, task definitions, models, user interfaces, etc. The goal of
this workshop (https://coda.io/@sigir/gen-ir) is to focus on previously
explored Generative IR techniques like document retrieval and direct Grounded
Answer Generation, while also offering a venue for the discussion and
exploration of how Generative IR can be applied to new domains like
recommendation systems, summarization, etc. The format of the workshop is
interactive, including roundtable and keynote sessions, and avoids the
one-sided dialogue of a mini-conference.
Related papers
- From Web Search towards Agentic Deep Research: Incentivizing Search with Reasoning Agents [96.65646344634524]
Large Language Models (LLMs), endowed with reasoning and agentic capabilities, are ushering in a new paradigm termed Agentic Deep Research.
We trace the evolution from static web search to interactive, agent-based systems that plan, explore, and learn.
We demonstrate that Agentic Deep Research not only significantly outperforms existing approaches, but is also poised to become the dominant paradigm for future information seeking.
arXiv Detail & Related papers (2025-06-23T17:27:19Z) - ImpRAG: Retrieval-Augmented Generation with Implicit Queries [49.510101132093396]
ImpRAG is a query-free RAG system that integrates retrieval and generation into a unified model.
We show that ImpRAG achieves 3.6-11.5 point improvements in exact match scores on unseen tasks with diverse formats.
arXiv Detail & Related papers (2025-06-02T21:38:21Z) - Exploring new Approaches for Information Retrieval through Natural Language Processing [0.0]
This review paper explores recent advancements and emerging approaches in Information Retrieval (IR) applied to Natural Language Processing (NLP).
We examine traditional IR models such as Boolean, vector space, probabilistic, and inference network models, and highlight modern techniques including deep learning, reinforcement learning, and pretrained transformer models like BERT.
A comparative analysis of sparse, dense, and hybrid retrieval methods is presented, along with applications in web search engines, cross-language IR, argument mining, private information retrieval, and hate speech detection.
arXiv Detail & Related papers (2025-05-04T17:37:26Z) - A Multi-Agent Perspective on Modern Information Retrieval [12.228832858396368]
The rise of large language models (LLMs) has introduced a new era in information retrieval (IR).
This shift challenges some long-standing IR paradigms and calls for a reassessment of both theoretical frameworks and practical methodologies.
We advocate for a multi-agent perspective to better capture the complex interactions between query agents, document agents, and ranker agents.
arXiv Detail & Related papers (2025-02-20T18:17:26Z) - Agentic Information Retrieval [21.741669515186146]
We introduce Agentic Information Retrieval (Agentic IR), a novel IR paradigm shaped by the capabilities of large language models (LLMs).
We discuss three types of cutting-edge applications of agentic IR and the challenges faced.
arXiv Detail & Related papers (2024-10-13T03:45:24Z) - Robust Neural Information Retrieval: An Adversarial and Out-of-distribution Perspective [111.58315434849047]
The robustness of neural information retrieval (IR) models has garnered significant attention.
We view the robustness of IR to be a multifaceted concept, emphasizing its necessity against adversarial attacks, out-of-distribution (OOD) scenarios and performance variance.
We provide an in-depth discussion of existing methods, datasets, and evaluation metrics, shedding light on challenges and future directions in the era of large language models.
arXiv Detail & Related papers (2024-07-09T16:07:01Z) - A Survey of Generative Search and Recommendation in the Era of Large Language Models [125.26354486027408]
Generative search (retrieval) and recommendation aim to address the matching problem in a generative manner.
Generative large language models have sparked a new paradigm in search and recommendation.
arXiv Detail & Related papers (2024-04-25T17:58:17Z) - From Matching to Generation: A Survey on Generative Information Retrieval [21.56093567336119]
Generative information retrieval (GenIR) has emerged as a novel paradigm, gaining increasing attention in recent years.
This paper aims to systematically review the latest research progress in GenIR.
arXiv Detail & Related papers (2024-04-23T09:05:37Z) - Large Language Models for Information Retrieval: A Survey [58.30439850203101]
Information retrieval has evolved from term-based methods to its integration with advanced neural models.
Recent research has sought to leverage large language models (LLMs) to improve IR systems.
We delve into the confluence of LLMs and IR systems, including crucial aspects such as query rewriters, retrievers, rerankers, and readers.
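The four components named above form a pipeline: a query rewriter reformulates the request, a retriever fetches candidates, a reranker reorders them, and a reader synthesizes the answer. A minimal sketch of that flow, using toy keyword-overlap stand-ins of my own devising (a real system would use an LLM for the rewriter and reader, and a dense retriever plus cross-encoder for the middle stages):

```python
# Toy sketch of an LLM-augmented IR pipeline:
# query rewriter -> retriever -> reranker -> reader.
# All four components are hypothetical stand-ins, not a real system.

def rewrite(query):
    # An LLM would expand or clarify the query; here we just normalize it.
    return query.lower().strip("?")

def retrieve(query, corpus):
    # First-stage retrieval: keep documents sharing any term with the query.
    terms = set(query.split())
    return [d for d in corpus if terms & set(d.lower().split())]

def rerank(query, docs):
    # Reorder candidates by term overlap (a cross-encoder in practice).
    terms = set(query.split())
    return sorted(docs, key=lambda d: len(terms & set(d.lower().split())),
                  reverse=True)

def read(query, docs):
    # An LLM "reader" would synthesize an answer from the top documents;
    # here we simply return the best-ranked one.
    return docs[0] if docs else "no answer found"

def pipeline(query, corpus):
    q = rewrite(query)
    return read(q, rerank(q, retrieve(q, corpus)))
```

The design point the survey makes is that each stage can be backed by an LLM independently, so the stages compose without changing the overall pipeline shape.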
arXiv Detail & Related papers (2023-08-14T12:47:22Z) - Resources for Brewing BEIR: Reproducible Reference Models and an
Official Leaderboard [47.73060223236792]
BEIR is a benchmark dataset for evaluation of information retrieval models across 18 different domain/task combinations.
Our work addresses two shortcomings that prevent the benchmark from achieving its full potential.
arXiv Detail & Related papers (2023-06-13T00:26:18Z) - Enhancing Retrieval-Augmented Large Language Models with Iterative
Retrieval-Generation Synergy [164.83371924650294]
We show that strong performance can be achieved by a method we call Iter-RetGen, which synergizes retrieval and generation in an iterative manner.
A model output shows what might be needed to finish a task, and thus provides an informative context for retrieving more relevant knowledge.
Iter-RetGen processes all retrieved knowledge as a whole and largely preserves the flexibility in generation without structural constraints.
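The iterative loop described above can be sketched roughly as follows. The corpus, keyword retriever, and template "generator" are toy stand-ins I introduce for illustration; an actual Iter-RetGen system uses an LLM generator and a learned retriever. The key idea survives the simplification: each round's output becomes the next round's retrieval query.

```python
import re

# Toy corpus standing in for a real document collection.
CORPUS = [
    "Paris is the capital of France.",
    "The Eiffel Tower is located in Paris.",
    "France is a country in Western Europe.",
]

def _tokens(text):
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query, k=2):
    # Rank documents by naive keyword overlap with the query.
    return sorted(CORPUS,
                  key=lambda d: len(_tokens(query) & _tokens(d)),
                  reverse=True)[:k]

def generate(question, context):
    # Stand-in for an LLM: echo the question with the retrieved evidence.
    return f"Q: {question} | Evidence: {' '.join(context)}"

def iter_retgen(question, iterations=2):
    # Each iteration retrieves with the previous output as the query,
    # so generation informs the next round of retrieval.
    query = question
    answer = ""
    for _ in range(iterations):
        docs = retrieve(query)
        answer = generate(question, docs)
        query = answer  # model output becomes the next retrieval query
    return answer
```

Because the generator sees all retrieved passages jointly each round, there is no structural constraint on the output, matching the flexibility claim in the summary.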
arXiv Detail & Related papers (2023-05-24T16:17:36Z) - Neural Approaches to Conversational Information Retrieval [94.77863916314979]
A conversational information retrieval (CIR) system is an information retrieval (IR) system with a conversational interface.
Recent progress in deep learning has brought tremendous improvements in natural language processing (NLP) and conversational AI.
This book surveys recent advances in CIR, focusing on neural approaches that have been developed in the last few years.
arXiv Detail & Related papers (2022-01-13T19:04:59Z) - Beyond [CLS] through Ranking by Generation [22.27275853263564]
We revisit the generative framework for information retrieval.
We show that our generative approaches are as effective as state-of-the-art semantic similarity-based discriminative models for the answer selection task.
arXiv Detail & Related papers (2020-10-06T22:56:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.