Long Document Re-ranking with Modular Re-ranker
- URL: http://arxiv.org/abs/2205.04275v1
- Date: Mon, 9 May 2022 13:44:02 GMT
- Title: Long Document Re-ranking with Modular Re-ranker
- Authors: Luyu Gao, Jamie Callan
- Abstract summary: Long document re-ranking has been a challenging problem for neural re-rankers based on deep language models like BERT.
We propose to model full query-to-document interaction, leveraging the attention operation and a modular Transformer re-ranker framework.
- Score: 15.935423344245363
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Long document re-ranking has been a challenging problem for neural re-rankers
based on deep language models like BERT. Early work breaks the documents into
short passage-like chunks. These chunks are independently mapped to scalar
scores or latent vectors, which are then pooled into a final relevance score.
These encode-and-pool methods, however, inevitably introduce an information
bottleneck: the low-dimensional representations. In this paper, we propose
instead to model full query-to-document interaction, leveraging the attention
operation and a modular Transformer re-ranker framework. First, document chunks
are encoded independently with an encoder module. An interaction module then
encodes the query and performs joint attention from the query to all document
chunk representations. We demonstrate that the model can use this new degree of
freedom to aggregate important information from the entire document. Our
experiments show that this design produces effective re-ranking on two
classical IR collections, Robust04 and ClueWeb09, and on the large-scale
supervised MS-MARCO document ranking collection.
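To make the encode-then-interact design described in the abstract concrete, here is a minimal PyTorch-style sketch. It is an illustration, not the authors' released code: the class name `ModularReranker`, the layer counts, hidden size, and the choice to score from the first query-token position are all assumptions. What it preserves is the two-module structure, with chunks encoded independently and the query attending jointly over all chunk token states.

```python
import torch
import torch.nn as nn

class ModularReranker(nn.Module):
    """Sketch of an encode-then-interact re-ranker: chunks are encoded
    independently, then the query attends over all chunk representations."""

    def __init__(self, hidden=768, heads=12, enc_layers=2, int_layers=2):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(hidden, heads, batch_first=True)
        self.chunk_encoder = nn.TransformerEncoder(enc_layer, enc_layers)
        dec_layer = nn.TransformerDecoderLayer(hidden, heads, batch_first=True)
        # Interaction module: self-attention over query tokens plus
        # cross-attention from query tokens to all chunk token states.
        self.interaction = nn.TransformerDecoder(dec_layer, int_layers)
        self.score = nn.Linear(hidden, 1)

    def forward(self, query_emb, chunk_embs):
        # query_emb:  (1, Lq, H) embedded query tokens
        # chunk_embs: (num_chunks, Lc, H) embedded tokens of each chunk
        encoded = self.chunk_encoder(chunk_embs)           # encode chunks independently
        memory = encoded.reshape(1, -1, encoded.size(-1))  # concatenate all chunk states
        interacted = self.interaction(query_emb, memory)   # query-to-document attention
        return self.score(interacted[:, 0])                # relevance score, shape (1, 1)
```

Because the interaction module sees every chunk at once, the query can aggregate evidence from anywhere in the document instead of relying on per-chunk scalar scores or pooled vectors.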
Related papers
- SitEmb-v1.5: Improved Context-Aware Dense Retrieval for Semantic Association and Long Story Comprehension [77.93156509994994]
We show how to represent short chunks in a way that is conditioned on a broader context window to enhance retrieval performance.
Existing embedding models are not well-equipped to encode such situated context effectively.
Our method substantially outperforms state-of-the-art embedding models.
arXiv Detail & Related papers (2025-08-03T23:59:31Z)
- The Surprising Soupability of Documents in State Space Models [28.95633840848728]
Inspired by model souping, we propose a strategy where documents are encoded independently and their representations are pooled.
We finetune Mamba2 models to produce soupable representations and find that they support multi-hop QA, sparse retrieval, and long-document reasoning with strong accuracy.
On HotpotQA, souping ten independently encoded documents nearly matches the performance of a cross-encoder trained on the same inputs.
arXiv Detail & Related papers (2025-05-29T22:13:21Z)
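A loose sketch of the "souping" recipe summarized in the entry above; the `encode` callable and mean pooling are assumptions standing in for that paper's finetuned Mamba2 encoder and its exact pooling choice. The core idea is simply that documents are encoded independently and their representations are averaged before downstream use.

```python
import torch

def soup_documents(encode, documents):
    """Encode documents independently and mean-pool ('soup') their representations.

    encode: callable mapping a document string to a (H,)-shaped torch.Tensor;
            a placeholder for any encoder producing soupable representations.
    """
    reps = torch.stack([encode(doc) for doc in documents])  # (num_docs, H)
    return reps.mean(dim=0)                                  # (H,) pooled representation
```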
- Doc-CoB: Enhancing Multi-Modal Document Understanding with Visual Chain-of-Boxes Reasoning [12.17399365931]
Existing one-pass MLLMs process entire document images without considering query relevance.
Inspired by the human coarse-to-fine reading pattern, we introduce Doc-CoB, a simple-yet-effective mechanism that integrates human-style visual reasoning into MLLM.
Our method allows the model to autonomously select the set of regions most relevant to the query, and then focus attention on them for further understanding.
arXiv Detail & Related papers (2025-05-24T08:53:05Z)
- M-DocSum: Do LVLMs Genuinely Comprehend Interleaved Image-Text in Document Summarization? [49.53982792497275]
We investigate whether Large Vision-Language Models (LVLMs) genuinely comprehend interleaved image-text in the document.
Existing document understanding benchmarks often assess LVLMs using question-answer formats.
We introduce a novel and challenging Multimodal Document Summarization Benchmark (M-DocSum-Bench)
M-DocSum-Bench comprises 500 high-quality arXiv papers, along with interleaved multimodal summaries aligned with human preferences.
arXiv Detail & Related papers (2025-03-27T07:28:32Z)
- Two are better than one: Context window extension with multi-grained self-injection [111.1376461868317]
SharedLLM is a novel approach grounded in the design philosophy of multi-grained context compression and query-aware information retrieval.
We introduce a specialized tree-style data structure to efficiently encode, store and retrieve multi-grained contextual information for text chunks.
arXiv Detail & Related papers (2024-10-25T06:08:59Z)
- Efficient Document Ranking with Learnable Late Interactions [73.41976017860006]
Cross-Encoder (CE) and Dual-Encoder (DE) models are two fundamental approaches for query-document relevance in information retrieval.
To predict relevance, CE models use joint query-document embeddings, while DE models maintain factorized query and document embeddings.
Recently, late-interaction models have been proposed to realize more favorable latency-quality tradeoffs, by using a DE structure followed by a lightweight scorer.
arXiv Detail & Related papers (2024-06-25T22:50:48Z)
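The late-interaction entry above pairs a dual-encoder with a lightweight scorer. As one common instantiation (a ColBERT-style MaxSim, assumed here rather than that paper's learnable scorer), per-token query and document embeddings are produced independently, and relevance is the sum over query tokens of each token's best similarity to any document token:

```python
import torch
import torch.nn.functional as F

def maxsim_score(query_vecs: torch.Tensor, doc_vecs: torch.Tensor) -> torch.Tensor:
    """Generic late-interaction (MaxSim) relevance score.

    query_vecs: (Lq, H) per-token query embeddings from a dual-encoder
    doc_vecs:   (Ld, H) per-token document embeddings, precomputable offline
    """
    q = F.normalize(query_vecs, dim=-1)
    d = F.normalize(doc_vecs, dim=-1)
    sim = q @ d.T                          # (Lq, Ld) token-level similarities
    return sim.max(dim=-1).values.sum()    # best doc token per query token, summed
```

Because document embeddings are factorized from the query, they can be indexed offline, and only the cheap MaxSim step runs at query time.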
- Learning Diverse Document Representations with Deep Query Interactions for Dense Retrieval [79.37614949970013]
We propose a new dense retrieval model which learns diverse document representations with deep query interactions.
Our model encodes each document with a set of generated pseudo-queries to get query-informed, multi-view document representations.
arXiv Detail & Related papers (2022-08-08T16:00:55Z)
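A hedged sketch of the multi-view scoring implied by the entry above; the dot-product similarity and the max-over-views rule are assumptions, not that paper's exact training or scoring objective. A document keeps one embedding per generated pseudo-query, and at query time the best-matching view determines the score:

```python
import torch

def score_multiview(query_vec: torch.Tensor, view_vecs: torch.Tensor) -> torch.Tensor:
    """Score a document represented by several pseudo-query-informed views.

    query_vec: (H,) dense query embedding
    view_vecs: (K, H) document views, one per generated pseudo-query
    """
    sims = view_vecs @ query_vec   # (K,) similarity of the query to each view
    return sims.max()              # the best-matching view gives the score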
- Long Document Summarization with Top-down and Bottom-up Inference [113.29319668246407]
We propose a principled inference framework to improve summarization models on two aspects.
Our framework assumes a hierarchical latent structure of a document where the top-level captures the long range dependency.
We demonstrate the effectiveness of the proposed framework on a diverse set of summarization datasets.
arXiv Detail & Related papers (2022-03-15T01:24:51Z)
- End-to-End Object Detection with Transformers [88.06357745922716]
We present a new method that views object detection as a direct set prediction problem.
Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components.
The main ingredients of the new framework, called DEtection TRansformer or DETR, include a set-based global loss.
arXiv Detail & Related papers (2020-05-26T17:06:38Z)
- Recurrent Chunking Mechanisms for Long-Text Machine Reading Comprehension [59.80926970481975]
We study machine reading comprehension (MRC) on long texts.
A model takes as inputs a lengthy document and a question and then extracts a text span from the document as an answer.
We propose to let a model learn to chunk in a more flexible way via reinforcement learning.
arXiv Detail & Related papers (2020-05-16T18:08:58Z)
- Beyond 512 Tokens: Siamese Multi-depth Transformer-based Hierarchical Encoder for Long-Form Document Matching [28.190001111358438]
We propose the Siamese Multi-depth Transformer-based Hierarchical (SMITH) Encoder for long-form document matching.
Our model contains several innovations to adapt self-attention models for longer text input.
We will open source a Wikipedia based benchmark dataset, code and a pre-trained checkpoint to accelerate future research on long-form document matching.
arXiv Detail & Related papers (2020-04-26T07:04:08Z)
- Neural Abstractive Summarization with Structural Attention [31.50918718905953]
We present a hierarchical encoder based on structural attention to model such inter-sentence and inter-document dependencies.
We show that our proposed model achieves significant improvement over the baselines in both single and multi-document summarization settings.
arXiv Detail & Related papers (2020-04-21T03:39:15Z)