How Do LLM-Generated Texts Impact Term-Based Retrieval Models?
- URL: http://arxiv.org/abs/2508.17715v1
- Date: Mon, 25 Aug 2025 06:43:27 GMT
- Title: How Do LLM-Generated Texts Impact Term-Based Retrieval Models?
- Authors: Wei Huang, Keping Bi, Yinqiong Cai, Wei Chen, Jiafeng Guo, Xueqi Cheng
- Abstract summary: This paper investigates the influence of large language models (LLMs) on term-based retrieval models. Our linguistic analysis reveals that LLM-generated texts exhibit smoother high-frequency and steeper low-frequency Zipf slopes. Our study further explores whether term-based retrieval models demonstrate source bias, concluding that these models prioritize documents whose term distributions closely correspond to those of the queries.
- Score: 76.92519309816008
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As more content generated by large language models (LLMs) floods into the Internet, information retrieval (IR) systems now face the challenge of distinguishing and handling a blend of human-authored and machine-generated texts. Recent studies suggest that neural retrievers may exhibit a preferential inclination toward LLM-generated content, while classic term-based retrievers like BM25 tend to favor human-written documents. This paper investigates the influence of LLM-generated content on term-based retrieval models, which are valued for their efficiency and robust generalization across domains. Our linguistic analysis reveals that LLM-generated texts exhibit smoother high-frequency and steeper low-frequency Zipf slopes, higher term specificity, and greater document-level diversity. These traits are aligned with LLMs being trained to optimize reader experience through diverse and precise expressions. Our study further explores whether term-based retrieval models demonstrate source bias, concluding that these models prioritize documents whose term distributions closely correspond to those of the queries, rather than displaying an inherent source bias. This work provides a foundation for understanding and addressing potential biases in term-based IR systems managing mixed-source content.
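The abstract's Zipf-slope analysis can be illustrated with a short sketch: fit separate log-log slopes to the high- and low-frequency regions of a corpus's rank-frequency curve. The function name `zipf_slopes` and the `split_rank` cutoff are illustrative assumptions, not the paper's exact methodology:

```python
import math
from collections import Counter

def zipf_slopes(tokens, split_rank=50):
    """Fit separate Zipf slopes to the high- and low-frequency
    regions of a token stream's rank-frequency curve.

    Returns (high_slope, low_slope): least-squares slopes of
    log(frequency) vs. log(rank) above and below `split_rank`.
    """
    counts = sorted(Counter(tokens).values(), reverse=True)
    points = [(math.log(r), math.log(f))
              for r, f in enumerate(counts, start=1)]
    head, tail = points[:split_rank], points[split_rank:]

    def slope(pts):
        # Ordinary least-squares slope of y on x.
        n = len(pts)
        if n < 2:
            return float("nan")
        mx = sum(x for x, _ in pts) / n
        my = sum(y for _, y in pts) / n
        num = sum((x - mx) * (y - my) for x, y in pts)
        den = sum((x - mx) ** 2 for x, _ in pts)
        return num / den

    return slope(head), slope(tail)
```

Under this sketch, the paper's finding would correspond to LLM-generated corpora yielding a flatter (less negative) head slope and a steeper (more negative) tail slope than human-written corpora.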
Related papers
- Training-Induced Bias Toward LLM-Generated Content in Dense Retrieval [6.771568584669793]
Reports claim a broad preference for text generated by large language models (LLMs). In this study, we trace the emergence of such preferences across training stages and data sources. Our study demonstrates that source bias is a training-induced phenomenon rather than an inherent property of dense retrievers.
arXiv Detail & Related papers (2026-02-11T13:20:25Z) - Low-Perplexity LLM-Generated Sequences and Where To Find Them [0.0]
We introduce a systematic approach centered on analyzing low-perplexity sequences - high-probability text spans generated by the model. Our pipeline reliably extracts such long sequences across diverse topics while avoiding degeneration, then traces them back to their sources in the training data. For those that do match, we quantify the distribution of occurrences across source documents, highlighting the scope and nature of verbatim recall.
arXiv Detail & Related papers (2025-07-02T15:58:51Z) - Unleashing the Power of LLMs in Dense Retrieval with Query Likelihood Modeling [69.84963245729826]
We propose an auxiliary query likelihood (QL) task to enhance the backbone for subsequent contrastive learning of the retriever. We introduce our model, which incorporates two key components: Attention Block (AB) and Document Corruption (DC).
arXiv Detail & Related papers (2025-04-07T16:03:59Z) - A Bayesian Approach to Harnessing the Power of LLMs in Authorship Attribution [57.309390098903]
Authorship attribution aims to identify the origin or author of a document.
Large Language Models (LLMs) with their deep reasoning capabilities and ability to maintain long-range textual associations offer a promising alternative.
Our results on the IMDb and blog datasets show an impressive 85% accuracy in one-shot authorship classification across ten authors.
arXiv Detail & Related papers (2024-10-29T04:14:23Z) - Beyond Binary: Towards Fine-Grained LLM-Generated Text Detection via Role Recognition and Involvement Measurement [51.601916604301685]
Large language models (LLMs) generate content that can undermine trust in online discourse. Current methods often focus on binary classification, failing to address the complexities of real-world scenarios like human-LLM collaboration. To address these challenges, we propose a new paradigm for detecting LLM-generated content that moves beyond binary classification.
arXiv Detail & Related papers (2024-10-18T08:14:10Z) - Evaluation of Attribution Bias in Generator-Aware Retrieval-Augmented Large Language Models [47.694137341509304]
We evaluate the attribution sensitivity and bias with respect to authorship information in large language models. Our results show that adding authorship information to source documents can significantly change the attribution quality of LLMs by 3% to 18%. Our findings indicate that metadata of source documents can influence LLMs' trust and how they attribute their answers.
arXiv Detail & Related papers (2024-10-16T08:55:49Z) - ReMoDetect: Reward Models Recognize Aligned LLM's Generations [55.06804460642062]
Aligned large language models (LLMs) generate texts that humans tend to prefer.
In this paper, we identify the common characteristics shared by these models.
We propose two training schemes to further improve the detection ability of the reward model.
arXiv Detail & Related papers (2024-05-27T17:38:33Z) - Neural Retrievers are Biased Towards LLM-Generated Content [35.40318940303482]
Large language models (LLMs) have revolutionized the paradigm of information retrieval (IR) applications.
How these LLM-generated documents influence IR systems is a pressing and still unexplored question.
Surprisingly, our findings indicate that neural retrieval models tend to rank LLM-generated documents higher.
arXiv Detail & Related papers (2023-10-31T14:42:23Z) - Large Language Models can Contrastively Refine their Generation for Better Sentence Representation Learning [57.74233319453229]
Large language models (LLMs) have emerged as a groundbreaking technology and their unparalleled text generation capabilities have sparked interest in their application to the fundamental sentence representation learning task.
We propose MultiCSR, a multi-level contrastive sentence representation learning framework that decomposes the process of prompting LLMs to generate a corpus.
Our experiments reveal that MultiCSR enables a less advanced LLM to surpass the performance of ChatGPT, while applying it to ChatGPT achieves better state-of-the-art results.
arXiv Detail & Related papers (2023-10-17T03:21:43Z) - Enabling Large Language Models to Generate Text with Citations [37.64884969997378]
Large language models (LLMs) have emerged as a widely-used tool for information seeking.
Our aim is to allow LLMs to generate text with citations, improving their factual correctness and verifiability.
We propose ALCE, the first benchmark for Automatic LLMs' Citation Evaluation.
arXiv Detail & Related papers (2023-05-24T01:53:49Z) - Synergistic Interplay between Search and Large Language Models for Information Retrieval [141.18083677333848]
InteR allows RMs to expand knowledge in queries using LLM-generated knowledge collections.
InteR achieves overall superior zero-shot retrieval performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-12T11:58:15Z)
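The term-based retrieval models that the main paper studies, with BM25 as the canonical example, can be sketched minimally. This is a textbook BM25 variant with common illustrative defaults (k1=1.5, b=0.75), not any listed paper's specific implementation:

```python
import math
from collections import Counter

def bm25_scores(query_terms, docs, k1=1.5, b=0.75):
    """Score each tokenized document in `docs` against `query_terms`
    using classic BM25 with length normalization."""
    N = len(docs)
    avgdl = sum(len(d) for d in docs) / N
    # Document frequency of each term.
    df = Counter()
    for d in docs:
        for term in set(d):
            df[term] += 1
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for t in query_terms:
            if t not in tf:
                continue
            # Smoothed IDF, kept non-negative via log(1 + ...).
            idf = math.log(1 + (N - df[t] + 0.5) / (df[t] + 0.5))
            norm = tf[t] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[t] * (k1 + 1) / norm
        scores.append(score)
    return scores
```

Because the score depends only on term overlap, term frequency, and document length, a model like this ranks documents by how closely their term distributions match the query, which is the mechanism the main paper identifies in place of an inherent source bias.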
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.