Improving Zero-shot LLM Re-Ranker with Risk Minimization
- URL: http://arxiv.org/abs/2406.13331v1
- Date: Wed, 19 Jun 2024 08:29:54 GMT
- Title: Improving Zero-shot LLM Re-Ranker with Risk Minimization
- Authors: Xiaowei Yuan, Zhao Yang, Yequan Wang, Jun Zhao, Kang Liu
- Abstract summary: In the Retrieval-Augmented Generation (RAG) system, advanced Large Language Models (LLMs) have emerged as effective Query Likelihood Models (QLMs) in an unsupervised way.
However, directly prompting LLMs to approximate QLMs is inherently biased: the estimated distribution might diverge from the actual document-specific distribution.
We introduce a novel framework, $\mathrm{UR^3}$, which leverages Bayesian decision theory to both quantify and mitigate this estimation bias.
- Score: 20.32406191251512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the Retrieval-Augmented Generation (RAG) system, advanced Large Language Models (LLMs) have emerged as effective Query Likelihood Models (QLMs) in an unsupervised way, re-ranking documents based on the probability of generating the query given the content of a document. However, directly prompting LLMs to approximate QLMs is inherently biased: the estimated distribution might diverge from the actual document-specific distribution. In this study, we introduce a novel framework, $\mathrm{UR^3}$, which leverages Bayesian decision theory to both quantify and mitigate this estimation bias. Specifically, $\mathrm{UR^3}$ reformulates the problem as maximizing the probability of document generation, thereby harmonizing the optimization of query and document generation probabilities under a unified risk minimization objective. Our empirical results indicate that $\mathrm{UR^3}$ significantly enhances re-ranking, particularly in improving Top-1 accuracy. It also benefits QA tasks by achieving higher accuracy with fewer input documents.
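For context, the zero-shot QLM re-ranking baseline that $\mathrm{UR^3}$ improves on can be sketched directly: each candidate document is scored by the log-probability a causal LLM assigns to the query given the document. The model choice and prompt template below are illustrative assumptions, and the sketch omits the document-generation term that $\mathrm{UR^3}$ adds.

```python
# A minimal sketch of zero-shot query-likelihood (QLM) re-ranking with a
# causal LLM. This is the baseline UR^3 builds on, not UR^3 itself: each
# document is scored by log P(query | document) under an assumed prompt.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative model choice
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def query_log_likelihood(document: str, query: str) -> float:
    """Sum of token log-probabilities of `query` given a document prompt."""
    prompt = f"Passage: {document}\nPlease write a question based on this passage.\nQuestion:"
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + " " + query, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probs at position t predict the token at position t + 1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    total = 0.0
    for pos in range(prompt_ids.shape[1] - 1, full_ids.shape[1] - 1):
        total += log_probs[pos, full_ids[0, pos + 1]].item()
    return total

def rerank(documents: list[str], query: str) -> list[str]:
    """Order documents by descending query log-likelihood."""
    return sorted(documents, key=lambda d: query_log_likelihood(d, query), reverse=True)
```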
Related papers
- ProRank: Prompt Warmup via Reinforcement Learning for Small Language Models Reranking [8.244386008877441]
We introduce a novel two-stage training approach, ProRank, for SLM-based document reranking.
First, we propose a prompt warmup stage that uses the GRPO reinforcement learning algorithm to steer SLMs to understand task prompts.
Then, we continuously fine-tune the SLMs with a fine-grained score-learning stage, without introducing additional layers, to further improve reranking quality.
arXiv Detail & Related papers (2025-06-04T02:00:44Z) - Optimal Policy Minimum Bayesian Risk [25.434911234706952]
We present a novel method for incorporating reward and risk/similarity signals into minimum Bayes risk decoding (MBRD).
Based on the concept of the optimal policy in KL-controlled reinforcement learning, our framework provides a simple and well-defined mechanism for leveraging such signals.
It offers several advantages over traditional inference-time methods: higher robustness, improved accuracy, and well-understood behavior.
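As a point of reference, plain minimum Bayes risk selection over sampled candidates looks like the sketch below; the paper's contribution is the reward-aware, KL-controlled weighting layered on top, which is not reproduced here.

```python
# A minimal sketch of vanilla minimum Bayes risk (MBR) selection over sampled
# candidates; MBRD additionally folds in a reward signal via a KL-controlled
# optimal policy, which this sketch does not attempt to reproduce.
def mbr_select(candidates, similarity):
    """Pick the candidate with highest mean similarity to the other samples."""
    n = len(candidates)
    def expected_utility(i):
        return sum(similarity(candidates[i], candidates[j])
                   for j in range(n) if j != i) / max(n - 1, 1)
    return candidates[max(range(n), key=expected_utility)]

def jaccard(a: str, b: str) -> float:
    """Toy similarity: token-set overlap, standing in for e.g. ROUGE."""
    ta, tb = set(a.split()), set(b.split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0
```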
arXiv Detail & Related papers (2025-05-22T19:43:37Z) - Likelihood-Ratio Regularized Quantile Regression: Adapting Conformal Prediction to High-Dimensional Covariate Shifts [35.16750653336608]
We introduce the likelihood-ratio regularized quantile regression (LR-QR) algorithm, which combines the pinball loss with a novel choice of regularization.
We show that the LR-QR method has coverage at the desired level in the target domain, up to a small error term.
Our experiments demonstrate that the LR-QR algorithm outperforms existing methods on high-dimensional prediction tasks.
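The pinball loss that LR-QR starts from is easy to state; the likelihood-ratio regularizer is the paper's novelty and is omitted from this sketch.

```python
# A minimal sketch of the pinball (check) loss at level tau, the base
# objective that LR-QR regularizes; the likelihood-ratio regularization
# term itself is paper-specific and omitted.
import numpy as np

def pinball_loss(y_true: np.ndarray, y_pred: np.ndarray, tau: float) -> float:
    """Mean check loss; minimized when y_pred is the tau-quantile of y_true."""
    residual = y_true - y_pred
    return float(np.mean(np.maximum(tau * residual, (tau - 1.0) * residual)))
```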
arXiv Detail & Related papers (2025-02-18T16:46:44Z) - Inference Scaling for Bridging Retrieval and Augmented Generation [47.091086803980765]
Retrieval-augmented generation (RAG) has emerged as a popular approach to steering the output of a large language model (LLM).
We show that such bias can be mitigated through inference scaling: aggregating inference calls over permuted orderings of the retrieved contexts.
We showcase the effectiveness of MOI on diverse RAG tasks, improving ROUGE-L on MS MARCO and EM on HotpotQA benchmarks by 7 points.
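A simple form of this idea is majority voting over answers generated from shuffled context orderings; the paper's MOI estimator is more refined, and `generate_answer` below is a hypothetical LLM call.

```python
# A minimal sketch of mitigating position bias by aggregating over permuted
# context orderings. `generate_answer` is a hypothetical LLM call; MOI's
# actual aggregation is more refined than this majority vote.
import collections
import random

def permutation_vote(contexts, question, generate_answer, n_orders=8, seed=0):
    """Majority vote over answers from randomly shuffled context orderings."""
    rng = random.Random(seed)
    votes = collections.Counter()
    for _ in range(n_orders):
        order = list(contexts)
        rng.shuffle(order)
        votes[generate_answer(order, question)] += 1
    return votes.most_common(1)[0][0]
```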
arXiv Detail & Related papers (2024-12-14T05:06:43Z) - Calibrating Verbalized Probabilities for Large Language Models [7.586709509332433]
Calibrating verbalized probabilities presents a novel approach for reliably assessing and leveraging outputs from black-box Large Language Models.
Recent methods have demonstrated improved calibration by applying techniques like Platt scaling or temperature scaling to the confidence scores generated by LLMs.
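Temperature scaling of verbalized confidences, one of the techniques mentioned above, reduces to a one-parameter fit in logit space; the grid-search fitting below is a common simplification, not the paper's procedure.

```python
# A minimal sketch of temperature scaling for verbalized confidences: rescale
# in logit space and fit T by grid search on held-out negative log-likelihood.
# A simplification for illustration, not the paper's calibration procedure.
import numpy as np

def apply_temperature(p: np.ndarray, T: float) -> np.ndarray:
    """Rescale probabilities in logit space: sigmoid(logit(p) / T)."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    logits = np.log(p / (1 - p))
    return 1.0 / (1.0 + np.exp(-logits / T))

def fit_temperature(p: np.ndarray, y: np.ndarray) -> float:
    """Choose T minimizing NLL of binary correctness labels y in {0, 1}."""
    def nll(T: float) -> float:
        q = np.clip(apply_temperature(p, T), 1e-6, 1 - 1e-6)
        return float(-np.mean(y * np.log(q) + (1 - y) * np.log(1 - q)))
    grid = np.linspace(0.25, 4.0, 64)
    return float(grid[int(np.argmin([nll(T) for T in grid]))])
```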
arXiv Detail & Related papers (2024-10-09T09:20:24Z) - A Probabilistic Perspective on Unlearning and Alignment for Large Language Models [48.96686419141881]
We introduce the first formal probabilistic evaluation framework for Large Language Models (LLMs).
Namely, we propose novel metrics with high probability guarantees concerning the output distribution of a model.
Our metrics are application-independent and allow practitioners to make more reliable estimates about model capabilities before deployment.
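In the same spirit, a sampling-based metric with a high-probability guarantee can be as simple as an empirical rate plus a Hoeffding confidence term; this illustrates the flavor of such guarantees, not the paper's specific metrics.

```python
# A minimal sketch in the spirit of probabilistic evaluation: from n i.i.d.
# generations, bound the probability of an undesired output with a one-sided
# Hoeffding term holding with probability at least 1 - delta. Illustrative
# only; the paper defines its own metrics and guarantees.
import math

def bad_output_upper_bound(bad_count: int, n: int, delta: float = 0.05) -> float:
    """Empirical failure rate plus Hoeffding slack sqrt(log(1/delta) / (2n))."""
    p_hat = bad_count / n
    return min(1.0, p_hat + math.sqrt(math.log(1.0 / delta) / (2 * n)))
```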
arXiv Detail & Related papers (2024-10-04T15:44:23Z) - Cycles of Thought: Measuring LLM Confidence through Stable Explanations [53.15438489398938]
Large language models (LLMs) can reach and even surpass human-level accuracy on a variety of benchmarks, but their overconfidence in incorrect responses is still a well-documented failure mode.
We propose a framework for measuring an LLM's uncertainty with respect to the distribution of generated explanations for an answer.
arXiv Detail & Related papers (2024-06-05T16:35:30Z) - BIRD: A Trustworthy Bayesian Inference Framework for Large Language Models [52.46248487458641]
Predictive models often need to work with incomplete information in real-world tasks.
Current large language models (LLMs) are insufficient for such accurate estimations.
We propose BIRD, a novel probabilistic inference framework.
arXiv Detail & Related papers (2024-04-18T20:17:23Z) - Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift [34.414261291690856]
A key challenge of modern machine learning systems is achieving Out-of-Distribution (OOD) generalization.
We show that classical Maximum Likelihood Estimation (MLE), using source data alone, achieves minimax optimality.
We illustrate the wide applicability of our framework by instantiating it to three concrete examples.
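The claim can be exercised end-to-end on synthetic data: fit plain MLE on source samples only, then evaluate under shifted covariates with the same (well-specified) conditional. The setup below is an illustrative assumption, not an experiment from the paper.

```python
# A minimal worked example of the claim on synthetic data: plain MLE
# (logistic regression) fit on source covariates transfers to a shifted
# target because P(y|x) is shared and well-specified. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])

def sample(mean: float, n: int):
    """Draw covariates around `mean` and labels from the true conditional."""
    X = rng.normal(mean, 1.0, size=(n, 2))
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
    return X, y

Xs, ys = sample(0.0, 5000)   # source distribution
Xt, yt = sample(1.5, 5000)   # target: shifted covariates, same P(y|x)

w = np.zeros(2)              # MLE via gradient ascent on source log-likelihood
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-Xs @ w))
    w += 0.1 * Xs.T @ (ys - p) / len(ys)

acc = np.mean(((Xt @ w) > 0) == (yt > 0.5))
print(f"target accuracy of source-only MLE: {acc:.3f}")
```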
arXiv Detail & Related papers (2023-11-27T16:06:48Z) - Open-source Large Language Models are Strong Zero-shot Query Likelihood Models for Document Ranking [36.90911173089409]
Large language models (LLMs) have emerged as effective Query Likelihood Models (QLMs).
This paper focuses on investigating the genuine zero-shot ranking effectiveness of recent LLMs.
We introduce a novel state-of-the-art ranking system that integrates LLM-based QLMs with a hybrid zero-shot retriever.
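One common way to integrate a first-stage retriever with an LLM-based QLM re-ranker is linear score interpolation after normalization; the fusion rule below is standard practice assumed for illustration, not necessarily the paper's.

```python
# A minimal sketch of fusing retriever and LLM-QLM scores by min-max
# normalization and linear interpolation. The fusion rule and alpha are
# common-practice assumptions, not taken from the paper.
def hybrid_scores(retriever_scores, qlm_scores, alpha=0.5):
    """Blend two score lists for the same ranked candidates."""
    def minmax(xs):
        lo, hi = min(xs), max(xs)
        return [(x - lo) / (hi - lo) if hi > lo else 0.0 for x in xs]
    r, q = minmax(retriever_scores), minmax(qlm_scores)
    return [alpha * ri + (1.0 - alpha) * qi for ri, qi in zip(r, q)]
```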
arXiv Detail & Related papers (2023-10-20T02:54:42Z) - Minimum-Risk Recalibration of Classifiers [9.31067660373791]
We introduce the concept of minimum-risk recalibration within the framework of mean-squared-error decomposition.
We show that transferring a calibrated classifier requires significantly fewer target samples compared to recalibrating from scratch.
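For reference, the simplest recalibration map in this family is histogram binning, fit on a handful of labeled target samples; the paper's minimum-risk procedure refines this, and the sketch below is only the classic baseline.

```python
# A minimal sketch of histogram-binning recalibration, a classic baseline in
# the calibration literature; the paper's minimum-risk procedure is not
# reproduced here.
import numpy as np

def fit_binning(conf: np.ndarray, y: np.ndarray, n_bins: int = 10):
    """Map each confidence bin to the empirical accuracy observed in it."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(conf, edges) - 1, 0, n_bins - 1)
    table = np.array([
        y[idx == b].mean() if np.any(idx == b) else 0.5 * (edges[b] + edges[b + 1])
        for b in range(n_bins)
    ])
    return lambda c: table[np.clip(np.digitize(c, edges) - 1, 0, n_bins - 1)]
```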
arXiv Detail & Related papers (2023-05-18T11:27:02Z) - Probable Domain Generalization via Quantile Risk Minimization [90.15831047587302]
Domain generalization (DG) seeks predictors which perform well on unseen test distributions.
We propose a new probabilistic framework for DG where the goal is to learn predictors that perform well with high probability.
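The objective itself is compact: rather than the average or worst-case risk across training domains, minimize an empirical alpha-quantile of the per-domain risks, as sketched below (the training loop around it is omitted).

```python
# A minimal sketch of the Quantile Risk Minimization objective: the empirical
# alpha-quantile of per-domain risks. alpha -> 1 recovers worst-case
# (group-DRO-style) risk; alpha = 0.5 gives the median-domain risk.
import numpy as np

def quantile_risk(domain_risks: np.ndarray, alpha: float = 0.9) -> float:
    """alpha-quantile of per-domain empirical risks for one model."""
    return float(np.quantile(domain_risks, alpha))
```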
arXiv Detail & Related papers (2022-07-20T14:41:09Z) - Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with distributionally robust optimization (DRO) using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
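The core move is to replace per-example DRO weights with a learned parametric ratio model; a skeletal version of the reweighted loss is sketched below, with the normalization scheme being an assumption rather than the paper's exact construction.

```python
# A minimal sketch of DRO-style training with a parametric likelihood ratio:
# an adversary produces per-example logits, normalized into weights with mean
# ~1 and applied to the losses. The normalization here is an assumption, not
# the paper's exact construction.
import torch

def reweighted_loss(losses: torch.Tensor, ratio_logits: torch.Tensor) -> torch.Tensor:
    """Weight per-example losses by a normalized parametric likelihood ratio."""
    weights = torch.softmax(ratio_logits, dim=0) * losses.numel()  # mean ~= 1
    return (weights * losses).mean()
```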
arXiv Detail & Related papers (2022-04-13T12:43:12Z) - A Multi-Document Coverage Reward for RELAXed Multi-Document Summarization [11.02198476454955]
We propose fine-tuning a multi-document summarization (MDS) baseline with a reward that balances a reference-based metric with coverage of the input documents.
Experimental results over the Multi-News and WCEP MDS datasets show significant improvements of up to +0.95 pp average ROUGE score and +3.17 pp METEOR score over the baseline.
arXiv Detail & Related papers (2022-03-06T07:33:01Z) - Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims to identify the best policy as quickly as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z) - Distributionally Robust Bayesian Quadrature Optimization [60.383252534861136]
We study Bayesian quadrature optimization (BQO) under distributional uncertainty, in which the underlying probability distribution is unknown except for a limited set of its i.i.d. samples.
A standard BQO approach maximizes the Monte Carlo estimate of the true expected objective given the fixed sample set.
We propose a novel posterior sampling based algorithm, namely distributionally robust BQO (DRBQO) for this purpose.
arXiv Detail & Related papers (2020-01-19T12:00:33Z)