Factuality and Transparency Are All RAG Needs! Self-Explaining Contrastive Evidence Re-ranking
- URL: http://arxiv.org/abs/2512.05012v1
- Date: Thu, 04 Dec 2025 17:24:35 GMT
- Title: Factuality and Transparency Are All RAG Needs! Self-Explaining Contrastive Evidence Re-ranking
- Authors: Francielle Vargas, Daniel Pedronette
- Abstract summary: This extended abstract introduces Self-Explaining Contrastive Evidence Re-Ranking (CER). CER restructures retrieval around factual evidence by fine-tuning embeddings with contrastive learning and generating token-level attribution rationales for each retrieved passage. We evaluated our method on clinical trial reports, and initial experimental results show that CER improves retrieval accuracy, mitigates the potential for hallucinations in RAG systems, and provides transparent, evidence-based retrieval that enhances reliability, especially in safety-critical domains.
- Score: 0.2864713389096699
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This extended abstract introduces Self-Explaining Contrastive Evidence Re-Ranking (CER), a novel method that restructures retrieval around factual evidence by fine-tuning embeddings with contrastive learning and generating token-level attribution rationales for each retrieved passage. Hard negatives are automatically selected using a subjectivity-based criterion, forcing the model to pull factual rationales closer while pushing subjective or misleading explanations apart. As a result, the method creates an embedding space explicitly aligned with evidential reasoning. We evaluated our method on clinical trial reports, and initial experimental results show that CER improves retrieval accuracy, mitigates the potential for hallucinations in RAG systems, and provides transparent, evidence-based retrieval that enhances reliability, especially in safety-critical domains.
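The training recipe described in the abstract (contrastive fine-tuning of embeddings, with hard negatives chosen by a subjectivity criterion) can be pictured as a margin-based triplet objective over passage embeddings. The sketch below is a minimal illustration, not the authors' implementation: the `subjectivity_score` heuristic, the `all-MiniLM-L6-v2` encoder, and the margin value are all assumptions.

```python
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder encoder

def subjectivity_score(passage: str) -> float:
    # Toy lexical proxy; the paper uses a subjectivity-based selection
    # criterion whose exact form is not given in this abstract.
    cues = {"believe", "think", "likely", "remarkable", "surprisingly"}
    tokens = passage.lower().split()
    return sum(t in cues for t in tokens) / max(len(tokens), 1)

def select_hard_negatives(candidates: list[str], k: int = 4) -> list[str]:
    # The most subjective candidates serve as hard negatives.
    return sorted(candidates, key=subjectivity_score, reverse=True)[:k]

def contrastive_loss(query: str, factual: str, negatives: list[str],
                     margin: float = 0.3) -> torch.Tensor:
    # Pull the factual rationale toward the query; push subjective or
    # misleading ones apart (margin-based triplet formulation).
    q = torch.tensor(encoder.encode(query))
    pos = torch.tensor(encoder.encode(factual))
    negs = torch.tensor(encoder.encode(negatives))
    pos_sim = F.cosine_similarity(q, pos, dim=0)
    neg_sim = F.cosine_similarity(q.unsqueeze(0), negs, dim=1).max()
    return F.relu(margin - pos_sim + neg_sim)
```

In the actual method the encoder weights would be updated through a loss of this kind; `encode()` appears here only to show the geometry of the objective.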
Related papers
- ExDR: Explanation-driven Dynamic Retrieval Enhancement for Multimodal Fake News Detection [23.87220484843729]
Multimodal fake news poses a serious societal threat. Dynamic Retrieval-Augmented Generation provides a promising solution by triggering keyword-based retrieval. We propose ExDR, an Explanation-driven Dynamic Retrieval-Augmented Generation framework for Multimodal Fake News Detection.
arXiv Detail & Related papers (2026-01-22T10:10:06Z) - REFLEX: Self-Refining Explainable Fact-Checking via Disentangling Truth into Style and Substance [14.932352020762991]
We propose REFLEX, a REason-guided Fact-checking with Latent EXplanations paradigm. It is a plug-and-play, self-refining paradigm that leverages the internal knowledge of the backbone model to improve both verdict accuracy and explanation quality. With only 465 self-refined training samples, REFLEX achieves state-of-the-art performance.
arXiv Detail & Related papers (2025-11-25T12:06:23Z) - Look As You Think: Unifying Reasoning and Visual Evidence Attribution for Verifiable Document RAG via Reinforcement Learning [55.232400251303794]
Look As You Think (LAT) is a reinforcement learning framework that trains models to produce verifiable reasoning paths with consistent attribution. LAT consistently improves the vanilla model in both single- and multi-image settings, yielding average gains of 8.23% in soft exact match (EM) and 47.0% in IoU@0.5.
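For reference, IoU@0.5 counts a predicted attribution region as correct when its intersection-over-union with the gold evidence region is at least 0.5. A minimal sketch, assuming axis-aligned boxes in (x1, y1, x2, y2) format (the paper's exact region representation is an assumption here):

```python
def iou(box_a, box_b):
    # Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def iou_at_05(predicted, gold):
    # Fraction of examples whose predicted region overlaps the gold
    # region with IoU >= 0.5.
    hits = sum(iou(p, g) >= 0.5 for p, g in zip(predicted, gold))
    return hits / len(gold)
```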
arXiv Detail & Related papers (2025-11-15T02:50:23Z) - Abductive Inference in Retrieval-Augmented Language Models: Generating and Validating Missing Premises [0.0]
We propose a framework that integrates abductive inference into retrieval-augmented LLMs. Experimental results on abductive reasoning and multi-hop QA benchmarks show that our approach improves both answer accuracy and reasoning faithfulness. This work highlights abductive inference as a promising direction for enhancing the robustness and explainability of RAG systems.
arXiv Detail & Related papers (2025-11-06T03:37:24Z) - Learning to Extract Rational Evidence via Reinforcement Learning for Retrieval-Augmented Generation [37.47571308389908]
Retrieval-Augmented Generation (RAG) effectively improves the accuracy of Large Language Models (LLMs). Previous methods extract evidence straightforwardly without explicit thinking, which risks filtering out key clues and struggles with generalization. We propose EviOmni, which learns to extract rational evidence by (1) explicitly reasoning to identify potential cues within the retrieved content, and then (2) consciously extracting to avoid omitting any key cues helpful for answering questions.
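The reason-then-extract idea can be illustrated with a hypothetical two-stage prompt pipeline; `llm` is any prompt-to-text callable, and the prompt wording is invented for illustration rather than taken from the paper.

```python
def extract_rational_evidence(llm, question: str, passages: list[str]) -> str:
    context = "\n\n".join(passages)
    # Stage 1: reason explicitly to identify potential cues.
    cues = llm(f"Question: {question}\n\nContext:\n{context}\n\n"
               "List the clues in the context needed to answer the question.")
    # Stage 2: extract evidence guided by the cues, so no key clue is omitted.
    return llm(f"Question: {question}\n\nContext:\n{context}\n\n"
               f"Clues:\n{cues}\n\n"
               "Copy the minimal evidence spans that cover every clue.")
```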
arXiv Detail & Related papers (2025-07-21T13:03:55Z) - Retrieval is Not Enough: Enhancing RAG Reasoning through Test-Time Critique and Optimization [58.390885294401066]
Retrieval-augmented generation (RAG) has become a widely adopted paradigm for enabling knowledge-grounded large language models (LLMs). RAG pipelines often fail to ensure that model reasoning remains consistent with the retrieved evidence, leading to factual inconsistencies or unsupported conclusions. We propose AlignRAG, a novel iterative framework grounded in Critique-Driven Alignment (CDA). We introduce AlignRAG-auto, an autonomous variant that dynamically terminates refinement, removing the need to pre-specify the number of critique iterations.
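The critique-then-refine loop can be pictured as below. This is a sketch under stated assumptions: the `generate` and `critique` callables and the `Critique` type are hypothetical stand-ins, not AlignRAG's actual interfaces.

```python
from dataclasses import dataclass

@dataclass
class Critique:
    is_consistent: bool  # does the answer follow from the evidence?
    text: str            # natural-language feedback for revision

def critique_driven_answer(generate, critique, query, passages, max_iters=3):
    # generate(query, passages, hint) -> str; critique(...) -> Critique.
    answer = generate(query, passages, hint=None)
    for _ in range(max_iters):
        feedback = critique(query, passages, answer)
        if feedback.is_consistent:  # dynamic termination, cf. AlignRAG-auto
            break
        answer = generate(query, passages, hint=feedback.text)
    return answer
```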
arXiv Detail & Related papers (2025-04-21T04:56:47Z) - TrustLoRA: Low-Rank Adaptation for Failure Detection under Out-of-distribution Data [62.22804234013273]
We propose a simple failure detection framework to unify and facilitate classification with rejection under both covariate and semantic shifts. Our key insight is that by separating and consolidating failure-specific reliability knowledge with low-rank adapters, we can enhance the failure detection ability effectively and flexibly.
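As background, a low-rank adapter adds a trainable low-rank update on top of a frozen weight matrix. A generic sketch (TrustLoRA's failure-specific consolidation is not reproduced here):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Frozen base layer plus trainable low-rank update B @ A, scaled by alpha/rank.
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # keep the backbone frozen
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)
```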
arXiv Detail & Related papers (2025-04-20T09:20:55Z) - Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z) - AMRFact: Enhancing Summarization Factuality Evaluation with AMR-Driven Negative Samples Generation [57.8363998797433]
We propose AMRFact, a framework that generates perturbed summaries using Abstract Meaning Representations (AMRs).
Our approach parses factually consistent summaries into AMR graphs and injects controlled factual inconsistencies to create negative examples, allowing for coherent factually inconsistent summaries to be generated with high error-type coverage.
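AMRFact operates on AMR graphs; the simplified sketch below instead injects a controlled inconsistency at the surface level (a single entity swap), purely to illustrate the negative-sampling idea rather than the AMR pipeline itself.

```python
import random

def make_negative(summary: str, entities: list[str],
                  rng: random.Random = random.Random(0)) -> str:
    # Swap one entity that appears in the summary for one that does not,
    # producing a coherent but factually inconsistent negative example.
    present = [e for e in entities if e in summary]
    absent = [e for e in entities if e not in summary]
    if not present or not absent:
        return summary  # no controlled swap possible
    return summary.replace(rng.choice(present), rng.choice(absent), 1)
```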
arXiv Detail & Related papers (2023-11-16T02:56:29Z) - Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition [57.08108545219043]
Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision.
Existing literature addresses this challenge by employing local-based representation approaches.
This article proposes a novel model, Robust Saliency-aware Distillation (RSaD), for few-shot fine-grained visual recognition.
arXiv Detail & Related papers (2023-05-12T00:13:17Z) - EviDR: Evidence-Emphasized Discrete Reasoning for Reasoning Machine Reading Comprehension [39.970232108247394]
Reasoning machine reading comprehension (R-MRC) aims to answer complex questions that require discrete reasoning based on text.
Previous end-to-end methods that achieve state-of-the-art performance rarely place enough emphasis on modeling evidence.
We propose an evidence-emphasized discrete reasoning approach (EviDR), in which sentence and clause level evidence is first detected based on distant supervision.
arXiv Detail & Related papers (2021-08-18T06:49:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.