Know Where to Go: Make LLM a Relevant, Responsible, and Trustworthy
Searcher
- URL: http://arxiv.org/abs/2310.12443v1
- Date: Thu, 19 Oct 2023 03:49:36 GMT
- Title: Know Where to Go: Make LLM a Relevant, Responsible, and Trustworthy
Searcher
- Authors: Xiang Shi, Jiawei Liu, Yinpeng Liu, Qikai Cheng, Wei Lu
- Abstract summary: Large Language Models (LLMs) have shown the potential to improve relevance and provide direct answers in web searches. However, challenges arise in the reliability of generated results and the credibility of contributing sources.
We propose a novel generative retrieval framework leveraging the knowledge of LLMs to foster a direct link between queries and online sources.
- Score: 10.053004550486214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The advent of Large Language Models (LLMs) has shown the potential to improve
relevance and provide direct answers in web searches. However, challenges arise
in validating the reliability of generated results and the credibility of
contributing sources, due to the limitations of traditional information
retrieval algorithms and the LLM hallucination problem. Aiming to create a
"PageRank" for the LLM era, we strive to transform LLM into a relevant,
responsible, and trustworthy searcher. We propose a novel generative retrieval
framework leveraging the knowledge of LLMs to foster a direct link between
queries and online sources. This framework consists of three core modules:
Generator, Validator, and Optimizer, each focusing on generating trustworthy
online sources, verifying source reliability, and refining unreliable sources,
respectively. Extensive experiments and evaluations highlight our method's
superior relevance, responsibility, and trustfulness against various SOTA
methods.
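The three-module pipeline described in the abstract can be sketched as a simple control loop: the Generator proposes candidate sources, the Validator filters them, and the Optimizer refines the query for any sources that fail validation. This is an illustrative reconstruction, not the authors' code; the callables (`generate`, `validate`, `optimize`) and the bounded-retry logic are assumptions.

```python
def generative_retrieval(query, generate, validate, optimize, max_rounds=3):
    """Illustrative Generator -> Validator -> Optimizer loop.

    generate(query)            -> candidate source URLs (Generator)
    validate(url)              -> True if the source is reliable (Validator)
    optimize(query, bad_urls)  -> refined query for retrying (Optimizer)
    """
    trusted = []
    for _ in range(max_rounds):
        unreliable = []
        for url in generate(query):          # Generator: LLM proposes sources
            if validate(url):                # Validator: verify reliability
                trusted.append(url)
            else:
                unreliable.append(url)
        if not unreliable:                   # all candidates passed; stop early
            break
        query = optimize(query, unreliable)  # Optimizer: refine and retry
    return trusted
```

In practice each callable would wrap an LLM prompt or a reliability check; the loop simply formalizes "generate, verify, refine" with a retry budget.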
Related papers
- Leveraging LLM Parametric Knowledge for Fact Checking without Retrieval [60.25608870901428]
Trustworthiness is a core research challenge for agentic AI systems built on Large Language Models (LLMs). We propose the task of fact-checking without retrieval, focusing on the verification of arbitrary natural language claims, independent of their source robustness.
arXiv Detail & Related papers (2026-03-05T18:42:51Z)
- Agentic Multi-Persona Framework for Evidence-Aware Fake News Detection [0.7534418099163723]
AMPEND-LS is an agentic multi-persona evidence-grounded framework for multimodal fake news detection. It integrates textual, visual, and contextual signals through a structured reasoning pipeline powered by LLMs. Experiments show that AMPEND-LS consistently outperformed state-of-the-art baselines in accuracy, F1 score, and robustness. This work advances the development of adaptive, explainable, and evidence-aware systems for safeguarding online information integrity.
arXiv Detail & Related papers (2025-12-24T08:06:52Z)
- Enhancing Factual Accuracy and Citation Generation in LLMs via Multi-Stage Self-Verification [41.99844472131922]
This research introduces VeriFact-CoT, a novel method designed to address the pervasive issues of hallucination and the absence of credible citation sources in Large Language Models (LLMs). By incorporating a multi-stage mechanism of 'fact verification-reflection-citation integration,' VeriFact-CoT empowers LLMs to critically self-examine and revise their intermediate reasoning steps and final answers.
arXiv Detail & Related papers (2025-09-06T15:07:59Z)
- A Trustworthy Multi-LLM Network: Challenges, Solutions, and A Use Case [59.58213261128626]
We propose a blockchain-enabled collaborative framework that connects multiple Large Language Models (LLMs) into a Trustworthy Multi-LLM Network (MultiLLMN). This architecture enables the cooperative evaluation and selection of the most reliable and high-quality responses to complex network optimization problems.
arXiv Detail & Related papers (2025-05-06T05:32:46Z)
- Accommodate Knowledge Conflicts in Retrieval-augmented LLMs: Towards Reliable Response Generation in the Wild [11.058848731627233]
Large language models (LLMs) have advanced information retrieval systems.
LLMs often face knowledge conflicts between internal memory and retrieved external information.
We propose Swin-VIB, a novel framework that integrates a pipeline of variational information bottleneck models into adaptive augmentation of retrieved information.
arXiv Detail & Related papers (2025-04-17T14:40:31Z)
- Optimizing Knowledge Integration in Retrieval-Augmented Generation with Self-Selection [72.92366526004464]
Retrieval-Augmented Generation (RAG) has proven effective in enabling Large Language Models (LLMs) to produce more accurate and reliable responses.
We propose a novel Self-Selection RAG framework, in which the LLM selects from pairwise responses, including one generated solely with its internal parametric knowledge.
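The self-selection idea can be sketched as generating two candidate answers, one parametric-only and one retrieval-grounded, and letting the model judge between them. The callables and the 0/1 judging convention below are illustrative assumptions, not the paper's exact design.

```python
def self_selection(question, answer_parametric, answer_with_rag, judge):
    """Sketch of pairwise self-selection between two candidate answers.

    answer_parametric(q) -> answer from internal knowledge only (assumption)
    answer_with_rag(q)   -> answer grounded in retrieved passages (assumption)
    judge(q, a, b)       -> 0 to keep answer a, 1 to keep answer b (assumption)
    """
    a = answer_parametric(question)   # parametric-only candidate
    b = answer_with_rag(question)     # retrieval-augmented candidate
    return a if judge(question, a, b) == 0 else b
```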
arXiv Detail & Related papers (2025-02-10T04:29:36Z)
- A MapReduce Approach to Effectively Utilize Long Context Information in Retrieval Augmented Language Models [24.509988895204472]
Large language models (LLMs) struggle to produce up-to-date responses on evolving topics due to outdated knowledge or hallucination.
Retrieval-augmented generation (RAG) is a pivotal innovation that improves the accuracy and relevance of LLM responses.
We propose a map-reduce strategy, BriefContext, to combat the "lost-in-the-middle" issue without modifying the model weights.
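A map-reduce strategy over retrieved context can be sketched as: split the long context into short partitions, answer within each partition (map), then synthesize a final answer from the partial answers (reduce). The chunking scheme and the `answer`/`synthesize` callables are illustrative assumptions, not BriefContext's exact design.

```python
def map_reduce_context(question, passages, answer, synthesize, chunk_size=4):
    """Map-reduce over retrieved passages to mitigate 'lost in the middle'."""
    # Map: answer the question against each short partition of the context,
    # so no single prompt carries the full (and easily diluted) context.
    partials = []
    for i in range(0, len(passages), chunk_size):
        chunk = passages[i:i + chunk_size]
        partials.append(answer(question, chunk))
    # Reduce: synthesize one final answer from the partial answers.
    return synthesize(question, partials)
```

Note that this leaves the model weights untouched; only the prompting workflow changes, consistent with the summary above.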
arXiv Detail & Related papers (2024-12-17T11:18:14Z)
- Beyond Binary: Towards Fine-Grained LLM-Generated Text Detection via Role Recognition and Involvement Measurement [51.601916604301685]
Large language models (LLMs) generate content that can undermine trust in online discourse.
Current methods often focus on binary classification, failing to address the complexities of real-world scenarios like human-AI collaboration.
To move beyond binary classification and address these challenges, we propose a new paradigm for detecting LLM-generated content.
arXiv Detail & Related papers (2024-10-18T08:14:10Z)
- TRACE: TRansformer-based Attribution using Contrastive Embeddings in LLMs [50.259001311894295]
We propose a novel TRansformer-based Attribution framework using Contrastive Embeddings called TRACE.
We show that TRACE significantly improves the ability to attribute sources accurately, making it a valuable tool for enhancing the reliability and trustworthiness of large language models.
arXiv Detail & Related papers (2024-07-06T07:19:30Z)
- SPOT: Text Source Prediction from Originality Score Thresholding [6.790905400046194]
Countermeasures aimed at detecting misinformation usually involve domain-specific models trained to recognize the relevance of any information.
Instead of evaluating the validity of the information, we propose to investigate LLM generated text from the perspective of trust.
arXiv Detail & Related papers (2024-05-30T21:51:01Z)
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs [60.40396361115776]
This paper introduces a novel collaborative approach, namely SlimPLM, that detects missing knowledge in large language models (LLMs) with a slim proxy model.
We employ a proxy model with far fewer parameters and take its answers as heuristic answers.
Heuristic answers are then utilized to predict the knowledge required to answer the user question, as well as the known and unknown knowledge within the LLM.
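The proxy-model workflow summarized above can be sketched as: draft a cheap heuristic answer, inspect it for knowledge gaps, and only retrieve when the draft looks unreliable. All callables below are illustrative assumptions, not SlimPLM's published interface.

```python
def proxy_guided_answer(question, proxy_answer, needs_retrieval, retrieve, llm):
    """Sketch of using a slim proxy model to decide when and what to retrieve.

    proxy_answer(q)        -> cheap heuristic answer from the small model
    needs_retrieval(q, d)  -> True if the draft reveals a knowledge gap
    retrieve(q, d)         -> evidence passages guided by the draft
    llm(q, evidence)       -> final answer from the large model
    """
    draft = proxy_answer(question)            # heuristic answer from proxy
    if needs_retrieval(question, draft):      # gap detected: augment with evidence
        evidence = retrieve(question, draft)
        return llm(question, evidence)
    return llm(question, None)                # answer from parametric knowledge
```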
arXiv Detail & Related papers (2024-02-19T11:11:08Z)
- ReSLLM: Large Language Models are Strong Resource Selectors for Federated Search [35.44746116088232]
Federated search will become increasingly pivotal in the context of Retrieval-Augmented Generation pipelines.
Current SOTA resource selection methodologies rely on feature-based learning approaches.
We propose ReSLLM to drive the selection of resources in federated search in a zero-shot setting.
arXiv Detail & Related papers (2024-01-31T07:58:54Z)
- TrustLLM: Trustworthiness in Large Language Models [446.5640421311468]
This paper introduces TrustLLM, a comprehensive study of trustworthiness in large language models (LLMs).
We first propose a set of principles for trustworthy LLMs that span eight different dimensions.
Based on these principles, we establish a benchmark across six dimensions including truthfulness, safety, fairness, robustness, privacy, and machine ethics.
arXiv Detail & Related papers (2024-01-10T22:07:21Z)
- Assessing the Reliability of Large Language Model Knowledge [78.38870272050106]
Large language models (LLMs) have been treated as knowledge bases due to their strong performance in knowledge probing tasks.
How do we evaluate the capabilities of LLMs to consistently produce factually correct answers?
We propose MOdel kNowledge relIabiliTy scORe (MONITOR), a novel metric designed to directly measure LLMs' factual reliability.
arXiv Detail & Related papers (2023-10-15T12:40:30Z)
- Survey on Factuality in Large Language Models: Knowledge, Retrieval and Domain-Specificity [61.54815512469125]
This survey addresses the crucial issue of factuality in Large Language Models (LLMs).
As LLMs find applications across diverse domains, the reliability and accuracy of their outputs become vital.
arXiv Detail & Related papers (2023-10-11T14:18:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.