WebTrust: An AI-Driven Data Scoring System for Reliable Information Retrieval
- URL: http://arxiv.org/abs/2506.12072v1
- Date: Thu, 05 Jun 2025 01:48:09 GMT
- Title: WebTrust: An AI-Driven Data Scoring System for Reliable Information Retrieval
- Authors: Joydeep Chandra, Aleksandr Algazinov, Satyam Kumar Navneet, Rim El Filali, Matt Laing, Andrew Hanna
- Abstract summary: We introduce WebTrust, a system designed to simplify the process of finding and judging credible information online. WebTrust works by assigning a reliability score (from 0.1 to 1) to each statement it processes. It offers a clear justification for why a piece of information received that score.
- Score: 37.369136261423094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As access to information becomes more open and widespread, people are increasingly using AI tools for assistance. However, many of these tools struggle to estimate the trustworthiness of the information. Although today's search engines include AI features, they often fail to offer clear indicators of data reliability. To address this gap, we introduce WebTrust, a system designed to simplify the process of finding and judging credible information online. Built on a fine-tuned version of IBM's Granite-1B model and trained on a custom dataset, WebTrust works by assigning a reliability score (from 0.1 to 1) to each statement it processes. In addition, it offers a clear justification for why a piece of information received that score. Evaluated using prompt engineering, WebTrust consistently achieves superior performance compared to other small-scale LLMs and rule-based approaches, outperforming them across all experiments on MAE, RMSE, and R². User testing showed that when reliability scores are displayed alongside search results, people feel more confident and satisfied with the information they find. With its accuracy, transparency, and ease of use, WebTrust offers a practical solution to help combat misinformation and make trustworthy information more accessible to everyone.
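The abstract evaluates predicted reliability scores with MAE, RMSE, and R². As a minimal sketch of what that evaluation looks like, the function below computes all three metrics for scores in the paper's 0.1–1 range; the sample ground-truth and predicted scores are hypothetical and not taken from the paper:

```python
import math

def score_metrics(y_true, y_pred):
    """Compute MAE, RMSE, and R^2 for predicted reliability scores
    (each expected to lie in the paper's 0.1-1.0 range)."""
    n = len(y_true)
    errors = [p - t for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    mean_true = sum(y_true) / n
    ss_res = sum(e * e for e in errors)                    # residual sum of squares
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)     # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return mae, rmse, r2

# Hypothetical ground-truth vs. model-assigned reliability scores.
truth = [0.9, 0.2, 0.7, 0.5]
pred = [0.8, 0.3, 0.7, 0.4]
mae, rmse, r2 = score_metrics(truth, pred)
```

Lower MAE and RMSE and higher R² correspond to the "superior performance" claim in the abstract; in practice these are typically computed with a library such as scikit-learn rather than by hand.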
Related papers
- Toward Verifiable Misinformation Detection: A Multi-Tool LLM Agent Framework [0.5999777817331317]
This research proposes an innovative verifiable misinformation detection LLM agent. The agent actively verifies claims through dynamic interaction with diverse web sources. It assesses information source credibility, synthesizes evidence, and provides a complete verifiable reasoning process.
arXiv Detail & Related papers (2025-08-05T05:15:03Z) - Bridging the Data Gap in AI Reliability Research and Establishing DR-AIR, a Comprehensive Data Repository for AI Reliability [4.769924694900377]
A major challenge in AI reliability research, particularly for those in academia, is the lack of readily available AI reliability data. This paper conducts a comprehensive review of available AI reliability data and establishes DR-AIR: a data repository for AI reliability data.
arXiv Detail & Related papers (2025-02-17T23:50:36Z) - Privacy-Preserving Verifiable Neural Network Inference Service [4.131956503199438]
We develop a privacy-preserving and verifiable CNN inference scheme, vPIN, that preserves privacy for client data samples.
vPIN achieves high efficiency in terms of proof size while providing client data privacy guarantees and provable verifiability.
arXiv Detail & Related papers (2024-11-12T01:09:52Z) - MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - Fostering Trust and Quantifying Value of AI and ML [0.0]
Much has been discussed about trusting AI and ML inferences, but little has been done to define what that means.
Producing ever more trustworthy machine learning inferences is a path to increasing the value of products.
arXiv Detail & Related papers (2024-07-08T13:25:28Z) - Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z) - KGTrust: Evaluating Trustworthiness of SIoT via Knowledge Enhanced Graph Neural Networks [63.531790269009704]
Social Internet of Things (SIoT) is a promising and emerging paradigm that injects the notion of social networking into smart objects (i.e., things).
Due to the risks and uncertainty involved, a crucial and urgent problem is establishing reliable relationships within SIoT, that is, trust evaluation.
We propose a novel knowledge-enhanced graph neural network (KGTrust) for better trust evaluation in SIoT.
arXiv Detail & Related papers (2023-02-22T14:24:45Z) - HYCEDIS: HYbrid Confidence Engine for Deep Document Intelligence System [16.542137414609602]
We propose a complete and novel architecture to measure confidence of current deep learning models in document information extraction task.
Our architecture consists of a Multi-modal Conformal Predictor and a Variational Cluster-oriented Anomaly Detector.
We evaluate our architecture on real-world datasets, not only outperforming competing confidence estimators by a huge margin but also demonstrating generalization to out-of-distribution data.
arXiv Detail & Related papers (2022-06-01T09:57:34Z) - VELVET: a noVel Ensemble Learning approach to automatically locate VulnErable sTatements [62.93814803258067]
This paper presents VELVET, a novel ensemble learning approach to locate vulnerable statements in source code.
Our model combines graph-based and sequence-based neural networks to successfully capture the local and global context of a program graph.
VELVET achieves 99.6% and 43.6% top-1 accuracy over synthetic data and real-world data, respectively.
arXiv Detail & Related papers (2021-12-20T22:45:27Z) - Personalized multi-faceted trust modeling to determine trust links in social media and its potential for misinformation management [61.88858330222619]
We present an approach for predicting trust links between peers in social media.
We propose a data-driven multi-faceted trust modeling which incorporates many distinct features for a comprehensive analysis.
Illustrated in a trust-aware item recommendation task, we evaluate the proposed framework in the context of a large Yelp dataset.
arXiv Detail & Related papers (2021-11-11T19:40:51Z) - FacTeR-Check: Semi-automated fact-checking through Semantic Similarity and Natural Language Inference [61.068947982746224]
FacTeR-Check enables retrieving fact-checked information, verifying unchecked claims, and tracking dangerous information over social media.
The architecture is validated using a new dataset called NLI19-SP that is publicly released with COVID-19 related hoaxes and tweets from Spanish social media.
Our results show state-of-the-art performance on the individual benchmarks, as well as useful analysis of how 61 different hoaxes evolved over time.
arXiv Detail & Related papers (2021-10-27T15:44:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.