FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn
Correction in the Loop
- URL: http://arxiv.org/abs/2110.10144v1
- Date: Sun, 12 Sep 2021 13:38:24 GMT
- Title: FaxPlainAC: A Fact-Checking Tool Based on EXPLAINable Models with HumAn
Correction in the Loop
- Authors: Zijian Zhang, Koustav Rudra, Avishek Anand
- Abstract summary: FaxPlainAC is a tool that gathers user feedback on the output of explainable fact-checking models.
FaxPlainAC can be integrated with other downstream tasks, allowing human fact-checking annotations to be gathered for life-long learning.
- Score: 8.185643427164447
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Fact-checking on the Web has become the main mechanism through which we
assess the credibility of news and other information. Existing fact-checkers
verify the authenticity of the information (support or refute the claim) based
on secondary sources of information. However, existing approaches do not
consider how to update models as the training data grows through user feedback.
It is therefore important to gather user feedback so that models' inference
biases can be corrected and the models improved in a life-long learning manner.
In this paper, we present FaxPlainAC, a tool that gathers user feedback on the
output of explainable fact-checking models. FaxPlainAC outputs both the model
decision, i.e., whether the input fact is true or not, and the
supporting/refuting evidence considered by the model. Additionally, FaxPlainAC
accepts user feedback on both the prediction and the explanation. Developed in
Python, FaxPlainAC is designed as a modular and easily deployable tool. It can
be integrated with other downstream tasks, allowing human fact-checking
annotations to be gathered for life-long learning.
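The abstract describes the tool's core loop at a high level: show the model's verdict together with its supporting/refuting evidence, then record the user's corrections on both for later life-long learning. As a rough illustration only, the Python sketch below shows one way such feedback records could be modeled and buffered for re-training; all names and fields (EvidenceFeedback, ClaimFeedback, append_feedback, feedback.jsonl) are hypothetical assumptions, not the actual FaxPlainAC API.

```python
# Hypothetical sketch of the kind of correction record a FaxPlainAC-style
# tool might collect; names and fields are illustrative assumptions only.
from dataclasses import dataclass, field, asdict
from typing import List
import json


@dataclass
class EvidenceFeedback:
    """One evidence passage shown to the user, plus the user's verdict on it."""
    passage: str
    model_used_as: str   # "support" or "refute", as claimed by the model
    user_accepts: bool   # does the user agree the passage is relevant?


@dataclass
class ClaimFeedback:
    """User correction on a single fact-checking output."""
    claim: str
    model_verdict: bool  # model's decision: claim true or false
    user_verdict: bool   # user's corrected decision
    evidence: List[EvidenceFeedback] = field(default_factory=list)


def append_feedback(record: ClaimFeedback, path: str = "feedback.jsonl") -> None:
    """Append one correction to a JSONL buffer that a life-long learning
    loop could later consume for re-training."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")


if __name__ == "__main__":
    record = ClaimFeedback(
        claim="The Eiffel Tower is in Berlin.",
        model_verdict=False,
        user_verdict=False,
        evidence=[EvidenceFeedback(
            passage="The Eiffel Tower is a wrought-iron lattice tower in Paris.",
            model_used_as="refute",
            user_accepts=True,
        )],
    )
    append_feedback(record)
```

A buffer like this is one plausible interface between the annotation front end and a downstream re-training job, which is the integration pattern the abstract alludes to.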
Related papers
- Interactive Reasoning: Visualizing and Controlling Chain-of-Thought Reasoning in Large Language Models [54.85405423240165]
We introduce Interactive Reasoning, an interaction design that visualizes chain-of-thought outputs as a hierarchy of topics. We implement interactive reasoning in Hippo, a prototype for AI-assisted decision making in the face of uncertain trade-offs.
arXiv Detail & Related papers (2025-06-30T10:00:43Z) - Unidentified and Confounded? Understanding Two-Tower Models for Unbiased Learning to Rank [50.9530591265324]
Training two-tower models on clicks collected by well-performing production systems leads to decreased ranking performance. We theoretically analyze the identifiability conditions of two-tower models, showing that either document swaps across positions or overlapping feature distributions are required to recover model parameters from clicks. We also investigate the effect of logging policies on two-tower models, finding that they introduce no bias when models perfectly capture user behavior.
arXiv Detail & Related papers (2025-06-25T14:47:43Z) - What Matters in Explanations: Towards Explainable Fake Review Detection Focusing on Transformers [45.55363754551388]
Customers' reviews and feedback play a crucial role on e-commerce platforms like Amazon, Zalando, and eBay.
There is a prevailing concern that sellers often post fake or spam reviews to deceive potential customers and manipulate their opinions about a product.
We propose an explainable framework that detects fake reviews with high precision and provides explanations for the identified fraudulent content.
arXiv Detail & Related papers (2024-07-24T13:26:02Z) - FactLLaMA: Optimizing Instruction-Following Language Models with
External Knowledge for Automated Fact-Checking [10.046323978189847]
We propose combining the power of instruction-following language models with external evidence retrieval to enhance fact-checking performance.
Our approach involves leveraging search engines to retrieve relevant evidence for a given input claim.
Then, we instruct-tune an open-source language model, called LLaMA, using this evidence, enabling it to predict the veracity of the input claim more accurately; a rough sketch of this retrieve-then-tune pipeline is given after this list.
arXiv Detail & Related papers (2023-09-01T04:14:39Z) - Evaluating Correctness and Faithfulness of Instruction-Following Models for Question Answering [26.34649731975005]
Retriever-augmented instruction-following models are attractive alternatives to fine-tuned approaches for question answering (QA).
While the model responses tend to be natural and fluent, the additional verbosity makes traditional QA evaluation metrics unreliable for accurately quantifying model performance.
We use both automatic and human evaluation to evaluate these models along two dimensions: 1) how well they satisfy the user's information need (correctness) and 2) whether they produce a response based on the provided knowledge (faithfulness).
arXiv Detail & Related papers (2023-07-31T17:41:00Z) - Counterfactual Augmentation for Multimodal Learning Under Presentation
Bias [48.372326930638025]
In machine learning systems, feedback loops between users and models can bias future user behavior, inducing a presentation bias in labels.
We propose counterfactual augmentation, a novel causal method for correcting presentation bias using generated counterfactual labels.
arXiv Detail & Related papers (2023-05-23T14:09:47Z) - Preserving Knowledge Invariance: Rethinking Robustness Evaluation of
Open Information Extraction [50.62245481416744]
We present the first benchmark that simulates the evaluation of open information extraction models in the real world.
We design and annotate a large-scale testbed in which each example is a knowledge-invariant clique.
We further refine the robustness metric: a model is judged to be robust only if its performance is consistently accurate over entire cliques.
arXiv Detail & Related papers (2023-05-23T12:05:09Z) - XMD: An End-to-End Framework for Interactive Explanation-Based Debugging
of NLP Models [33.81019305179569]
Explanation-based model debugging aims to resolve spurious biases by showing human users explanations of model behavior.
We propose XMD: the first open-source, end-to-end framework for explanation-based model debugging.
XMD automatically updates the model in real time, by regularizing the model so that its explanations align with the user feedback.
arXiv Detail & Related papers (2022-10-30T23:09:09Z) - VisFIS: Visual Feature Importance Supervision with
Right-for-the-Right-Reason Objectives [84.48039784446166]
We show that model feature importance (FI) supervision can meaningfully improve VQA model accuracy as well as performance on several Right-for-the-Right-Reason metrics.
Our best performing method, Visual Feature Importance Supervision (VisFIS), outperforms strong baselines on benchmark VQA datasets.
Predictions are more accurate when explanations are plausible and faithful, and not when they are plausible but not faithful.
arXiv Detail & Related papers (2022-06-22T17:02:01Z) - Explain, Edit, and Understand: Rethinking User Study Design for
Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z) - WikiCheck: An end-to-end open source Automatic Fact-Checking API based
on Wikipedia [1.14219428942199]
We review state-of-the-art datasets and solutions for automatic fact-checking.
We propose a data filtering method that improves the model's performance and generalization.
We present a new fact-checking system, the WikiCheck API, which automatically performs a fact validation process based on the Wikipedia knowledge base.
arXiv Detail & Related papers (2021-09-02T10:45:07Z) - FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
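As referenced in the FactLLaMA entry above, the sketch below illustrates, under stated assumptions, how a claim and its retrieved evidence might be packed into an instruction-tuning record. The retrieval call is left as a stub, and the Alpaca-style prompt template is an assumption for illustration, not the paper's exact format.

```python
# Hypothetical sketch of a retrieve-then-instruct-tune pipeline for
# fact-checking: fetch evidence for a claim, then format
# (claim, evidence, label) into an instruction-tuning example.
# retrieve_evidence() is a stub; the prompt wording is an assumption.
from typing import List, Dict


def retrieve_evidence(claim: str, top_k: int = 3) -> List[str]:
    """Placeholder for search-engine retrieval of evidence snippets."""
    raise NotImplementedError("plug in a real search/retrieval backend here")


def build_training_example(claim: str, evidence: List[str], label: str) -> Dict[str, str]:
    """Build one (instruction, input, output) record for instruction tuning,
    in the common Alpaca-style format."""
    evidence_block = "\n".join(f"- {snippet}" for snippet in evidence)
    return {
        "instruction": "Decide whether the claim is SUPPORTED, REFUTED, "
                       "or NOT ENOUGH INFO given the evidence.",
        "input": f"Claim: {claim}\nEvidence:\n{evidence_block}",
        "output": label,
    }


if __name__ == "__main__":
    example = build_training_example(
        claim="The Great Wall of China is visible from the Moon.",
        evidence=["Astronauts report the wall is not visible to the naked eye from lunar distance."],
        label="REFUTED",
    )
    print(example["input"])
```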
This list is automatically generated from the titles and abstracts of the papers on this site.