Explaining Veracity Predictions with Evidence Summarization: A
Multi-Task Model Approach
- URL: http://arxiv.org/abs/2402.06443v1
- Date: Fri, 9 Feb 2024 14:39:20 GMT
- Title: Explaining Veracity Predictions with Evidence Summarization: A
Multi-Task Model Approach
- Authors: Recep Firat Cekinel and Pinar Karagoz
- Abstract summary: We propose a multi-task explainable neural model for misinformation detection.
Specifically, this work formulates the generation of explanations for the model's veracity predictions as a text summarization problem.
The performance of the proposed model is evaluated on publicly available datasets and the findings are compared with related studies.
- Score: 1.223779595809275
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid dissemination of misinformation through social media has
increased the importance of automated fact-checking. Furthermore, studies on
what deep neural models pay attention to when making predictions have increased
in recent years. While significant progress has been made in this field, it has
not yet reached a level comparable to human reasoning. To address these gaps,
we propose a multi-task explainable neural model for misinformation detection.
Specifically, this work formulates the generation of explanations for the
model's veracity predictions as a text summarization problem. Additionally, the
performance of the proposed model is evaluated on publicly available datasets
and the findings are compared with related studies.
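As a concrete but purely illustrative reading of this formulation, the sketch below wires a shared encoder-decoder to two heads: a veracity classifier over the pooled encoder states and a summarization decoder that produces the textual explanation. The choice of BART, the mean pooling, the loss weight alpha, and all module names are assumptions for illustration, not details taken from the paper.
```python
# Minimal multi-task sketch (NOT the authors' code): a shared seq2seq encoder
# feeds (1) a veracity classification head and (2) a summarization decoder that
# produces the textual explanation. Model choice and loss weighting are assumptions.
import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration


class MultiTaskFactChecker(nn.Module):
    def __init__(self, model_name="facebook/bart-base", num_labels=2, alpha=0.5):
        super().__init__()
        self.seq2seq = BartForConditionalGeneration.from_pretrained(model_name)
        hidden = self.seq2seq.config.d_model
        self.classifier = nn.Linear(hidden, num_labels)  # veracity head
        self.alpha = alpha  # weight between the two task losses

    def forward(self, input_ids, attention_mask, labels=None, summary_ids=None):
        # Summarization branch: standard seq2seq LM loss against the reference
        # explanation (pad positions in summary_ids should be set to -100).
        seq_out = self.seq2seq(
            input_ids=input_ids,
            attention_mask=attention_mask,
            labels=summary_ids,
        )
        # Classification branch: mean-pool the shared encoder states.
        enc_states = seq_out.encoder_last_hidden_state        # (B, T, H)
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (enc_states * mask).sum(1) / mask.sum(1)     # (B, H)
        logits = self.classifier(pooled)

        loss = None
        if labels is not None and summary_ids is not None:
            cls_loss = nn.functional.cross_entropy(logits, labels)
            loss = self.alpha * cls_loss + (1 - self.alpha) * seq_out.loss
        return {"logits": logits, "loss": loss}
```
Because both losses flow through the same encoder, evidence that is useful for the verdict also shapes the generated explanation, which is the usual motivation for training such heads jointly rather than separately.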
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Hate Speech Classification with Model Agnostic Methods [0.9990687944474738]
The research goal of this paper is to bridge the gap between hate speech prediction and the explanations generated by the system to support its decision.
This is achieved by first predicting the classification of a text and then applying a post-hoc, model-agnostic, surrogate interpretability approach.
arXiv Detail & Related papers (2023-05-30T19:52:56Z)
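To make the post-hoc, model-agnostic surrogate idea above concrete, here is a small self-contained sketch (not the paper's code) in the spirit of LIME: the input text is perturbed by dropping words, a black-box classifier is queried, and a local linear surrogate is fitted whose coefficients act as word-level explanations. The black_box_predict function is a hypothetical stand-in for any hate speech classifier.
```python
# LIME-style surrogate sketch (illustrative only, not the paper's code).
# A black-box classifier is probed with word-dropout perturbations and a
# linear model is fitted locally; its coefficients explain individual words.
import numpy as np
from sklearn.linear_model import Ridge


def black_box_predict(texts):
    """Placeholder for any hate-speech classifier returning P(hate)."""
    return np.array([0.9 if "hate" in t.lower() else 0.1 for t in texts])


def explain(text, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    words = text.split()
    # Binary masks: 1 keeps a word, 0 drops it.
    masks = rng.integers(0, 2, size=(n_samples, len(words)))
    masks[0] = 1  # include the unperturbed input
    perturbed = [" ".join(w for w, m in zip(words, mask) if m) for mask in masks]
    probs = black_box_predict(perturbed)
    surrogate = Ridge(alpha=1.0).fit(masks, probs)
    # Higher coefficient -> word pushes the prediction toward "hate".
    return sorted(zip(words, surrogate.coef_), key=lambda x: -abs(x[1]))


print(explain("I really hate this group of people"))
```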
- Causal Analysis for Robust Interpretability of Neural Networks [0.2519906683279152]
We develop a robust intervention-based method to capture cause-effect mechanisms in pre-trained neural networks.
We apply our method to vision models trained on classification tasks.
arXiv Detail & Related papers (2023-05-15T18:37:24Z)
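A bare-bones illustration of the intervention idea above (a sketch under toy assumptions, not the authors' method): ablate one hidden unit through a forward hook and measure how much the output distribution shifts, treating the shift as a crude estimate of that unit's causal effect.
```python
# Toy intervention sketch (illustrative assumptions, not the paper's method):
# zero out one hidden unit via a forward hook and measure how much the
# model's output probabilities shift.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 3))
x = torch.randn(4, 8)


def effect_of_unit(unit):
    def hook(module, inputs, output):
        patched = output.clone()
        patched[:, unit] = 0.0  # the intervention: ablate one hidden unit
        return patched

    handle = model[1].register_forward_hook(hook)  # hook the ReLU output
    with torch.no_grad():
        intervened = torch.softmax(model(x), dim=-1)
    handle.remove()
    with torch.no_grad():
        baseline = torch.softmax(model(x), dim=-1)
    # Average shift in the output distribution caused by the intervention.
    return (baseline - intervened).abs().mean().item()


effects = [effect_of_unit(u) for u in range(16)]
print("most influential unit:", max(range(16), key=lambda u: effects[u]))
```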
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models fine-tuned with few examples show strong prediction bias across labels.
Although few-shot fine-tuning can mitigate this prediction bias, our analysis shows that the models gain performance improvements by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may induce pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
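As a minimal illustration of what "prediction bias across labels" can look like in practice (a simple diagnostic sketch with hypothetical labels, not the paper's analysis), one can compare the distribution of predicted labels with the gold distribution:
```python
# Simple diagnostic (illustrative, not the paper's analysis): compare the
# predicted label distribution with the gold distribution; a large divergence
# indicates a bias toward some labels.
from collections import Counter
import math


def label_distribution(labels):
    counts = Counter(labels)
    return {lab: counts[lab] / len(labels) for lab in counts}


def kl_divergence(pred, gold, eps=1e-9):
    labels = set(pred) | set(gold)
    return sum(
        pred.get(l, eps) * math.log(pred.get(l, eps) / gold.get(l, eps))
        for l in labels
    )


gold = ["pos", "neg", "neg", "pos", "neu", "neg"]       # hypothetical gold labels
predicted = ["neg", "neg", "neg", "neg", "neg", "pos"]  # hypothetical predictions
print(kl_divergence(label_distribution(predicted), label_distribution(gold)))
```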
- You Can Do Better! If You Elaborate the Reason When Making Prediction [13.658942796267015]
This paper proposes a novel neural predictive framework coupled with large pre-trained language models to make a prediction and generate its corresponding explanation simultaneously.
We conducted a preliminary empirical study on Chinese medical multiple-choice question answering, English natural language inference and commonsense question answering tasks.
The proposed method also achieves improved prediction accuracy on three datasets, indicating that prediction can benefit from generating an explanation as part of the decision process.
arXiv Detail & Related papers (2021-03-27T14:55:19Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
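The following toy sketch (illustrative assumptions throughout, not the authors' method) captures the core recipe described above: several latent perturbations are optimized so that a frozen classifier flips its prediction, while a diversity term keeps the perturbations distinct and a norm penalty keeps them small.
```python
# Toy counterfactual-search sketch (illustrative, not the authors' method):
# optimize K latent perturbations so a frozen classifier's prediction flips,
# with a penalty that keeps the K perturbations mutually diverse.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent_dim, num_classes, K = 16, 2, 3
decoder = nn.Linear(latent_dim, 32)          # stand-in for a generative decoder
classifier = nn.Linear(32, num_classes)      # frozen toy classifier
z = torch.randn(1, latent_dim)               # latent code of the input
target = torch.tensor([1])                   # class the counterfactual should get

deltas = nn.Parameter(0.01 * torch.randn(K, latent_dim))
opt = torch.optim.Adam([deltas], lr=0.05)

for step in range(200):
    opt.zero_grad()
    logits = classifier(decoder(z + deltas))              # (K, num_classes)
    flip_loss = nn.functional.cross_entropy(logits, target.expand(K))
    # Diversity-enforcing term: penalize pairwise similarity of perturbations.
    sims = nn.functional.cosine_similarity(
        deltas.unsqueeze(0), deltas.unsqueeze(1), dim=-1
    )
    diversity_loss = (sims - torch.eye(K)).abs().mean()
    sparsity = deltas.norm(dim=-1).mean()                 # keep perturbations small
    (flip_loss + 0.1 * diversity_loss + 0.01 * sparsity).backward()
    opt.step()

print(classifier(decoder(z + deltas)).argmax(dim=-1))     # ideally all == target
```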
- Learning to Rationalize for Nonmonotonic Reasoning with Distant Supervision [44.32874972577682]
We investigate the extent to which neural models can reason about natural language rationales that explain model predictions.
We use pre-trained language models, neural knowledge models, and distant supervision from related tasks.
Our model shows promise at generating post-hoc rationales explaining why an inference is more or less likely given the additional information.
arXiv Detail & Related papers (2020-12-14T23:50:20Z)
- Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
Seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
arXiv Detail & Related papers (2020-10-15T16:57:27Z)
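To make the token-level uncertainty analysis concrete, the sketch below (the checkpoint and decoding settings are assumptions, not the paper's setup) records the entropy of the decoder's predictive distribution at each generation step of an off-the-shelf summarizer.
```python
# Sketch of per-step predictive entropy for an abstractive summarizer
# (the t5-small checkpoint and decoding settings are illustrative assumptions).
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

name = "t5-small"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

text = ("summarize: The rapid spread of misinformation on social media has "
        "increased the need for automated fact-checking and for explanations "
        "that justify each verdict.")
inputs = tok(text, return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=30,
    num_beams=1,
    do_sample=False,
    output_scores=True,
    return_dict_in_generate=True,
)

# out.scores holds one logits tensor per generated token; the entropy of each
# step's distribution marks where the decoder is most uncertain.
for step, scores in enumerate(out.scores):
    probs = torch.softmax(scores[0], dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
    token = tok.decode(out.sequences[0, step + 1].item())
    print(f"{token!r}\tentropy={entropy:.2f}")
```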
- Are Visual Explanations Useful? A Case Study in Model-in-the-Loop Prediction [49.254162397086006]
We study explanations based on visual saliency in an image-based age prediction task.
We find that presenting model predictions improves human accuracy.
However, explanations of various kinds fail to significantly alter human accuracy or trust in the model.
arXiv Detail & Related papers (2020-07-23T20:39:40Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the fact-checking process: generating explanations for the verdicts.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
- Uncovering the Data-Related Limits of Human Reasoning Research: An Analysis based on Recommender Systems [1.7478203318226309]
Cognitive science pursues the goal of modeling human-like intelligence from a theory-driven perspective.
Syllogistic reasoning is one of the core domains of human reasoning research.
Recent analyses of models' predictive performances revealed a stagnation in improvement.
arXiv Detail & Related papers (2020-03-11T10:12:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.