Generating Zero-shot Abstractive Explanations for Rumour Verification
- URL: http://arxiv.org/abs/2401.12713v3
- Date: Fri, 23 Feb 2024 15:01:38 GMT
- Title: Generating Zero-shot Abstractive Explanations for Rumour Verification
- Authors: Iman Munire Bilal, Preslav Nakov, Rob Procter, Maria Liakata
- Abstract summary: We reformulate the task to generate model-centric free-text explanations of a rumour's veracity.
We exploit the few-shot learning capabilities of a large language model (LLM)
Our experiments show that LLMs can have similar agreement to humans in evaluating summaries.
- Score: 46.897767694062004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of rumour verification in social media concerns assessing the
veracity of a claim on the basis of conversation threads that result from it.
While previous work has focused on predicting a veracity label, here we
reformulate the task to generate model-centric free-text explanations of a
rumour's veracity. The approach is model-agnostic in that it generalises to any
verification model; here, we propose a novel GNN-based rumour verification model. We follow a
zero-shot approach by first applying post-hoc explainability methods to score
the most important posts within a thread and then we use these posts to
generate informative explanations using opinion-guided summarisation. To
evaluate the informativeness of the explanatory summaries, we exploit the
few-shot learning capabilities of a large language model (LLM). Our experiments
show that LLMs can exhibit agreement similar to that of humans when evaluating summaries.
Importantly, we show that explanatory abstractive summaries are more informative and
better reflect the predicted rumour veracity than simply using the highest-ranking
posts in the thread.
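As a rough illustration of the pipeline the abstract describes — scoring thread posts with a post-hoc explainability method, keeping the top-ranked ones, and feeding them into opinion-guided summarisation — a minimal sketch might look like the following. The scoring values, `top_k` setting, and prompt format are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of a zero-shot explanation pipeline (not the paper's
# actual code): rank thread posts by an importance score obtained from a
# post-hoc explainability method applied to the verifier, keep the top-k,
# and build a summarisation prompt from them.

def select_important_posts(posts, scores, top_k=3):
    """Keep the top_k posts by explainability score, preserving thread order."""
    ranked = sorted(range(len(posts)), key=lambda i: scores[i], reverse=True)
    keep = sorted(ranked[:top_k])  # restore original thread order
    return [posts[i] for i in keep]

def build_summarisation_prompt(claim, important_posts, predicted_label):
    """Assemble a prompt asking a summariser to explain the predicted veracity."""
    evidence = "\n".join(f"- {p}" for p in important_posts)
    return (
        f"Claim: {claim}\n"
        f"Predicted veracity: {predicted_label}\n"
        f"Key posts:\n{evidence}\n"
        "Summarise why these posts support this verdict."
    )

# Toy thread; scores stand in for attributions from the GNN-based verifier.
posts = ["Source says it's fake.", "I saw it myself!", "BBC confirmed it.", "lol"]
scores = [0.9, 0.4, 0.8, 0.1]
top = select_important_posts(posts, scores, top_k=2)
print(build_summarisation_prompt("X happened", top, "false"))
```

In this sketch the prompt would then be passed to an opinion-guided summariser; the hypothetical helper names and the prompt wording are placeholders for whatever scoring and summarisation components are actually used.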
Related papers
- Information-Theoretic Distillation for Reference-less Summarization [67.51150817011617]
We present a novel framework to distill a powerful summarizer based on the information-theoretic objective for summarization.
We start off from Pythia-2.8B as the teacher model, which is not yet capable of summarization.
We arrive at a compact but powerful summarizer with only 568M parameters that performs competitively against ChatGPT.
arXiv Detail & Related papers (2024-03-20T17:42:08Z) - AugSumm: towards generalizable speech summarization using synthetic labels from large language model [61.73741195292997]
Abstractive speech summarization (SSUM) aims to generate human-like summaries from speech.
Conventional SSUM models are mostly trained and evaluated with a single human-annotated deterministic ground-truth (GT) summary.
We propose AugSumm, a method to leverage large language models (LLMs) as a proxy for human annotators to generate augmented summaries.
arXiv Detail & Related papers (2024-01-10T18:39:46Z) - Active Learning for Abstractive Text Summarization [50.79416783266641]
We propose the first effective query strategy for Active Learning in abstractive text summarization.
We show that using our strategy in AL annotation helps to improve the model performance in terms of ROUGE and consistency scores.
arXiv Detail & Related papers (2023-01-09T10:33:14Z) - Transductive Learning for Abstractive News Summarization [24.03781438153328]
We propose the first application of transductive learning to summarization.
We show that our approach yields state-of-the-art results on CNN/DM and NYT datasets.
arXiv Detail & Related papers (2021-04-17T17:33:12Z) - Few-Shot Learning for Opinion Summarization [117.70510762845338]
Opinion summarization is the automatic creation of text reflecting subjective information expressed in multiple documents.
In this work, we show that even a handful of summaries is sufficient to bootstrap generation of the summary text.
Our approach substantially outperforms previous extractive and abstractive methods in automatic and human evaluation.
arXiv Detail & Related papers (2020-04-30T15:37:38Z) - Unsupervised Opinion Summarization with Noising and Denoising [85.49169453434554]
We create a synthetic dataset from a corpus of user reviews by sampling a review, pretending it is a summary, and generating noisy versions thereof.
At test time, the model accepts genuine reviews and generates a summary containing salient opinions, treating those that do not reach consensus as noise.
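The noising idea in this blurb — sample a review, treat it as a pseudo-summary, and synthesise noisy versions of it as training inputs — can be sketched roughly as follows. The word-dropout noising function is a simplified assumption for illustration; the paper's actual noising procedure may differ:

```python
import random

def noise_review(review, drop_prob=0.2, seed=0):
    """Create a noisy variant of a review by randomly dropping words
    (a simplified stand-in for the paper's noising procedure)."""
    rng = random.Random(seed)
    words = review.split()
    kept = [w for w in words if rng.random() >= drop_prob]
    return " ".join(kept) if kept else review

# Sample one review from the corpus and pretend it is the summary...
pseudo_summary = "The battery life is great but the screen is dim."
# ...then generate noisy versions of it to serve as synthetic inputs.
noisy_inputs = [noise_review(pseudo_summary, seed=s) for s in range(3)]
# A denoising model would be trained to reconstruct pseudo_summary
# from noisy_inputs, then applied to genuine reviews at test time.
```

The denoising training step itself is omitted here; the sketch only shows how the synthetic (noisy input, pseudo-summary) pairs could be constructed.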
arXiv Detail & Related papers (2020-04-21T16:54:57Z) - Learning by Semantic Similarity Makes Abstractive Summarization Better [13.324006587838522]
We compare summaries generated by a recent language model, BART, with the reference summaries from a benchmark dataset, CNN/DM.
Interestingly, model-generated summaries receive higher scores than the reference summaries.
arXiv Detail & Related papers (2020-02-18T17:59:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.