Understanding Factual Errors in Summarization: Errors, Summarizers,
Datasets, Error Detectors
- URL: http://arxiv.org/abs/2205.12854v2
- Date: Fri, 26 May 2023 00:21:51 GMT
- Title: Understanding Factual Errors in Summarization: Errors, Summarizers,
Datasets, Error Detectors
- Authors: Liyan Tang, Tanya Goyal, Alexander R. Fabbri, Philippe Laban, Jiacheng
Xu, Semih Yavuz, Wojciech Kryściński, Justin F. Rousseau, Greg Durrett
- Abstract summary: In this work, we aggregate factuality error annotations from nine existing datasets and stratify them according to the underlying summarization model.
We compare performance of state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on this stratified benchmark and show that their performance varies significantly across different types of summarization models.
- Score: 105.12462629663757
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The propensity of abstractive summarization models to make factual errors has
been studied extensively, including design of metrics to detect factual errors
and annotation of errors in current systems' outputs. However, the
ever-evolving nature of summarization systems, metrics, and annotated
benchmarks makes factuality evaluation a moving target, and drawing clear
comparisons among metrics has become increasingly difficult. In this work, we
aggregate factuality error annotations from nine existing datasets and stratify
them according to the underlying summarization model. We compare performance of
state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on
this stratified benchmark and show that their performance varies significantly
across different types of summarization models. Critically, our analysis shows
that much of the recent improvement in the factuality detection space has been
on summaries from older (pre-Transformer) models instead of more relevant
recent summarization models. We further perform a finer-grained analysis per
error-type and find similar performance variance across error types for
different factuality metrics. Our results show that no one metric is superior
in all settings or for all error types, and we provide recommendations for best
practices given these insights.
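As a rough illustration of the stratified comparison described in the abstract, the sketch below scores a thresholded factuality metric separately per summarizer family. It is illustrative only, assuming hypothetical benchmark records with `summary_model`, `metric_score`, and `is_factual` fields; it is not the authors' released evaluation code.
```python
from collections import defaultdict

# Minimal sketch of stratified factuality-metric evaluation.
# Each record carries a gold factuality label and the family of the
# summarizer that produced the summary (hypothetical toy data).
benchmark = [
    {"summary_model": "pre-transformer", "metric_score": 0.21, "is_factual": False},
    {"summary_model": "pre-transformer", "metric_score": 0.88, "is_factual": True},
    {"summary_model": "fine-tuned-transformer", "metric_score": 0.64, "is_factual": False},
    {"summary_model": "fine-tuned-transformer", "metric_score": 0.91, "is_factual": True},
]

def balanced_accuracy(examples, threshold=0.5):
    """Average of per-class accuracies for a thresholded factuality metric."""
    correct = {True: 0, False: 0}
    total = {True: 0, False: 0}
    for ex in examples:
        pred = ex["metric_score"] >= threshold
        total[ex["is_factual"]] += 1
        correct[ex["is_factual"]] += int(pred == ex["is_factual"])
    per_class = [correct[c] / total[c] for c in (True, False) if total[c]]
    return sum(per_class) / len(per_class)

# Stratify by the underlying summarization model before scoring the metric.
strata = defaultdict(list)
for ex in benchmark:
    strata[ex["summary_model"]].append(ex)

for model_family, examples in strata.items():
    print(model_family, round(balanced_accuracy(examples), 3))
```
Comparing the same metric's score across strata is what reveals whether apparent gains come mostly from older (pre-Transformer) outputs.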
Related papers
- Detecting Errors through Ensembling Prompts (DEEP): An End-to-End LLM Framework for Detecting Factual Errors [11.07539342949602]
We propose an end-to-end framework for detecting factual errors in text summarization.
Our framework uses a diverse set of LLM prompts to identify factual inconsistencies.
We calibrate the ensembled models to produce empirically accurate probabilities that a text is factually consistent or free of hallucination.
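The exact prompts and calibration procedure of DEEP are not reproduced here; the sketch below only illustrates the general idea of ensembling several LLM prompts into a single consistency score, with a hypothetical `ask_llm` function standing in for a real chat-completion API call.
```python
# Illustrative sketch of prompt ensembling for factual-consistency detection.
# Prompts are phrased so that "yes"/True always means "consistent".
PROMPTS = [
    "Is every claim in the summary supported by the article?",
    "Is the summary free of contradictions with the article?",
    "Is the summary free of information that is absent from the article?",
]

def ask_llm(prompt: str, article: str, summary: str) -> bool:
    """Placeholder: a real system would query an LLM with this prompt."""
    return True

def consistency_probability(article: str, summary: str) -> float:
    """Fraction of prompts judging the summary consistent (pre-calibration)."""
    votes = [ask_llm(p, article, summary) for p in PROMPTS]
    return sum(votes) / len(votes)

# The paper additionally calibrates this raw score against held-out labels so
# that it behaves like an empirical probability; that step is omitted here.
print(consistency_probability("The cat sat on the mat.", "A cat sat on a mat."))
```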
arXiv Detail & Related papers (2024-06-18T18:59:37Z)
- Generalizable Error Modeling for Human Data Annotation: Evidence From an Industry-Scale Search Data Annotation Program [0.0]
This paper presents a predictive error model trained to detect potential errors in search relevance annotation tasks.
We show that errors can be predicted with moderate model performance (AUC=0.65-0.75) and that model performance generalizes well across applications.
We demonstrate the usefulness of the model in the context of auditing, where prioritizing tasks with high predicted error probabilities considerably increases the amount of corrected annotation errors.
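As a rough illustration of the kind of evaluation reported above, the snippet below fits a simple classifier on synthetic annotation features and scores it with ROC AUC; it stands in for, and is not, the paper's industry-scale error model.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for annotation-task features (e.g. annotator tenure,
# task difficulty, time spent) and a binary "annotation was erroneous" label.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# AUC measures how well the predicted error probabilities rank true errors
# first; values around 0.65-0.75 are the "moderate" range cited above.
probs = model.predict_proba(X_test)[:, 1]
print("AUC:", round(roc_auc_score(y_test, probs), 3))
```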
arXiv Detail & Related papers (2023-10-08T21:21:19Z)
- BUMP: A Benchmark of Unfaithful Minimal Pairs for Meta-Evaluation of Faithfulness Metrics [70.52570641514146]
We present a benchmark of unfaithful minimal pairs (BUMP).
BUMP is a dataset of 889 human-written, minimally different summary pairs.
Unlike non-pair-based datasets, BUMP can be used to measure the consistency of metrics.
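A sketch of how a minimal-pair benchmark can probe metric consistency is shown below, assuming each pair holds a faithful summary and its minimally edited unfaithful counterpart; the toy overlap `metric` is a placeholder, not one of the metrics studied in the paper.
```python
def pairwise_consistency(pairs, metric):
    """Fraction of minimal pairs where the metric scores the faithful
    summary strictly higher than its minimally edited, unfaithful twin."""
    wins = sum(
        metric(doc, faithful) > metric(doc, unfaithful)
        for doc, faithful, unfaithful in pairs
    )
    return wins / len(pairs)

# Toy placeholder metric: token overlap between summary and source document.
def metric(document: str, summary: str) -> float:
    doc_tokens = set(document.lower().split())
    summary_tokens = summary.lower().split()
    return sum(tok in doc_tokens for tok in summary_tokens) / max(len(summary_tokens), 1)

pairs = [
    (
        "The company reported a profit of 3 million dollars in 2020.",
        "The company reported a 3 million dollar profit in 2020.",
        "The company reported a 3 million dollar loss in 2020.",
    ),
]
print(pairwise_consistency(pairs, metric))
```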
arXiv Detail & Related papers (2022-12-20T02:17:30Z)
- Discover, Explanation, Improvement: An Automatic Slice Detection Framework for Natural Language Processing [72.14557106085284]
Slice detection models (SDMs) automatically identify underperforming groups of datapoints.
This paper proposes a benchmark named "Discover, Explain, Improve" (DEIM) for classification NLP tasks.
Our evaluation shows that Edisa can accurately select error-prone datapoints with informative semantic features.
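A minimal illustration of slice detection (not the Edisa system itself): group evaluation examples by a metadata feature and flag groups whose accuracy falls well below the overall average.
```python
from collections import defaultdict

def find_underperforming_slices(examples, feature, margin=0.1):
    """Return feature values whose accuracy trails the overall accuracy
    by more than `margin`; a crude stand-in for learned slice detection."""
    overall = sum(ex["correct"] for ex in examples) / len(examples)
    groups = defaultdict(list)
    for ex in examples:
        groups[ex[feature]].append(ex["correct"])
    return {
        value: sum(outcomes) / len(outcomes)
        for value, outcomes in groups.items()
        if sum(outcomes) / len(outcomes) < overall - margin
    }

examples = [
    {"genre": "news", "correct": True},
    {"genre": "news", "correct": True},
    {"genre": "dialogue", "correct": False},
    {"genre": "dialogue", "correct": True},
]
print(find_underperforming_slices(examples, "genre"))
```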
arXiv Detail & Related papers (2022-11-08T19:00:00Z)
- Correcting Diverse Factual Errors in Abstractive Summarization via Post-Editing and Language Model Infilling [56.70682379371534]
We show that our approach vastly outperforms prior methods in correcting erroneous summaries.
Our model -- FactEdit -- improves factuality scores by over 11 points on CNN/DM and over 31 points on XSum.
arXiv Detail & Related papers (2022-10-22T07:16:19Z)
- Label-Descriptive Patterns and their Application to Characterizing Classification Errors [31.272875287136426]
State-of-the-art deep learning methods achieve human-like performance on many tasks, but make errors nevertheless.
Characterizing these errors in easily interpretable terms not only gives insight into whether a model is prone to making systematic errors, but also suggests ways to act on and improve the model.
In this paper we propose a method that does so for arbitrary classifiers by mining a small set of patterns that together succinctly describe the input data, partitioned according to correctness of prediction.
arXiv Detail & Related papers (2021-10-18T19:42:21Z)
- Understanding Factuality in Abstractive Summarization with FRANK: A Benchmark for Factuality Metrics [17.677637487977208]
Modern summarization models generate highly fluent but often factually unreliable outputs.
Due to the lack of common benchmarks, metrics attempting to measure the factuality of automatically generated summaries cannot be compared.
We devise a typology of factual errors and use it to collect human annotations of generated summaries from state-of-the-art summarization systems.
arXiv Detail & Related papers (2021-04-27T17:28:07Z)
- Good Classifiers are Abundant in the Interpolating Regime [64.72044662855612]
We develop a methodology to compute precisely the full distribution of test errors among interpolating classifiers.
We find that test errors tend to concentrate around a small typical value $\varepsilon^*$, which deviates substantially from the test error of the worst-case interpolating model.
Our results show that the usual style of analysis in statistical learning theory may not be fine-grained enough to capture the good generalization performance observed in practice.
arXiv Detail & Related papers (2020-06-22T21:12:31Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence of each query sample in order to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
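The transductive update described above can be sketched as a confidence-weighted prototype refinement; the weighting below is a generic soft assignment, not the exact meta-learned confidence proposed in the paper.
```python
import numpy as np

def refine_prototypes(prototypes, queries, query_confidence):
    """Blend each class prototype with confidence-weighted query embeddings.

    prototypes:       (num_classes, dim) initial class means from the support set
    queries:          (num_queries, dim) unlabeled query embeddings
    query_confidence: (num_queries, num_classes) soft assignment weights in [0, 1]
    """
    weighted_sum = query_confidence.T @ queries                    # (num_classes, dim)
    weight_total = query_confidence.sum(axis=0, keepdims=True).T   # (num_classes, 1)
    # Each refined prototype mixes the support prototype (unit weight)
    # with the confidence-weighted queries assigned to that class.
    return (prototypes + weighted_sum) / (1.0 + weight_total)

prototypes = np.array([[0.0, 0.0], [1.0, 1.0]])
queries = np.array([[0.1, 0.0], [0.9, 1.1]])
confidence = np.array([[0.9, 0.1], [0.2, 0.8]])
print(refine_prototypes(prototypes, queries, confidence))
```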
arXiv Detail & Related papers (2020-02-27T10:22:17Z)