On Uncertainty Calibration and Selective Generation in Probabilistic
Neural Summarization: A Benchmark Study
- URL: http://arxiv.org/abs/2304.08653v1
- Date: Mon, 17 Apr 2023 23:06:28 GMT
- Title: On Uncertainty Calibration and Selective Generation in Probabilistic
Neural Summarization: A Benchmark Study
- Authors: Polina Zablotskaia, Du Phan, Joshua Maynez, Shashi Narayan, Jie Ren,
Jeremiah Liu
- Abstract summary: Modern deep models for summarization attain impressive benchmark performance, but they are prone to generating miscalibrated predictive uncertainty.
This means that they assign high confidence to low-quality predictions, leading to compromised reliability and trustworthiness in real-world applications.
Probabilistic deep learning methods are common solutions to the miscalibration problem, but their relative effectiveness in complex autoregressive summarization tasks is not well understood.
- Score: 14.041071717005362
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern deep models for summarization attain impressive benchmark
performance, but they are prone to generating miscalibrated predictive
uncertainty. This means that they assign high confidence to low-quality
predictions, leading to compromised reliability and trustworthiness in
real-world applications. Probabilistic deep learning methods are common
solutions to the miscalibration problem. However, their relative effectiveness
in complex autoregressive summarization tasks is not well understood. In this
work, we thoroughly investigate different state-of-the-art probabilistic
methods' effectiveness in improving the uncertainty quality of the neural
summarization models, across three large-scale benchmarks with varying
difficulty. We show that the probabilistic methods consistently improve the
model's generation and uncertainty quality, leading to improved selective
generation performance (i.e., abstaining from low-quality summaries) in
practice. We also reveal notable failure patterns of probabilistic methods
widely adopted in the NLP community (e.g., Deep Ensemble and Monte Carlo Dropout),
underscoring the importance of choosing an appropriate method for the data setting.
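The abstract does not spell out a scoring recipe for selective generation, so the following is a minimal, model-agnostic sketch of the idea: score each candidate summary with the mean length-normalized sequence log-likelihood across K ensemble members (Deep Ensemble checkpoints or Monte Carlo Dropout samples), then abstain below a threshold. All shapes, distributions, and the abstention rate are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: for each of N candidate summaries we have K
# length-normalized sequence log-likelihoods, one per ensemble member
# (e.g., Deep Ensemble checkpoints or Monte Carlo Dropout samples).
N, K = 1000, 8
ens_logliks = rng.normal(loc=-1.0, scale=0.3, size=(N, K))

# Confidence = mean log-likelihood across members; uncertainty = spread.
confidence = ens_logliks.mean(axis=1)
disagreement = ens_logliks.std(axis=1)

# Selective generation: emit a summary only if confidence clears a
# threshold chosen here to hit a fixed abstention rate.
abstain_rate = 0.2
threshold = np.quantile(confidence, abstain_rate)
emit = confidence >= threshold

print(f"emitted {emit.mean():.0%} of summaries; "
      f"mean disagreement on emitted: {disagreement[emit].mean():.3f}")
```

In practice the threshold would be tuned on held-out data against a summary-quality metric, trading coverage for quality.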
Related papers
- Achieving Well-Informed Decision-Making in Drug Discovery: A Comprehensive Calibration Study using Neural Network-Based Structure-Activity Models [4.619907534483781]
Computational models that predict drug-target interactions are valuable tools to accelerate the development of new therapeutic agents.
However, such models can be poorly calibrated, which results in unreliable uncertainty estimates.
We show that combining post hoc calibration methods with well-performing uncertainty quantification approaches can boost model accuracy and calibration.
arXiv Detail & Related papers (2024-07-19T10:29:00Z)
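The summary above mentions combining a post hoc calibration method with uncertainty quantification but does not name one; temperature scaling is a standard post hoc choice. A minimal sketch, with synthetic validation logits and labels standing in for a real structure-activity model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def nll(T, logits, labels):
    # Negative log-likelihood of temperature-scaled softmax probabilities.
    z = logits / T
    z = z - z.max(axis=1, keepdims=True)       # numerical stability
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

# Hypothetical validation set: deliberately overconfident binary logits.
rng = np.random.default_rng(1)
logits = rng.normal(size=(500, 2)) * 3.0
labels = (logits[:, 1] + rng.normal(size=500) > 0).astype(int)

# Fit a single scalar temperature on the validation NLL.
res = minimize_scalar(nll, bounds=(0.05, 10.0), args=(logits, labels),
                      method="bounded")
print(f"fitted temperature: {res.x:.2f}")      # T > 1 indicates overconfidence
```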
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
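The summary defines data and model uncertainty only informally; a common way to realize this split is the law of total variance over stochastic forward passes. A sketch with synthetic per-pass means and variances (UAL's actual architecture and losses are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical MC samples: K stochastic forward passes (e.g., MC Dropout),
# each predicting a mean and a variance per example (heteroscedastic head).
N, K = 200, 32
means = rng.normal(size=(N, K))
variances = rng.gamma(2.0, 0.1, size=(N, K))

# Law of total variance:
#   data (aleatoric) uncertainty  = E_k[ var_k ]
#   model (epistemic) uncertainty = Var_k[ mean_k ]
data_unc = variances.mean(axis=1)
model_unc = means.var(axis=1)
total_unc = data_unc + model_unc

print(f"avg data uncertainty:  {data_unc.mean():.3f}")
print(f"avg model uncertainty: {model_unc.mean():.3f}")
```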
- Exploring Predictive Uncertainty and Calibration in NLP: A Study on the Impact of Method & Data Scarcity [7.3372471678239215]
We assess the quality of estimates from a wide array of approaches and their dependence on the amount of available data.
We find that while approaches based on pre-trained models and ensembles achieve the best results overall, the quality of uncertainty estimates can surprisingly suffer with more data.
arXiv Detail & Related papers (2022-10-20T15:42:02Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
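The exact objective is not given in the summary above, so the sketch below uses a KL(prior || prediction) penalty as a plausible stand-in for conditionally raising the entropy of overconfident predictions toward the label prior; the function name and all inputs are hypothetical:

```python
import numpy as np

def entropy_raise_loss(probs, prior, weight=1.0):
    """KL(prior || probs) penalty: a hedged stand-in for pulling
    overconfident predictions toward the label prior."""
    eps = 1e-12
    return weight * np.sum(prior * (np.log(prior + eps) - np.log(probs + eps)),
                           axis=-1)

# Hypothetical: uniform label prior over 3 classes, one overconfident and
# one well-calibrated prediction on out-of-support "prior augmented" points.
prior = np.full(3, 1 / 3)
overconfident = np.array([0.98, 0.01, 0.01])
calibrated = np.array([0.40, 0.35, 0.25])

print(f"penalty (overconfident): {entropy_raise_loss(overconfident, prior):.3f}")
print(f"penalty (calibrated):    {entropy_raise_loss(calibrated, prior):.3f}")
```

The penalty is large for the sharply peaked prediction and near zero for the one already close to the prior, which is the behavior the abstract describes.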
- Variance based sensitivity analysis for Monte Carlo and importance sampling reliability assessment with Gaussian processes [0.0]
We propose a methodology to quantify the sensitivity of the probability of failure estimator to two uncertainty sources.
This analysis also enables control of the overall error associated with the failure probability estimate and thus provides an accuracy criterion for the estimation.
The approach is proposed for both a Monte Carlo based method as well as an importance sampling based method, seeking to improve the estimation of rare event probabilities.
arXiv Detail & Related papers (2020-11-30T17:06:28Z)
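To make the Monte Carlo versus importance sampling contrast for rare-event probabilities concrete, here is a toy reliability problem with a known failure probability; the threshold, proposal distribution, and sample size are illustrative, and the paper's Gaussian-process estimator and sensitivity analysis are not reproduced:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Toy reliability problem: failure when a standard normal response exceeds 3.
p_true = stats.norm.sf(3.0)          # ~1.35e-3

n = 20_000

# Plain Monte Carlo: many samples, few failures, high relative variance.
x = rng.standard_normal(n)
p_mc = (x > 3.0).mean()

# Importance sampling: draw from N(3, 1), shifted into the failure region,
# and reweight by the likelihood ratio f(x) / g(x).
y = rng.normal(3.0, 1.0, n)
w = stats.norm.pdf(y) / stats.norm.pdf(y, loc=3.0)
p_is = np.mean((y > 3.0) * w)

print(f"true {p_true:.2e}  MC {p_mc:.2e}  IS {p_is:.2e}")
```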
- Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited [68.8204255655161]
We introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way.
CORP is based on non-parametric isotonic regression and implemented via the pool-adjacent-violators (PAV) algorithm.
arXiv Detail & Related papers (2020-08-07T08:22:26Z)
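CORP builds reliability diagrams from non-parametric isotonic regression computed by PAV; scikit-learn's IsotonicRegression implements PAV, so a minimal recalibration-curve sketch looks like this (the synthetic forecasts stand in for a real classifier, and CORP's consistency bands are omitted):

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(4)

# Hypothetical probabilistic classifier: overconfident at the extremes,
# since the true event probability is flatter than the stated confidence.
conf = rng.uniform(0.0, 1.0, 2000)
p_event = np.clip(0.5 + 0.8 * (conf - 0.5), 0.0, 1.0)
outcome = (rng.uniform(size=2000) < p_event).astype(float)

# CORP-style recalibration curve: isotonic regression of outcomes on
# forecasts, computed by the pool-adjacent-violators (PAV) algorithm.
iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrated = iso.fit_transform(conf, outcome)

# Plotting (conf, calibrated) sorted by conf gives the reliability diagram;
# deviation from the diagonal quantifies miscalibration.
order = np.argsort(conf)
print(np.column_stack([conf[order], calibrated[order]])[::400])
```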
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
arXiv Detail & Related papers (2020-07-10T01:55:02Z)
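A small numerical sketch of why a one-vs-all parametrization can behave differently from softmax on out-of-distribution inputs; the logits are made up, and this shows the general mechanism rather than the paper's training setup:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def one_vs_all(logits):
    # Independent sigmoid per class: probabilities need not sum to 1,
    # so low activations on every class can signal "none of the above".
    return 1.0 / (1.0 + np.exp(-logits))

# Hypothetical out-of-distribution input: all class logits are low.
logits = np.array([-4.0, -4.5, -5.0])
print("softmax:   ", softmax(logits))     # still sums to 1: falsely confident
print("one-vs-all:", one_vs_all(logits))  # all small: flags uncertainty
```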
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.