Understanding Neural Abstractive Summarization Models via Uncertainty
- URL: http://arxiv.org/abs/2010.07882v1
- Date: Thu, 15 Oct 2020 16:57:27 GMT
- Title: Understanding Neural Abstractive Summarization Models via Uncertainty
- Authors: Jiacheng Xu, Shrey Desai, Greg Durrett
- Abstract summary: Seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly.
- Score: 54.37665950633147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An advantage of seq2seq abstractive summarization models is that they
generate text in a free-form manner, but this flexibility makes it difficult to
interpret model behavior. In this work, we analyze summarization decoders in
both blackbox and whitebox ways by studying the entropy, or uncertainty, of
the model's token-level predictions. For two strong pre-trained models, PEGASUS
and BART, on two summarization datasets, we find a strong correlation between
low prediction entropy and where the model copies tokens rather than generating
novel text. The decoder's uncertainty also connects to factors like sentence
position and syntactic distance between adjacent pairs of tokens, giving a
sense of what factors make a context particularly selective for the model's
next output token. Finally, we study the relationship of decoder uncertainty
and attention behavior to understand how attention gives rise to these observed
effects in the model. We show that uncertainty is a useful perspective for
analyzing summarization and text generation models more broadly.
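As a concrete illustration of the analysis described above, the sketch below computes the entropy of each token-level prediction while decoding with an off-the-shelf BART summarizer, and flags tokens that also appear in the source as a crude copy proxy. This is a minimal sketch against the HuggingFace transformers API, not the authors' code; the checkpoint name and the membership-based copy heuristic are assumptions for illustration.

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

model_name = "facebook/bart-large-cnn"  # assumed checkpoint; a PEGASUS checkpoint works analogously
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name).eval()

article = "..."  # the source document to summarize

inputs = tokenizer(article, return_tensors="pt", truncation=True)
with torch.no_grad():
    out = model.generate(
        **inputs,
        max_length=60,
        num_beams=1,            # greedy decoding keeps scores aligned with output tokens
        do_sample=False,
        output_scores=True,
        return_dict_in_generate=True,
    )

# out.scores holds one logit tensor (batch x vocab) per generated step.
source_ids = set(inputs["input_ids"][0].tolist())
for step, logits in enumerate(out.scores):
    probs = torch.softmax(logits[0], dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
    token_id = out.sequences[0, step + 1].item()   # position 0 is the decoder start token
    copied = token_id in source_ids                # crude token-level proxy for copying
    print(f"{tokenizer.decode([token_id])!r:>15}  H = {entropy:.2f}  copied = {copied}")
```

Under the paper's finding, steps flagged as copied should tend to show lower entropy H than steps that generate novel text.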
Related papers
- Sub-graph Based Diffusion Model for Link Prediction [43.15741675617231]
Denoising Diffusion Probabilistic Models (DDPMs) are a contemporary class of generative models with exceptional sample quality.
We build a novel generative model for link prediction, using a dedicated design that decomposes the likelihood estimation process via Bayes' rule.
Our proposed method presents numerous advantages: (1) transferability across datasets without retraining, (2) promising generalization on limited training data, and (3) robustness against graph adversarial attacks.
arXiv Detail & Related papers (2024-09-13T02:23:55Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- ChiroDiff: Modelling chirographic data with Diffusion Models [132.5223191478268]
We introduce a powerful model class, Denoising Diffusion Probabilistic Models (DDPMs), for chirographic data.
Our model, named "ChiroDiff", is non-autoregressive, learns to capture holistic concepts, and therefore remains resilient to higher temporal sampling rates.
arXiv Detail & Related papers (2023-04-07T15:17:48Z)
- Towards Improving Faithfulness in Abstractive Summarization [37.19777407790153]
We propose a Faithfulness Enhanced Summarization model (FES) to improve fidelity in abstractive summarization.
Our model outperforms strong baselines in experiments on CNN/DM and XSum.
arXiv Detail & Related papers (2022-10-04T19:52:09Z)
- Pathologies of Pre-trained Language Models in Few-shot Fine-tuning [50.3686606679048]
We show that pre-trained language models given few examples exhibit strong prediction bias across labels.
Although few-shot fine-tuning can mitigate this prediction bias, our analysis shows that models gain performance improvements by capturing non-task-related features.
These observations warn that pursuing model performance with fewer examples may incur pathological prediction behavior.
arXiv Detail & Related papers (2022-04-17T15:55:18Z)
- Bayesian Graph Contrastive Learning [55.36652660268726]
We propose a novel perspective on graph contrastive learning methods, showing that random augmentations effectively lead to stochastic encoders.
Our proposed method represents each node by a distribution in the latent space in contrast to existing techniques which embed each node to a deterministic vector.
We show a considerable improvement in performance compared to existing state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2021-12-15T01:45:32Z)
- On the Lack of Robust Interpretability of Neural Text Classifiers [14.685352584216757]
We assess the robustness of interpretations of neural text classifiers based on pretrained Transformer encoders.
Our tests show surprising deviations from expected behavior, raising questions about the extent of insights that practitioners may draw from interpretations.
arXiv Detail & Related papers (2021-06-08T18:31:02Z)
- Dissecting Generation Modes for Abstractive Summarization Models via Ablation and Attribution [34.2658286826597]
We propose a two-step method to interpret summarization model decisions.
We first analyze the model's behavior by ablating the full model to categorize each decoder decision into one of several generation modes.
After isolating decisions that do depend on the input, we explore interpreting these decisions using several different attribution methods.
arXiv Detail & Related papers (2021-06-03T00:54:16Z)
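The two-step idea above can be approximated in a few lines: compare the model's next-token distribution with and without the source document; steps whose distribution barely changes are language-model-like, while large shifts indicate decisions that depend on the input. A minimal sketch under those assumptions (illustrative checkpoint, placeholder source, and prefix; not the paper's released code):

```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-large-cnn")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-large-cnn").eval()

def next_token_dist(source, summary_prefix):
    enc = tok(source, return_tensors="pt", truncation=True)
    dec = tok(summary_prefix, return_tensors="pt").input_ids[:, :-1]  # drop trailing </s>
    with torch.no_grad():
        logits = model(input_ids=enc.input_ids,
                       attention_mask=enc.attention_mask,
                       decoder_input_ids=dec).logits[0, -1]
    return torch.softmax(logits, dim=-1)

source = "..."            # the article being summarized (placeholder)
prefix = "The company"    # summary decoded so far (illustrative)

p_full = next_token_dist(source, prefix)
p_ablated = next_token_dist("", prefix)  # source ablated: decoder context only

# Large divergence -> the decision depends on the input; small -> an LM-like step.
kl = torch.sum(p_full * (torch.log(p_full + 1e-12) - torch.log(p_ablated + 1e-12)))
print(f"KL(full || ablated) = {kl.item():.3f}")
```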
- Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
arXiv Detail & Related papers (2020-10-18T16:55:25Z)
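The kNN entry above lends itself to a compact illustration: given feature vectors from a fine-tuned model, retrieve the k training examples nearest to a test input to see which examples its prediction most resembles. A minimal sketch, assuming a hypothetical encode() feature extractor (not the paper's implementation):

```python
import numpy as np

def encode(texts):
    """Hypothetical placeholder: return one feature vector per text,
    e.g. the [CLS] hidden state of a fine-tuned encoder."""
    rng = np.random.default_rng(0)  # random stand-in so the sketch runs
    return rng.normal(size=(len(texts), 768))

train_texts = ["example A", "example B", "example C"]
train_reps = encode(train_texts)
# Normalize rows so that dot products are cosine similarities.
train_reps /= np.linalg.norm(train_reps, axis=1, keepdims=True)

def k_nearest_training_examples(query_text, k=2):
    q = encode([query_text])[0]
    q /= np.linalg.norm(q)
    sims = train_reps @ q
    top = np.argsort(-sims)[:k]
    return [(train_texts[i], float(sims[i])) for i in top]

print(k_nearest_training_examples("a test input"))
```

Inspecting the retrieved neighbors is what surfaces learned spurious associations: if the nearest training examples share an artifact rather than the task-relevant content, the prediction likely leans on that artifact.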
This list is automatically generated from the titles and abstracts of the papers listed on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.