Precisely the Point: Adversarial Augmentations for Faithful and
Informative Text Generation
- URL: http://arxiv.org/abs/2210.12367v1
- Date: Sat, 22 Oct 2022 06:38:28 GMT
- Title: Precisely the Point: Adversarial Augmentations for Faithful and
Informative Text Generation
- Authors: Wenhao Wu, Wei Li, Jiachen Liu, Xinyan Xiao, Sujian Li, Yajuan Lyu
- Abstract summary: In this paper, we conduct the first quantitative analysis of the robustness of pre-trained Seq2Seq models.
We find that even the current SOTA pre-trained Seq2Seq model (BART) is still vulnerable, which leads to significant degradation in faithfulness and informativeness on text generation tasks.
We propose a novel adversarial augmentation framework, namely AdvSeq, for improving the faithfulness and informativeness of Seq2Seq models.
- Score: 45.37475848753975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Though model robustness has been extensively studied in language
understanding, the robustness of Seq2Seq generation remains understudied. In
this paper, we conduct the first quantitative analysis of the robustness of
pre-trained Seq2Seq models. We find that even the current SOTA pre-trained
Seq2Seq model (BART) is still vulnerable, which leads to significant
degradation in faithfulness and informativeness on text generation tasks. This
motivates us to propose a novel adversarial augmentation framework, namely
AdvSeq, for generally improving the faithfulness and informativeness of
Seq2Seq models by enhancing their robustness. AdvSeq automatically constructs
two types of adversarial augmentations during training: implicit adversarial
samples, built by perturbing word representations, and explicit adversarial
samples, built by word swapping; both effectively improve Seq2Seq robustness.
Extensive experiments on three popular text generation tasks demonstrate that
AdvSeq significantly improves both the faithfulness and informativeness of
Seq2Seq generation under both automatic and human evaluation settings.
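
The two augmentation types named in the abstract can be made concrete with a short PyTorch sketch. This is a minimal illustration assuming a HuggingFace-style seq2seq model (e.g. BART) that accepts `inputs_embeds` and `labels`; the gradient-based perturbation is a generic FGSM-style recipe and the word swap is a uniform random substitution, not necessarily AdvSeq's exact construction.

```python
import torch

def implicit_adversarial_loss(model, batch, epsilon=1e-2):
    # Implicit augmentation: perturb the source word embeddings along the
    # gradient of the loss (a generic FGSM-style step; AdvSeq's exact
    # formulation may differ).
    embeds = model.get_input_embeddings()(batch["input_ids"]).detach()
    embeds.requires_grad_(True)
    loss = model(inputs_embeds=embeds,
                 attention_mask=batch["attention_mask"],
                 labels=batch["labels"]).loss
    (grad,) = torch.autograd.grad(loss, embeds)
    # Step in the direction that increases the loss, normalized per token.
    delta = epsilon * grad / (grad.norm(dim=-1, keepdim=True) + 1e-12)
    return model(inputs_embeds=(embeds + delta).detach(),
                 attention_mask=batch["attention_mask"],
                 labels=batch["labels"]).loss

def explicit_word_swap(input_ids, vocab_size, swap_prob=0.1):
    # Explicit augmentation: randomly swap a fraction of source tokens
    # (a uniform-random stand-in for the paper's word-swapping procedure).
    mask = torch.rand(input_ids.shape, device=input_ids.device) < swap_prob
    random_ids = torch.randint_like(input_ids, vocab_size)
    return torch.where(mask, random_ids, input_ids)
```

In training, either adversarial loss would be added to the standard maximum-likelihood objective so the model learns to stay faithful and informative under perturbation.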
Related papers
- Deceiving Question-Answering Models: A Hybrid Word-Level Adversarial Approach [11.817276791266284]
This paper introduces QA-Attack, a novel word-level adversarial strategy that fools QA models.
Our attention-based attack combines a customized attention mechanism with a deletion-based ranking strategy to identify and target specific words.
It crafts deceptive inputs by carefully choosing and substituting synonyms, preserving grammatical integrity while misleading the model into producing incorrect responses.
arXiv Detail & Related papers (2024-11-12T23:54:58Z)
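
The QA-Attack entry above boils down to a ranked word-substitution loop, which the following sketch illustrates. It assumes WordNet via NLTK as the synonym source; `importance` (per-word scores) and `model_answers` (the victim QA model) are hypothetical stand-ins for the paper's attention/deletion ranking and target model.

```python
from nltk.corpus import wordnet  # requires nltk.download("wordnet")

def synonym_candidates(word):
    # Collect WordNet lemmas for the word, excluding the word itself.
    candidates = set()
    for synset in wordnet.synsets(word):
        for lemma in synset.lemmas():
            name = lemma.name().replace("_", " ")
            if name.lower() != word.lower():
                candidates.add(name)
    return sorted(candidates)

def attack(question, importance, model_answers, gold_answer):
    # Try the highest-ranked words first; return the first substitution
    # that flips the model's answer while leaving the rest of the
    # question intact.
    for idx in sorted(range(len(question)), key=lambda i: -importance[i]):
        for synonym in synonym_candidates(question[idx]):
            perturbed = question[:idx] + [synonym] + question[idx + 1:]
            if model_answers(perturbed) != gold_answer:
                return perturbed
    return None  # no successful single-word substitution found
```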
- On the Robustness of Aspect-based Sentiment Analysis: Rethinking Model, Data, and Training [109.9218185711916]
Aspect-based sentiment analysis (ABSA) aims to automatically infer the sentiment polarities toward specific aspects of products or services expressed in social media texts or reviews.
We propose to enhance the ABSA robustness by systematically rethinking the bottlenecks from all possible angles, including model, data, and training.
arXiv Detail & Related papers (2023-04-19T11:07:43Z)
- The Impacts of Unanswerable Questions on the Robustness of Machine Reading Comprehension Models [0.20646127669654826]
We fine-tune three state-of-the-art language models on either SQuAD 1.1 or SQuAD 2.0 and then evaluate their robustness under adversarial attacks.
Our experiments reveal that current models fine-tuned on SQuAD 2.0 do not initially appear to be any more robust than ones fine-tuned on SQuAD 1.1.
Furthermore, we find that the robustness of models fine-tuned on SQuAD 2.0 extends to additional out-of-domain datasets.
arXiv Detail & Related papers (2023-01-31T20:51:14Z)
- FRSUM: Towards Faithful Abstractive Summarization via Enhancing Factual Robustness [56.263482420177915]
We study the faithfulness of existing systems from a new perspective of factual robustness.
We propose a novel training strategy, namely FRSUM, which teaches the model to defend against both explicit adversarial samples and implicit factual adversarial perturbations.
arXiv Detail & Related papers (2022-11-01T06:09:00Z)
- Towards Improving Faithfulness in Abstractive Summarization [37.19777407790153]
We propose a Faithfulness Enhanced Summarization model (FES) to improve fidelity in abstractive summarization.
Our model outperforms strong baselines in experiments on CNN/DM and XSum.
arXiv Detail & Related papers (2022-10-04T19:52:09Z)
- E2S2: Encoding-Enhanced Sequence-to-Sequence Pretraining for Language Understanding and Generation [95.49128988683191]
Sequence-to-sequence (seq2seq) learning is a popular approach to the large-scale pretraining of language models.
We propose an encoding-enhanced seq2seq pretraining strategy, namely E2S2.
E2S2 improves seq2seq models by integrating more efficient self-supervised information into the encoder.
arXiv Detail & Related papers (2022-05-30T08:25:36Z)
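
The E2S2 idea of strengthening the encoder with its own supervision can be sketched as an auxiliary encoder-side loss added to the usual seq2seq objective. The extra prediction head, the `mlm_labels` masking scheme, and the weight `alpha` below are illustrative assumptions layered on a HuggingFace-style model output, not the paper's exact objectives.

```python
import torch.nn as nn
import torch.nn.functional as F

class EncodingEnhancedLoss(nn.Module):
    # Combines the seq2seq cross-entropy with a denoising loss computed
    # directly on the encoder's hidden states (a sketch of the general
    # "give the encoder its own learning signal" idea).
    def __init__(self, model, hidden_size, vocab_size, alpha=0.5):
        super().__init__()
        self.model = model
        self.encoder_head = nn.Linear(hidden_size, vocab_size)  # assumed extra head
        self.alpha = alpha

    def forward(self, input_ids, attention_mask, labels, mlm_labels):
        out = self.model(input_ids=input_ids,
                         attention_mask=attention_mask,
                         labels=labels)
        logits = self.encoder_head(out.encoder_last_hidden_state)
        encoder_loss = F.cross_entropy(logits.view(-1, logits.size(-1)),
                                       mlm_labels.view(-1),
                                       ignore_index=-100)  # -100 marks unmasked tokens
        return out.loss + self.alpha * encoder_loss
```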
- Adversarial GLUE: A Multi-Task Benchmark for Robustness Evaluation of Language Models [86.02610674750345]
Adversarial GLUE (AdvGLUE) is a new multi-task benchmark to explore and evaluate the vulnerabilities of modern large-scale language models under various types of adversarial attacks.
We apply 14 adversarial attack methods to GLUE tasks to construct AdvGLUE, which is further validated by humans for reliable annotations.
All the language models and robust training methods we tested perform poorly on AdvGLUE, with scores lagging far behind the benign accuracy.
arXiv Detail & Related papers (2021-11-04T12:59:55Z)
- Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting [54.03356526990088]
We propose Sequence Span Rewriting (SSR) as a self-supervised sequence-to-sequence (seq2seq) pre-training objective.
SSR provides more fine-grained learning signals for text representations by supervising the model to rewrite imperfect spans to ground truth.
Our experiments with T5 models on various seq2seq tasks show that SSR can substantially improve seq2seq pre-training.
arXiv Detail & Related papers (2021-01-02T10:27:11Z)
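
The SSR objective lends itself to a toy data-construction sketch: replace a span with an imperfect machine-generated rewrite and supervise the model to restore the ground truth. The `infill_model` callable and the `<rw>`/`</rw>` markers are hypothetical stand-ins for the paper's span generator and tagging scheme.

```python
import random

def make_ssr_example(tokens, infill_model, span_len=5):
    # Pick a span, ask a weaker model to fill it in (imperfectly), and
    # build a (source-with-noisy-span, gold-span) training pair.
    start = random.randrange(max(1, len(tokens) - span_len))
    gold_span = tokens[start:start + span_len]
    noisy_span = infill_model(tokens[:start], tokens[start + span_len:])
    source = (tokens[:start] + ["<rw>"] + noisy_span + ["</rw>"]
              + tokens[start + span_len:])
    return source, gold_span  # the model learns to rewrite the marked span
```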