Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions
- URL: http://arxiv.org/abs/2410.20019v1
- Date: Sat, 26 Oct 2024 00:35:15 GMT
- Title: Attacks against Abstractive Text Summarization Models through Lead Bias and Influence Functions
- Authors: Poojitha Thota, Shirin Nilizadeh
- Abstract summary: Large Language Models are vulnerable to adversarial perturbations and data poisoning attacks.
In this work, we unveil a novel approach by exploiting the inherent lead bias in summarization models.
We also introduce an innovative application of influence functions to execute data poisoning, which compromises the model's integrity.
- Score: 1.7863534204867277
- License:
- Abstract: Large Language Models have introduced novel opportunities for text comprehension and generation. Yet, they are vulnerable to adversarial perturbations and data poisoning attacks, particularly in tasks like text classification and translation. However, the adversarial robustness of abstractive text summarization models remains less explored. In this work, we unveil a novel approach that exploits the inherent lead bias in summarization models to perform adversarial perturbations. Furthermore, we introduce an innovative application of influence functions to execute data poisoning, which compromises the model's integrity. This approach not only skews the model's behavior toward desired outcomes but also reveals a new behavioral change, where models under attack tend to generate extractive rather than abstractive summaries.
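The abstract does not spell out the attack procedure, but the lead-bias side lends itself to a minimal sketch: news-trained summarizers over-attend to a document's opening sentences, so attacker-controlled text injected at the lead tends to surface in the generated summary. The model choice and trigger sentence below are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch of a lead-bias perturbation (illustrative only; the paper's
# exact attack procedure, trigger text, and target models are assumptions).
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

def lead_bias_perturb(document: str, adversarial_sentence: str) -> str:
    """Prepend attacker-controlled text to the lead of the document.

    Summarization models trained on news data over-attend to the first
    sentences, so content injected at the lead is likely to surface in
    the generated summary.
    """
    return adversarial_sentence + " " + document

original = (
    "The city council approved the new transit budget on Monday. "
    "Officials said the plan expands bus service to outlying districts. "
    "Critics argued the funding falls short of earlier promises."
)
perturbed = lead_bias_perturb(
    original, "Experts agree the budget is a historic failure."
)

print(summarizer(original, max_length=40, min_length=10)[0]["summary_text"])
print(summarizer(perturbed, max_length=40, min_length=10)[0]["summary_text"])
```

For the poisoning side, influence functions in the standard sense of Koh and Liang (2017) score a training example z against a test example z_test as -∇L(z_test, θ)ᵀ H⁻¹ ∇L(z, θ); how the paper adapts this to select poison samples for summarization is not detailed in the abstract.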
Related papers
- SA-Attack: Improving Adversarial Transferability of Vision-Language Pre-training Models via Self-Augmentation [56.622250514119294]
In contrast to white-box adversarial attacks, transfer attacks are more reflective of real-world scenarios.
We propose a self-augment-based transfer attack method, termed SA-Attack.
arXiv Detail & Related papers (2023-12-08T09:08:50Z)
- How Well Do Text Embedding Models Understand Syntax? [50.440590035493074]
The ability of text embedding models to generalize across a wide range of syntactic contexts remains under-explored.
Our findings reveal that existing text embedding models have not sufficiently addressed these syntactic understanding challenges.
We propose strategies to augment the generalization ability of text embedding models in diverse syntactic scenarios.
arXiv Detail & Related papers (2023-11-14T08:51:00Z)
- Correcting Diverse Factual Errors in Abstractive Summarization via Post-Editing and Language Model Infilling [56.70682379371534]
We show that our approach vastly outperforms prior methods in correcting erroneous summaries.
Our model -- FactEdit -- improves factuality scores by over 11 points on CNN/DM and over 31 points on XSum.
arXiv Detail & Related papers (2022-10-22T07:16:19Z)
- The Factual Inconsistency Problem in Abstractive Text Summarization: A Survey [25.59111855107199]
Neural encoder-decoder models, pioneered by the Seq2Seq framework, have been proposed to generate more abstractive summaries.
At a high level, such neural models can freely generate summaries without any constraint on the words or phrases used.
However, the neural model's abstraction ability is a double-edged sword.
arXiv Detail & Related papers (2021-04-30T08:46:13Z)
- SummVis: Interactive Visual Analysis of Models, Data, and Evaluation for Text Summarization [14.787106201073154]
SummVis is an open-source tool for visualizing abstractive summaries.
It enables fine-grained analysis of the models, data, and evaluation metrics associated with text summarization.
arXiv Detail & Related papers (2021-04-15T17:13:00Z)
- On the Transferability of Adversarial Attacks against Neural Text Classifier [121.6758865857686]
We investigate the transferability of adversarial examples for text classification models.
We propose a genetic algorithm to find an ensemble of models that can induce adversarial examples to fool almost all existing models.
From these adversarial examples, we derive word replacement rules that can be used for model diagnostics (a toy sketch of the ensemble search follows this entry).
arXiv Detail & Related papers (2020-11-17T10:45:05Z)
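As a hedged illustration of the ensemble-search idea above, the sketch below runs a simple genetic algorithm over binary masks that select a subset of candidate models. The fitness function, encoding, and hyperparameters are stand-ins, not the paper's; a real fitness would measure how often adversarial examples crafted against the selected ensemble fool held-out victim models.

```python
# Toy genetic search over model ensembles (all details are assumptions).
import random

def fitness(mask, transfer_scores):
    """Stand-in score: mean transferability of the selected models."""
    chosen = [s for s, m in zip(transfer_scores, mask) if m]
    return sum(chosen) / len(chosen) if chosen else 0.0

def genetic_search(transfer_scores, pop_size=20, generations=50, p_mut=0.1):
    n = len(transfer_scores)
    # Each individual is a binary mask over the candidate model pool.
    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, transfer_scores), reverse=True)
        survivors = pop[: pop_size // 2]        # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)  # two parents
            cut = random.randrange(1, n)        # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=lambda m: fitness(m, transfer_scores))

scores = [0.62, 0.55, 0.71, 0.40, 0.68]  # hypothetical per-model estimates
print(genetic_search(scores))
```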
- Understanding Neural Abstractive Summarization Models via Uncertainty [54.37665950633147]
Seq2seq abstractive summarization models generate text in a free-form manner.
We study the entropy, or uncertainty, of the model's token-level predictions.
We show that uncertainty is a useful perspective for analyzing summarization and text generation models more broadly (a sketch of per-token entropy follows this entry).
arXiv Detail & Related papers (2020-10-15T16:57:27Z)
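A minimal sketch of the token-level entropy analysis described above, assuming a Hugging Face seq2seq summarizer; the model and decoding settings are illustrative, not those of the paper.

```python
# Per-token entropy of a summarizer's predictive distribution (sketch).
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

name = "facebook/bart-large-cnn"  # illustrative model choice
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSeq2SeqLM.from_pretrained(name)

text = "The council approved the transit budget after a lengthy debate."
inputs = tok(text, return_tensors="pt")

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_length=30,
        min_length=5,
        num_beams=1,                 # greedy, so scores align with tokens
        return_dict_in_generate=True,
        output_scores=True,
    )

# out.scores holds one logit tensor per generated step; compute entropy:
for step, logits in enumerate(out.scores):
    probs = torch.softmax(logits[0], dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
    token = tok.decode(out.sequences[0, step + 1])
    print(f"{step:2d} {token!r:15} H = {entropy:.3f}")
```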
- Multi-Fact Correction in Abstractive Text Summarization [98.27031108197944]
Span-Fact is a suite of two factual correction models that leverages knowledge learned from question answering models to make corrections in system-generated summaries via span selection.
Our models employ single or multi-masking strategies to either iteratively or auto-regressively replace entities in order to ensure semantic consistency w.r.t. the source text.
Experiments show that our models significantly boost the factual consistency of system-generated summaries without sacrificing summary quality, in terms of both automatic metrics and human evaluation (a toy span-correction sketch follows this entry).
arXiv Detail & Related papers (2020-10-06T02:51:02Z)
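A crude sketch in the spirit of Span-Fact's QA-based span selection; the cloze-to-question step, the entity detection, and the QA model are assumptions, and the actual Span-Fact masking strategies are not reproduced here.

```python
# Hedged sketch: replace an unsupported summary entity with the span a QA
# model extracts from the source document (not the actual Span-Fact models).
from transformers import pipeline

qa = pipeline(
    "question-answering", model="distilbert-base-cased-distilled-squad"
)

def correct_entity(source: str, summary: str, entity: str) -> str:
    """Swap one summary entity for the QA model's answer from the source."""
    question = summary.replace(entity, "what")  # crude cloze-to-question turn
    answer = qa(question=question, context=source)["answer"]
    return summary.replace(entity, answer)

source = (
    "The report was written by Maria Lopez and published in 2019 "
    "by the city health department."
)
summary = "The report was written by John Smith."
print(correct_entity(source, summary, "John Smith"))
```

An iterative variant would mask and re-predict each entity in turn, keeping a replacement only when the QA model's confidence exceeds a threshold.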
- Generating (Factual?) Narrative Summaries of RCTs: Experiments with Neural Multi-Document Summarization [22.611879349101596]
We evaluate modern neural models for abstractive summarization of relevant article abstracts from systematic reviews.
We find that modern summarization systems yield consistently fluent and relevant synopses, but that they are not always factual.
arXiv Detail & Related papers (2020-08-25T22:22:50Z)