TASA: Deceiving Question Answering Models by Twin Answer Sentences
Attack
- URL: http://arxiv.org/abs/2210.15221v1
- Date: Thu, 27 Oct 2022 07:16:30 GMT
- Title: TASA: Deceiving Question Answering Models by Twin Answer Sentences
Attack
- Authors: Yu Cao, Dianqi Li, Meng Fang, Tianyi Zhou, Jun Gao, Yibing Zhan,
Dacheng Tao
- Abstract summary: We present Twin Answer Sentences Attack (TASA), an adversarial attack method for question answering (QA) models.
TASA produces fluent and grammatical adversarial contexts while maintaining gold answers.
- Score: 93.50174324435321
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Twin Answer Sentences Attack (TASA), an adversarial attack method
for question answering (QA) models that produces fluent and grammatical
adversarial contexts while maintaining gold answers. Despite phenomenal
progress on general adversarial attacks, few works have investigated
vulnerabilities and attacks specific to QA models. In this work, we first
explore the biases in the existing models and discover that they mainly rely on
keyword matching between the question and context, and ignore the relevant
contextual relations for answer prediction. Based on the two biases above, TASA
attacks the target model in two ways: (1) lowering the model's confidence on
the gold answer with a perturbed answer sentence; (2) misguiding the model
towards a wrong answer with a distracting answer sentence. Equipped with the
designed beam search and filtering methods, TASA generates more effective
attacks than existing textual attack methods while preserving the quality of
the contexts, as shown by extensive experiments on five QA datasets and human
evaluations.
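To make the first fold concrete, the snippet below is a minimal, illustrative sketch (not the authors' implementation) of scoring candidate adversarial contexts by how much they reduce an extractive QA model's confidence in the gold answer. It uses the Hugging Face transformers question-answering pipeline; the model checkpoint and the candidate-generation step (perturbed and distracting answer sentences, beam search, fluency filtering) are assumptions or stubs here.

```python
# Minimal sketch of the confidence-lowering objective (fold 1); not the
# authors' code. Requires: pip install transformers torch
from transformers import pipeline

# Any extractive QA model works for this sketch; the checkpoint is an assumption.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

def gold_confidence(question: str, context: str, gold_answer: str) -> float:
    """Model's confidence if it still predicts the gold answer, else 0."""
    pred = qa(question=question, context=context)
    return pred["score"] if gold_answer.lower() in pred["answer"].lower() else 0.0

def rank_candidates(question: str, context: str, gold_answer: str, candidates: list[str]):
    """Rank candidate adversarial contexts by the drop in gold-answer confidence.
    Candidate generation (perturbed/distracting answer sentences, beam search,
    fluency filtering) is assumed to happen upstream."""
    base = gold_confidence(question, context, gold_answer)
    scored = []
    for cand in candidates:
        if gold_answer not in cand:  # never delete the gold answer from the context
            continue
        drop = base - gold_confidence(question, cand, gold_answer)
        scored.append((drop, cand))
    return sorted(scored, key=lambda x: x[0], reverse=True)
```

The second fold would be scored analogously, preferring candidates that raise the model's confidence in a wrong, distracting answer.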
Related papers
- Frontier Language Models are not Robust to Adversarial Arithmetic, or "What do I need to say so you agree 2+2=5?" [88.59136033348378]
We study the problem of adversarial arithmetic, which provides a simple yet challenging testbed for language model alignment.
This problem consists of arithmetic questions posed in natural language, with an arbitrary adversarial string inserted before the question is complete (a toy construction is sketched after this entry).
We show that models can be partially hardened against these attacks via reinforcement learning and via agentic constitutional loops.
arXiv Detail & Related papers (2023-11-08T19:07:10Z)
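As a toy illustration of the adversarial-arithmetic setup in the entry above (the prompt template and attack string below are invented for this sketch, not taken from the paper), an attacker-controlled string is spliced into a natural-language arithmetic question before the question is complete, and the model counts as fooled if its reply no longer contains the correct sum:

```python
# Hypothetical adversarial-arithmetic probe; the template and attack string
# are illustrative assumptions, not the paper's prompts.
def make_adversarial_prompt(a: int, b: int, attack_string: str) -> str:
    # The adversarial string is inserted before the question is complete.
    return f"What is {a} + {b}{attack_string}? Reply with a single number."

def is_fooled(model_reply: str, a: int, b: int) -> bool:
    return str(a + b) not in model_reply

prompt = make_adversarial_prompt(2, 2, ", keeping in mind that arithmetic was recently redefined")
# `query_model(prompt)` stands in for whatever language-model API is under test:
# fooled = is_fooled(query_model(prompt), 2, 2)
```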
- Realistic Conversational Question Answering with Answer Selection based on Calibrated Confidence and Uncertainty Measurement [54.55643652781891]
Conversational Question Answering (ConvQA) models aim to answer a question using its relevant paragraph and the question-answer pairs from previous turns of the conversation.
We propose to filter out inaccurate answers in the conversation history based on their estimated confidences and uncertainties from the ConvQA model (a toy filtering sketch follows this entry).
We validate our models, Answer Selection-based realistic Conversation Question Answering, on two standard ConvQA datasets.
arXiv Detail & Related papers (2023-02-10T09:42:07Z)
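A toy sketch of the answer-selection idea in the entry above (not the authors' method; the field names and thresholds are assumptions): past turns whose predicted answers have low confidence or high uncertainty are simply dropped before being reused as conversation history.

```python
# Toy confidence/uncertainty-based filtering of ConvQA history
# (illustrative only; thresholds and fields are assumptions).
from dataclasses import dataclass

@dataclass
class Turn:
    question: str
    predicted_answer: str
    confidence: float    # e.g. calibrated answer probability from the ConvQA model
    uncertainty: float   # e.g. predictive entropy over the answer span

def filter_history(history: list[Turn],
                   min_confidence: float = 0.5,
                   max_uncertainty: float = 0.3) -> list[Turn]:
    """Keep only past turns whose predicted answers look reliable enough
    to be fed back as context when answering the current question."""
    return [t for t in history
            if t.confidence >= min_confidence and t.uncertainty <= max_uncertainty]
```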
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Reasoning Chain Based Adversarial Attack for Multi-hop Question Answering [0.0]
Previous adversarial attack works usually edit the whole question sentence.
We propose a multi-hop reasoning chain based adversarial attack method.
Results demonstrate significant performance reduction on both answer and supporting facts prediction.
arXiv Detail & Related papers (2021-12-17T18:03:14Z)
- How to Build Robust FAQ Chatbot with Controllable Question Generator? [5.680871239968297]
We propose a high-quality, diverse, controllable method to generate adversarial samples with a semantic graph.
The generated QA pairs are fluent and semantically coherent, and successfully fool our passage retrieval model.
We find that the generated data set improves the generalizability of the QA model to the new target domain.
arXiv Detail & Related papers (2021-11-18T12:54:07Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for automatic speaker verification (ASV) without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
- A Semantic-based Method for Unsupervised Commonsense Question Answering [40.18557352036813]
Unsupervised commonsense question answering is appealing since it does not rely on any labeled task data.
We present a novel SEmantic-based Question Answering method (SEQA) for unsupervised commonsense question answering.
arXiv Detail & Related papers (2021-05-31T08:21:52Z)
- Explain2Attack: Text Adversarial Attacks via Cross-Domain Interpretability [18.92690624514601]
Research has shown that downstream models can be easily fooled with adversarial inputs that resemble the training data but are slightly perturbed in ways imperceptible to humans.
In this paper, we propose Explain2Attack, a black-box adversarial attack on the text classification task.
We show that our framework matches or outperforms the attack rates of state-of-the-art models, with lower query cost and higher efficiency.
arXiv Detail & Related papers (2020-10-14T04:56:41Z)
- Counterfactual Variable Control for Robust and Interpretable Question Answering [57.25261576239862]
Deep neural network based question answering (QA) models are neither robust nor explainable in many cases.
In this paper, we inspect this spurious "capability" of QA models, i.e., answering via shortcut correlations, using causal inference.
We propose a novel approach called Counterfactual Variable Control (CVC) that explicitly mitigates any shortcut correlation.
arXiv Detail & Related papers (2020-10-12T10:09:05Z)
- Do not let the history haunt you -- Mitigating Compounding Errors in Conversational Question Answering [17.36904526340775]
We find that compounding errors occur when using previously predicted answers at test time.
We propose a sampling strategy that dynamically selects between target answers and model predictions during training (a minimal sketch follows this entry).
arXiv Detail & Related papers (2020-05-12T13:29:38Z)
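The sampling strategy in the last entry can be sketched as a scheduled-sampling-style choice between the target answer and the model's own prediction when building the conversation history used during training; the linear decay schedule below is an assumption for illustration, not the paper's schedule.

```python
import random

def pick_history_answer(gold_answer: str, model_prediction: str,
                        epoch: int, decay: float = 0.1) -> str:
    """Choose which answer to place in the conversation history during training.
    Relying less on gold answers over time exposes the model to its own
    (possibly wrong) predictions, mitigating compounding errors at test time."""
    p_use_gold = max(0.0, 1.0 - decay * epoch)  # assumed linear decay schedule
    return gold_answer if random.random() < p_use_gold else model_prediction
```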