Abstract: Semantic parsing maps natural language (NL) utterances into logical forms
(LFs), a task that underpins many advanced NLP applications. Semantic parsers gain substantial performance boosts from deep neural networks, but they also inherit the networks' vulnerability to adversarial examples. In this paper, we present an empirical study of the robustness of semantic parsers in the presence of adversarial attacks.
Formally, adversarial examples for semantic parsing are defined as perturbed utterance-LF pairs whose utterances have exactly the same meanings as the original ones. We propose a scalable methodology for constructing robustness test sets from existing benchmark corpora. Our results answer five research questions by measuring the performance of state-of-the-art parsers on the robustness test sets and by evaluating the effect of data augmentation.