On Robustness of Neural Semantic Parsers
- URL: http://arxiv.org/abs/2102.01563v2
- Date: Wed, 3 Feb 2021 12:19:10 GMT
- Title: On Robustness of Neural Semantic Parsers
- Authors: Shuo Huang, Zhuang Li, Lizhen Qu, Lei Pan
- Abstract summary: We provide an empirical study on the robustness of semantic parsers in the presence of adversarial attacks.
Formally, adversaries of semantic parsing are considered to be the perturbed utterance-LF pairs.
Our results answered five research questions in measuring the state-of-the-art parsers' performance on robustness test sets, and evaluating the effect of data augmentation.
- Score: 9.176739484385932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Semantic parsing maps natural language (NL) utterances into logical forms
(LFs), which underpins many advanced NLP problems. Semantic parsers gain
performance boosts with deep neural networks, but inherit vulnerabilities
against adversarial examples. In this paper, we provide an empirical study on
the robustness of semantic parsers in the presence of adversarial attacks.
Formally, adversaries of semantic parsing are considered to be the perturbed
utterance-LF pairs, whose utterances have exactly the same meanings as the
original ones. A scalable methodology is proposed to construct robustness test
sets based on existing benchmark corpora. Our results answered five research
questions in measuring the state-of-the-art parsers' performance on robustness
test sets, and evaluating the effect of data augmentation.
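The paper's core construction, perturbing the utterance while keeping the paired LF fixed so the meaning is unchanged, can be illustrated with a minimal sketch. The synonym table and data below are illustrative assumptions, not the authors' actual resources or methodology.

```python
# Minimal sketch: build a robustness test set by applying
# meaning-preserving word substitutions to utterances while
# keeping the paired logical form (LF) unchanged.
# The toy synonym table stands in for a real paraphrase
# resource (e.g., a WordNet- or PPDB-style lexicon).

SYNONYMS = {
    "largest": ["biggest"],
    "river": ["waterway"],
}

def perturb(utterance: str, lf: str):
    """Yield (perturbed utterance, original LF) pairs."""
    tokens = utterance.lower().split()
    for i, tok in enumerate(tokens):
        for syn in SYNONYMS.get(tok, []):
            perturbed = " ".join(tokens[:i] + [syn] + tokens[i + 1:])
            # The LF is kept fixed: the perturbation must preserve meaning.
            yield perturbed, lf

pairs = [("what is the largest state", "argmax($0, state($0), size($0))")]
robustness_set = [adv for utt, lf in pairs for adv in perturb(utt, lf)]
print(robustness_set)
```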
Related papers
- Enhancing adversarial robustness in Natural Language Inference using explanations [41.46494686136601]
We cast the spotlight on the underexplored task of Natural Language Inference (NLI).
We validate the usage of natural language explanation as a model-agnostic defence strategy through extensive experimentation.
We study the correlation of widely used language generation metrics with human perception, so that they can serve as a proxy towards robust NLI models.
arXiv Detail & Related papers (2024-09-11T17:09:49Z) - Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis [25.993502776271022]
Having a large parameter space is considered one of the main suspects behind neural networks' vulnerability to adversarial examples.
Previous research has demonstrated that depending on the considered model, the algorithm employed to generate adversarial examples may not function properly.
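For context on adversarial example generation in general, a common baseline algorithm is the fast gradient sign method (FGSM); the PyTorch sketch below illustrates that generic algorithm, not the specific procedures evaluated in this paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.05):
    """Generic FGSM: perturb x along the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

model = nn.Linear(10, 3)                       # toy classifier
x, y = torch.randn(4, 10), torch.tensor([0, 1, 2, 0])
x_adv = fgsm(model, x, y)
print((x_adv - x).abs().max().item())          # <= eps
```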
arXiv Detail & Related papers (2024-06-14T14:47:06Z) - Structural Ambiguity and its Disambiguation in Language Model Based
Parsers: the Case of Dutch Clause Relativization [2.9950872478176627]
We study how the presence of a prior sentence can resolve relative clause ambiguities.
Results show that a neurosymbolic approach, based on proof nets, is more open to data bias correction than an approach based on universal dependencies.
arXiv Detail & Related papers (2023-05-24T09:04:18Z) - On Robustness of Prompt-based Semantic Parsing with Large Pre-trained
Language Model: An Empirical Study on Codex [48.588772371355816]
This paper presents the first empirical study on the adversarial robustness of a large prompt-based language model of code, Codex.
Our results demonstrate that the state-of-the-art (SOTA) code-language models are vulnerable to carefully crafted adversarial examples.
arXiv Detail & Related papers (2023-01-30T13:21:00Z) - In and Out-of-Domain Text Adversarial Robustness via Label Smoothing [64.66809713499576]
We study the adversarial robustness provided by various label smoothing strategies in foundational models for diverse NLP tasks.
Our experiments show that label smoothing significantly improves adversarial robustness in pre-trained models like BERT, against various popular attacks.
We also analyze the relationship between prediction confidence and robustness, showing that label smoothing reduces over-confident errors on adversarial examples.
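Label smoothing itself is simple to reproduce; a minimal sketch follows, using PyTorch's built-in label_smoothing argument (the exact strategies compared in the paper may differ).

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 3)            # batch of 4, 3 classes
targets = torch.tensor([0, 2, 1, 0])

# Standard cross-entropy vs. its label-smoothed variant: smoothing
# redistributes 0.1 of the target mass uniformly over all classes,
# discouraging over-confident predictions.
plain = F.cross_entropy(logits, targets)
smoothed = F.cross_entropy(logits, targets, label_smoothing=0.1)
print(plain.item(), smoothed.item())
```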
arXiv Detail & Related papers (2022-12-20T14:06:50Z) - SUN: Exploring Intrinsic Uncertainties in Text-to-SQL Parsers [61.48159785138462]
This paper aims to improve the performance of text-to-SQL parsing by exploring the intrinsic uncertainties in neural-network-based approaches (called SUN).
Extensive experiments on five benchmark datasets demonstrate that our method significantly outperforms competitors and achieves new state-of-the-art results.
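SUN's precise mechanism is not spelled out in this summary; as generic background on surfacing a neural model's intrinsic uncertainty, the sketch below uses Monte Carlo dropout, an assumption standing in for the paper's actual method.

```python
import torch
import torch.nn as nn

# Generic Monte Carlo dropout sketch: keep dropout active at inference
# and average stochastic forward passes; the variance across passes is
# an uncertainty signal. This is NOT SUN's actual mechanism.

model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(),
                      nn.Dropout(0.2), nn.Linear(64, 4))

def mc_dropout_predict(model, x, samples=32):
    """Mean and variance of class probabilities over stochastic passes."""
    model.train()  # keeps Dropout layers stochastic
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(-1) for _ in range(samples)])
    return probs.mean(0), probs.var(0)

mean, var = mc_dropout_predict(model, torch.randn(2, 16))
print(mean.shape, var.mean().item())
```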
arXiv Detail & Related papers (2022-09-14T06:27:51Z) - Detecting Textual Adversarial Examples Based on Distributional
Characteristics of Data Representations [11.93653349589025]
Adversarial examples are constructed by adding small non-random perturbations to correctly classified inputs.
Adversarial attacks on natural language tasks have boomed in the last five years, using character-level, word-level, or phrase-level perturbations, but methods for detecting them remain underexplored.
We propose two new reactive detection methods for NLP to fill this gap.
Adapted LID and MDRE obtain state-of-the-art results on character-level, word-level, and phrase-level attacks on the IMDB dataset.
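Adapted LID builds on local intrinsic dimensionality; as background, the standard maximum-likelihood LID estimator over k-nearest-neighbor distances can be sketched as follows. This is the generic estimator, not the authors' adaptation.

```python
import numpy as np

def lid_mle(x, reference, k=20):
    """Maximum-likelihood LID estimate of point x against a reference
    batch of representations (Levina-Bickel style estimator)."""
    dists = np.sort(np.linalg.norm(reference - x, axis=1))
    # Drop a zero distance if x itself is in the reference set.
    dists = dists[dists > 0][:k]
    # -( (1/k) * sum_i log(r_i / r_k) )^{-1}
    return -1.0 / np.mean(np.log(dists / dists[-1]))

rng = np.random.default_rng(0)
reps = rng.normal(size=(500, 32))   # stand-in for hidden representations
print(lid_mle(reps[0], reps, k=20))
```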
arXiv Detail & Related papers (2022-04-29T02:32:02Z) - A Latent-Variable Model for Intrinsic Probing [93.62808331764072]
We propose a novel latent-variable formulation for constructing intrinsic probes.
We find empirical evidence that pre-trained representations develop a cross-lingually entangled notion of morphosyntax.
arXiv Detail & Related papers (2022-01-20T15:01:12Z) - Contextualized Semantic Distance between Highly Overlapped Texts [85.1541170468617]
Overlapping frequently occurs in paired texts in natural language processing tasks like text editing and semantic similarity evaluation.
This paper aims to address the issue with a mask-and-predict strategy.
We take the words in the longest common subsequence as neighboring words and use masked language modeling (MLM) to predict the distributions on their positions.
Experiments on Semantic Textual Similarity show the proposed neighboring distribution divergence (NDD) to be more sensitive to various semantic differences, especially on highly overlapped paired texts.
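A rough sketch of the mask-and-predict idea: mask a shared word in both texts, read the masked-LM distributions at that position, and compare them with a divergence. The model choice, the single masked position, and the use of KL divergence below are simplifying assumptions, not the paper's exact NDD formulation.

```python
import torch
import torch.nn.functional as F
from transformers import AutoModelForMaskedLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def masked_dist(text: str, word: str) -> torch.Tensor:
    """MLM log-distribution at the position of `word`, masked out."""
    masked = text.replace(word, tok.mask_token, 1)
    inputs = tok(masked, return_tensors="pt")
    pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, pos]
    return F.log_softmax(logits, dim=-1)

# "movie" lies on the longest common subsequence of both texts.
p = masked_dist("a great movie about friendship", "movie")
q = masked_dist("a terrible movie about friendship", "movie")
# Divergence between the two predicted neighbor distributions.
print(F.kl_div(q, p, log_target=True, reduction="sum").item())
```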
arXiv Detail & Related papers (2021-10-04T03:59:15Z) - Pairwise Supervised Contrastive Learning of Sentence Representations [20.822509446824125]
PairSupCon aims to bridge semantic entailment and contradiction understanding with high-level categorical concept encoding.
We evaluate it on various downstream tasks that involve understanding sentence semantics at different granularities.
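As a hedged illustration of the pairwise contrastive idea, the sketch below implements a generic InfoNCE-style objective where an entailment pair acts as the positive and other embeddings as negatives; PairSupCon's actual objective combines this with instance discrimination and differs in detail.

```python
import torch
import torch.nn.functional as F

def pairwise_contrastive_loss(anchor, positive, negatives, temp=0.05):
    """Generic InfoNCE-style sketch: pull an entailment pair together,
    push contradiction (and other) embeddings away."""
    anchor = F.normalize(anchor, dim=-1)
    pos = F.normalize(positive, dim=-1)
    negs = F.normalize(negatives, dim=-1)
    pos_sim = (anchor * pos).sum(-1, keepdim=True) / temp  # (B, 1)
    neg_sim = anchor @ negs.T / temp                       # (B, N)
    logits = torch.cat([pos_sim, neg_sim], dim=-1)
    # The positive always sits at index 0 of the logits.
    targets = torch.zeros(len(anchor), dtype=torch.long)
    return F.cross_entropy(logits, targets)

b, n, d = 8, 16, 128
loss = pairwise_contrastive_loss(torch.randn(b, d), torch.randn(b, d),
                                 torch.randn(n, d))
print(loss.item())
```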
arXiv Detail & Related papers (2021-09-12T04:12:16Z) - Searching for an Effective Defender: Benchmarking Defense against
Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z)