Searching for a Search Method: Benchmarking Search Algorithms for
Generating NLP Adversarial Examples
- URL: http://arxiv.org/abs/2009.06368v2
- Date: Mon, 12 Oct 2020 19:46:36 GMT
- Title: Searching for a Search Method: Benchmarking Search Algorithms for
Generating NLP Adversarial Examples
- Authors: Jin Yong Yoo, John X. Morris, Eli Lifland, Yanjun Qi
- Abstract summary: We study the behavior of several black-box search algorithms used for generating adversarial examples for natural language processing (NLP) tasks.
We perform a fine-grained analysis of three elements relevant to search: search algorithm, search space, and search budget.
- Score: 10.993342896547691
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the behavior of several black-box search algorithms used for
generating adversarial examples for natural language processing (NLP) tasks. We
perform a fine-grained analysis of three elements relevant to search: search
algorithm, search space, and search budget. When new search algorithms are
proposed in past work, the attack search space is often modified alongside the
search algorithm. Without ablation studies benchmarking the search algorithm
change with the search space held constant, one cannot tell if an increase in
attack success rate is a result of an improved search algorithm or a less
restrictive search space. Additionally, many previous studies fail to properly
consider the search algorithms' run-time cost, which is essential for
downstream tasks like adversarial training. Our experiments provide a
reproducible benchmark of search algorithms across a variety of search spaces
and query budgets to guide future research in adversarial NLP. Based on our
experiments, we recommend greedy attacks with word importance ranking when
under a time constraint or attacking long inputs, and either beam search or
particle swarm optimization otherwise. Code implementation shared via
https://github.com/QData/TextAttack-Search-Benchmark
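The recommended greedy attack with word importance ranking can be sketched as follows. The victim model, word weights, and synonym source below are toy stand-ins chosen for illustration, not the TextAttack implementation:

```python
# Sketch of a greedy attack with word importance ranking (WIR).
# The victim model and synonym source below are toy stand-ins,
# not the TextAttack API.

WEIGHTS = {"terrible": 0.4, "boring": 0.4, "mediocre": 0.1, "slow": 0.1}

def predict_proba(words):
    """Toy victim model: confidence in the correct (negative) label."""
    return 0.1 + sum(WEIGHTS.get(w, 0.0) for w in words)

def get_synonyms(word):
    """Toy synonym source standing in for WordNet or embedding neighbors."""
    return {"terrible": ["mediocre", "average"], "boring": ["slow"]}.get(word, [])

def greedy_wir_attack(words, query_budget=100):
    queries = 1
    base = predict_proba(words)
    # Step 1: rank words by importance, measured as the confidence
    # drop when the word is deleted.
    importance = []
    for i in range(len(words)):
        reduced = words[:i] + words[i + 1:]
        importance.append((base - predict_proba(reduced), i))
        queries += 1
    importance.sort(reverse=True)
    # Step 2: greedily substitute words, most important first,
    # keeping any swap that lowers the model's confidence.
    adv, score = list(words), base
    for _, i in importance:
        for cand in get_synonyms(adv[i]):
            if queries >= query_budget:
                return adv, score
            trial = adv[:i] + [cand] + adv[i + 1:]
            s = predict_proba(trial)
            queries += 1
            if s < score:
                adv, score = trial, s
    return adv, score

adv, score = greedy_wir_attack("the movie was terrible and boring".split())
```

The deletion-based ranking costs one query per word up front, which is why this strategy stays cheap on long inputs compared to population-based methods like particle swarm optimization.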
Related papers
- RethinkMCTS: Refining Erroneous Thoughts in Monte Carlo Tree Search for Code Generation [65.5353313491402]
We introduce RethinkMCTS, which employs the Monte Carlo Tree Search (MCTS) algorithm to conduct thought-level searches before generating code.
We construct verbal feedback from fine-grained code execution feedback to refine erroneous thoughts during the search.
We demonstrate that RethinkMCTS outperforms previous search-based and feedback-based code generation baselines.
arXiv Detail & Related papers (2024-09-15T02:07:28Z)
- LiteSearch: Efficacious Tree Search for LLM [70.29796112457662]
This study introduces a novel guided tree search algorithm with dynamic node selection and node-level exploration budget.
Experiments conducted on the GSM8K and TabMWP datasets demonstrate that our approach enjoys significantly lower computational costs compared to baseline methods.
arXiv Detail & Related papers (2024-06-29T05:14:04Z)
- Rectangle Search: An Anytime Beam Search (Extended Version) [9.59799149404787]
Anytime search algorithms try to find a (potentially suboptimal) solution as quickly as possible.
We propose a new algorithm, rectangle search, that is instead based on beam search.
arXiv Detail & Related papers (2023-12-19T19:50:45Z)
- CrossBeam: Learning to Search in Bottom-Up Program Synthesis [51.37514793318815]
We propose training a neural model to learn a hands-on search policy for bottom-up synthesis.
Our approach, called CrossBeam, uses the neural model to choose how to combine previously-explored programs into new programs.
We observe that CrossBeam learns to search efficiently, exploring much smaller portions of the program space compared to the state-of-the-art.
arXiv Detail & Related papers (2022-03-20T04:41:05Z)
- Exploring Complicated Search Spaces with Interleaving-Free Sampling [127.07551427957362]
In this paper, we build the search algorithm upon a complicated search space with long-distance connections.
We present a simple yet effective algorithm named IF-NAS, where we perform a periodic sampling strategy to construct different sub-networks.
In the proposed search space, IF-NAS outperforms both random sampling and previous weight-sharing search algorithms by a significant margin.
arXiv Detail & Related papers (2021-12-05T06:42:48Z)
- Don't Search for a Search Method -- Simple Heuristics Suffice for Adversarial Text Attacks [11.196974000738729]
We implement an algorithm inspired by zeroth order optimization-based attacks and compare with the benchmark results in the TextAttack framework.
Surprisingly, we find that optimization-based methods do not yield any improvement in a constrained setup.
We conclude from these results that current TextAttack benchmark tasks are too easy and constraints are too strict, preventing meaningful research on black-box adversarial text attacks.
arXiv Detail & Related papers (2021-09-16T12:22:17Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- A Meta-Heuristic Search Algorithm based on Infrasonic Mating Displays in Peafowls [0.0]
Simplistic methods such as exhaustive search become computationally expensive and unreliable as the solution space for search algorithms increases.
This research proposes an Infrasonic Search Algorithm, inspired by the Gravitational Search Algorithm and the mating behaviour of peafowls.
arXiv Detail & Related papers (2021-06-28T09:04:51Z)
- Quantum Search with Prior Knowledge [15.384459603233978]
We propose a new generalization of Grover's search algorithm which performs better than the standard Grover algorithm on average under this setting.
We prove that our new algorithm achieves the optimal expected success probability of finding the solution if the number of queries is fixed.
arXiv Detail & Related papers (2020-09-18T09:50:33Z)
- Best-First Beam Search [78.71330480725668]
We show that the standard implementation of beam search can be made up to 10x faster in practice.
We propose a memory-reduced variant of Best-First Beam Search, which has a similar beneficial search bias in terms of downstream performance.
arXiv Detail & Related papers (2020-07-08T05:56:01Z)
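As background for the Best-First Beam Search entry above, standard beam search (the baseline it accelerates) can be sketched as follows. The vocabulary, probabilities, and `next_scores` model are illustrative assumptions, not the paper's implementation:

```python
import heapq
import math

# Minimal standard beam search over a toy next-token model.

VOCAB = {"a": 0.5, "b": 0.3, "c": 0.2}

def next_scores(prefix):
    """Toy model: repeating the previous token is heavily penalized."""
    return {tok: math.log(p * (0.1 if prefix.endswith(tok) else 1.0))
            for tok, p in VOCAB.items()}

def beam_search(beam_width=2, max_len=3):
    beams = [(0.0, "")]  # (cumulative log-probability, sequence)
    for _ in range(max_len):
        candidates = []
        for score, seq in beams:
            for tok, logp in next_scores(seq).items():
                candidates.append((score + logp, seq + tok))
        # Keep only the beam_width highest-scoring hypotheses; Best-First
        # Beam Search instead orders expansions globally by score so that
        # low-scoring hypotheses need not be expanded at all.
        beams = heapq.nlargest(beam_width, candidates)
    return max(beams)

best_score, best_seq = beam_search()
```

Because every hypothesis in a beam is expanded at each step, the standard loop does wasted work on hypotheses that can never win; the speedup in the paper comes from avoiding those expansions while returning the same result.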
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.