Intergenerational Test Generation for Natural Language Processing
Applications
- URL: http://arxiv.org/abs/2302.10499v2
- Date: Sat, 29 Jul 2023 02:12:42 GMT
- Title: Intergenerational Test Generation for Natural Language Processing
Applications
- Authors: Pin Ji, Yang Feng, Weitao Huang, Jia Liu, Zhihong Zhao
- Abstract summary: We propose an automated test generation method for detecting erroneous behaviors of various NLP applications.
We implement this method into NLPLego, which is designed to fully exploit the potential of seed sentences.
NLPLego successfully detects 1,732, 5,301, and 261,879 incorrect behaviors with around 95.7% precision in three tasks.
- Score: 16.63835131985415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of modern NLP applications often relies on various benchmark
datasets containing numerous manually labeled tests to evaluate performance.
While constructing these datasets consumes substantial resources, the performance on the
held-out data may not properly reflect a model's capability in real-world
application scenarios, which can cause serious misunderstanding and monetary
loss. To alleviate this problem, in this paper, we propose an automated test
generation method for detecting erroneous behaviors of various NLP
applications. Our method is designed based on the sentence parsing process of
classic linguistics, and thus it is capable of assembling basic grammatical
elements and adjuncts into a grammatically correct test with proper oracle
information. We implement this method into NLPLego, which is designed to fully
exploit the potential of seed sentences to automate the test generation.
NLPLego disassembles the seed sentence into the template and adjuncts and then
generates new sentences by assembling context-appropriate adjuncts with the
template in a specific order. Unlike task-specific methods, the tests
generated by NLPLego have derivation relations and different degrees of
variation, which makes constructing appropriate metamorphic relations easier.
Thus, NLPLego is general, meaning it can meet the testing requirements of
various NLP applications. To validate NLPLego, we experiment with three common
NLP tasks, identifying failures in four state-of-the-art models. Given seed tests
from SQuAD 2.0, SST, and QQP, NLPLego successfully detects 1,732, 5,301, and
261,879 incorrect behaviors with around 95.7% precision in the three tasks,
respectively.
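The disassemble-and-assemble procedure described above can be pictured with a short sketch. This is a minimal illustration of the idea as stated in the abstract, not NLPLego's actual interface: the names (disassemble, assemble, predict_sentiment) and the hand-supplied adjunct list are hypothetical, and a real implementation would identify adjuncts via sentence parsing.

```python
# Minimal sketch of the disassemble-and-assemble idea (hypothetical names,
# not NLPLego's actual API).
from dataclasses import dataclass


@dataclass
class Decomposition:
    template: str        # core sentence with adjuncts stripped
    adjuncts: list[str]  # detachable modifiers, in their original order


def disassemble(seed: str, adjuncts: list[str]) -> Decomposition:
    """Strip known adjuncts from a seed sentence, leaving a template.
    A real implementation would locate adjuncts via sentence parsing;
    here they are supplied by hand to keep the sketch self-contained."""
    template = seed
    for adjunct in adjuncts:
        template = template.replace(", " + adjunct, "")
    return Decomposition(template, adjuncts)


def assemble(decomposition: Decomposition, k: int) -> str:
    """Reattach the first k adjuncts in order, yielding a derived test
    whose relation to the template (its derivation relation) is known."""
    sentence = decomposition.template.rstrip(".")
    for adjunct in decomposition.adjuncts[:k]:
        sentence += ", " + adjunct
    return sentence + "."


seed = "The movie was delightful, despite a slow start, according to critics."
dec = disassemble(seed, ["despite a slow start", "according to critics"])

# Derived tests with increasing degrees of variation from the template.
tests = [assemble(dec, k) for k in range(len(dec.adjuncts) + 1)]


def predict_sentiment(sentence: str) -> str:
    return "positive"  # stand-in for the model under test (e.g., SST-trained)


# Metamorphic relation: attaching meaning-preserving adjuncts should not
# flip the label predicted for the bare template.
base_label = predict_sentiment(tests[0])
for test in tests[1:]:
    label = predict_sentiment(test)
    print("OK" if label == base_label else "VIOLATION", "-", test)
```

Because each derived test differs from the template by a known set of adjuncts, the derivation relation itself supplies the test oracle: a prediction that holds for the bare template should survive the addition of meaning-preserving adjuncts, with no per-test manual labeling required.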
Related papers
- Benchmarking Uncertainty Quantification Methods for Large Language Models with LM-Polygraph [85.51252685938564]
Uncertainty quantification (UQ) is becoming increasingly recognized as a critical component of applications that rely on machine learning (ML).
As with other ML models, large language models (LLMs) are prone to making incorrect predictions, "hallucinating" by fabricating claims, or simply generating low-quality output for a given input.
We introduce a novel benchmark that implements a collection of state-of-the-art UQ baselines, and provides an environment for controllable and consistent evaluation of novel techniques.
arXiv Detail & Related papers (2024-06-21T20:06:31Z)
- LLM-Powered Test Case Generation for Detecting Tricky Bugs [30.82169191775785]
AID generates test inputs and oracles targeting plausibly correct programs.
We evaluate AID on two large-scale datasets with tricky bugs: TrickyBugs and EvalPlus.
The evaluation results show that the recall, precision, and F1 score of AID outperform the state-of-the-art by up to 1.80x, 2.65x, and 1.66x, respectively.
arXiv Detail & Related papers (2024-04-16T06:20:06Z)
- Code-Aware Prompting: A study of Coverage Guided Test Generation in Regression Setting using LLM [32.44432906540792]
We present SymPrompt, a code-aware prompting strategy for large language models in test generation.
SymPrompt enhances correct test generations by a factor of 5 and bolsters relative coverage by 26% for CodeGen2.
Notably, when applied to GPT-4, SymPrompt improves coverage by over 2x compared to baseline prompting strategies.
arXiv Detail & Related papers (2024-01-31T18:21:49Z)
- Generative Judge for Evaluating Alignment [84.09815387884753]
We propose a generative judge with 13B parameters, Auto-J, designed to address these challenges.
Our model is trained on user queries and LLM-generated responses under massive real-world scenarios.
Experimentally, Auto-J outperforms a series of strong competitors, including both open-source and closed-source models.
arXiv Detail & Related papers (2023-10-09T07:27:15Z)
- Reranking for Natural Language Generation from Logical Forms: A Study based on Large Language Models [47.08364281023261]
Large language models (LLMs) have demonstrated impressive capabilities in natural language generation.
However, their output quality can be inconsistent, posing challenges for generating natural language from logical forms (LFs).
arXiv Detail & Related papers (2023-09-21T17:54:58Z)
- Recitation-Augmented Language Models [85.30591349383849]
We show that RECITE is a powerful paradigm for knowledge-intensive NLP tasks.
Specifically, we show that by utilizing recitation as the intermediate step, a recite-and-answer scheme can achieve new state-of-the-art performance.
arXiv Detail & Related papers (2022-10-04T00:49:20Z)
- AEON: A Method for Automatic Evaluation of NLP Test Cases [37.71980769922552]
We use AEON to evaluate test cases generated by four popular testing techniques on five datasets across three typical NLP tasks.
AEON achieves the best average precision in detecting semantic inconsistent test cases, outperforming the best baseline metric by 10%.
AEON also has the highest average precision of finding unnatural test cases, surpassing the baselines by more than 15%.
arXiv Detail & Related papers (2022-05-13T03:47:13Z)
- Zero-Shot Cross-lingual Semantic Parsing [56.95036511882921]
We study cross-lingual semantic parsing as a zero-shot problem without parallel data for 7 test languages.
We propose a multi-task encoder-decoder model to transfer parsing knowledge to additional languages using only paired English-logical form data.
Our system frames zero-shot parsing as a latent-space alignment problem and finds that pre-trained models can be improved to generate logical forms with minimal cross-lingual transfer penalty.
arXiv Detail & Related papers (2021-04-15T16:08:43Z)
- Language Models are Few-Shot Learners [61.36677350504291]
We show that scaling up language models greatly improves task-agnostic, few-shot performance.
We train GPT-3, an autoregressive language model with 175 billion parameters, and test its performance in the few-shot setting.
GPT-3 achieves strong performance on many NLP datasets, including translation, question-answering, and cloze tasks.
arXiv Detail & Related papers (2020-05-28T17:29:03Z)
- Beyond Accuracy: Behavioral Testing of NLP models with CheckList [66.42971817954806]
CheckList is a task-agnostic methodology for testing NLP models.
CheckList includes a matrix of general linguistic capabilities and test types that facilitate comprehensive test ideation.
In a user study, NLP practitioners with CheckList created twice as many tests, and found almost three times as many bugs as users without it.
arXiv Detail & Related papers (2020-05-08T15:48:31Z)
- Probing the Natural Language Inference Task with Automated Reasoning Tools [6.445605125467574]
The Natural Language Inference (NLI) task is an important task in modern NLP, as it asks a broad question to which many other tasks may be reducible.
We use automated reasoning techniques to examine the logical structure of the NLI task.
We show how well a machine-oriented controlled natural language can be used to parse NLI sentences, and how well automated theorem provers can reason over the resulting formulae.
arXiv Detail & Related papers (2020-05-06T03:18:11Z)