A Regression Testing Framework with Automated Assertion Generation for Machine Learning Notebooks
- URL: http://arxiv.org/abs/2509.13656v1
- Date: Wed, 17 Sep 2025 03:05:16 GMT
- Title: A Regression Testing Framework with Automated Assertion Generation for Machine Learning Notebooks
- Authors: Yingao Elaine Yao, Vedant Nimje, Varun Viswanath, Saikat Dutta
- Abstract summary: We introduce NBTest - the first regression testing framework that allows developers to write cell-level assertions in notebooks. NBTest offers a library of assertion APIs and a JupyterLab plugin that enables executing assertions. We evaluate NBTest on 592 Kaggle notebooks.
- Score: 2.5834567990387565
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Notebooks have become the de facto choice for data scientists and machine learning engineers for prototyping and experimenting with machine learning (ML) pipelines. Notebooks provide an interactive interface for code, data, and visualization, but they provide very limited support for testing. Thus, during continuous development, many subtle bugs that do not lead to crashes often go unnoticed and cause silent errors that manifest as performance regressions. To address this, we introduce NBTest - the first regression testing framework that allows developers to write cell-level assertions in notebooks and run such notebooks in pytest or in continuous integration (CI) pipelines. NBTest offers a library of assertion APIs and a JupyterLab plugin that enables executing assertions. We also develop the first automated approach for generating cell-level assertions for key components in ML notebooks, such as data processing, model building, and model evaluation. NBTest aims to improve the reliability and maintainability of ML notebooks without adding developer burden. We evaluate NBTest on 592 Kaggle notebooks. Overall, NBTest generates 21,163 assertions (35.75 on average per notebook). The generated assertions obtain a mutation score of 0.57 in killing ML-specific mutations. NBTest can catch regression bugs in previous versions of the Kaggle notebooks using assertions generated for the latest versions. Because ML pipelines involve non-deterministic computations, the assertions can be flaky. Hence, we also show how NBTest leverages statistical techniques to minimize flakiness while retaining high fault-detection effectiveness. NBTest has been adopted in the CI of a popular ML library. Further, we perform a user study with 17 participants that shows that notebook users find NBTest intuitive (Rating 4.3/5) and useful in writing assertions and testing notebooks (Rating 4.24/5).
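To make the idea of cell-level, tolerance-based assertions concrete, here is a minimal sketch of what such a check on a model-evaluation cell could look like. The `assert_between` helper and the hard-coded bounds are illustrative assumptions for this sketch, not NBTest's actual assertion API; they only mirror the approach the abstract describes, where assertions tolerate non-deterministic training noise but still fail on real regressions.

```python
# Illustrative sketch only: `assert_between` is a hypothetical stand-in,
# not NBTest's real API. It shows the core idea of a cell-level assertion
# that tolerates training noise by checking a metric against a range.

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def assert_between(value, lower, upper, name="metric"):
    """Hypothetical cell-level assertion: fail if `value` leaves [lower, upper]."""
    assert lower <= value <= upper, (
        f"{name}={value:.4f} outside expected range [{lower}, {upper}]"
    )


# --- model-evaluation cell ---
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))

# In the spirit of the paper, the bounds would be derived statistically from
# repeated runs of the cell (e.g., a confidence interval over observed
# accuracies) so the check is not flaky yet still flags true regressions.
assert_between(acc, lower=0.90, upper=1.00, name="test accuracy")
```

In a real setup the range would come from the framework's statistical analysis over multiple executions; the fixed 0.90 lower bound above is purely for illustration.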
Related papers
- ImpossibleBench: Measuring LLMs' Propensity of Exploiting Test Cases [58.411135609139855]
"Shortcuts" to complete tasks pose significant risks for reliable assessment and deployment of large language models.<n>We introduce ImpossibleBench, a benchmark framework that measures LLM agents' propensity to exploit test cases.<n>As a practical framework, ImpossibleBench is not just an evaluation but a versatile tool.
arXiv Detail & Related papers (2025-10-23T06:58:32Z)
- JunoBench: A Benchmark Dataset of Crashes in Python Machine Learning Jupyter Notebooks [4.768285672660128]
We introduce JunoBench, the first benchmark dataset of real-world crashes in Python-based ML notebooks. JunoBench includes 111 curated and reproducible crashes with verified fixes from public Kaggle notebooks.
arXiv Detail & Related papers (2025-10-20T18:46:43Z)
- Learning to Generate Unit Tests for Automated Debugging [52.63217175637201]
Unit tests (UTs) play an instrumental role in assessing code correctness as well as providing feedback to large language models (LLMs). We propose UTGen, which teaches LLMs to generate unit test inputs that reveal errors along with their correct expected outputs. We show that UTGen outperforms other LLM-based baselines by 7.59% based on a metric measuring the presence of both error-revealing UT inputs and correct UT outputs.
arXiv Detail & Related papers (2025-02-03T18:51:43Z)
- AugmenTest: Enhancing Tests with LLM-Driven Oracles [2.159639193866661]
AugmenTest is an approach leveraging Large Language Models to infer correct test oracles based on available documentation of the software under test. AugmenTest includes four variants: Simple Prompt, Extended Prompt, RAG with a generic prompt (without the context of the class or method under test), and RAG with Simple Prompt, each offering different levels of contextual information to the LLMs. Results show that in the most conservative scenario, AugmenTest's Extended Prompt consistently outperformed the Simple Prompt, achieving a success rate of 30% for generating correct assertions.
arXiv Detail & Related papers (2025-01-29T07:45:41Z)
- STAMP: Outlier-Aware Test-Time Adaptation with Stable Memory Replay [76.06127233986663]
Test-time adaptation (TTA) aims to address the distribution shift between the training and test data with only unlabeled data at test time.
This paper addresses the problem of performing both sample recognition and outlier rejection during inference when outliers exist.
We propose a new approach called STAble Memory rePlay (STAMP), which performs optimization over a stable memory bank instead of the risky mini-batch.
arXiv Detail & Related papers (2024-07-22T16:25:41Z)
- GPT-HateCheck: Can LLMs Write Better Functional Tests for Hate Speech Detection? [50.53312866647302]
HateCheck is a suite for testing fine-grained model functionalities on synthesized data.
We propose GPT-HateCheck, a framework to generate more diverse and realistic functional tests from scratch.
Crowd-sourced annotation demonstrates that the generated test cases are of high quality.
arXiv Detail & Related papers (2024-02-23T10:02:01Z)
- Observation-based unit test generation at Meta [52.4716552057909]
TestGen automatically generates unit tests carved from serialized observations of complex objects captured during app execution.
TestGen has landed 518 tests into production, which have been executed 9,617,349 times in continuous integration, finding 5,702 faults.
Our evaluation reveals that, when carving its observations from 4,361 reliable end-to-end tests, TestGen was able to generate tests for at least 86% of the classes covered by end-to-end tests.
arXiv Detail & Related papers (2024-02-09T00:34:39Z)
- Effective Test Generation Using Pre-trained Large Language Models and Mutation Testing [13.743062498008555]
We introduce MuTAP for improving the effectiveness of test cases generated by Large Language Models (LLMs) in terms of revealing bugs.
MuTAP is capable of generating effective test cases in the absence of natural language descriptions of the Program Under Test (PUT).
Our results show that our proposed method is able to detect up to 28% more faulty human-written code snippets.
arXiv Detail & Related papers (2023-08-31T08:48:31Z)
- AdaNPC: Exploring Non-Parametric Classifier for Test-Time Adaptation [64.9230895853942]
Domain generalization can be arbitrarily hard without exploiting target domain information.
Test-time adaptive (TTA) methods are proposed to address this issue.
In this work, we adopt a Non-Parametric Classifier to perform test-time Adaptation (AdaNPC).
arXiv Detail & Related papers (2023-04-25T04:23:13Z)
- An Empirical Evaluation of Using Large Language Models for Automated Unit Test Generation [3.9762912548964864]
This paper presents a large-scale empirical evaluation of the effectiveness of Large Language Models for automated unit test generation.
We implement our approach in TestPilot, a test generation tool for JavaScript that automatically generates unit tests for all API functions in an npm package.
We find that 92.8% of TestPilot's generated tests have no more than 50% similarity with existing tests.
arXiv Detail & Related papers (2023-02-13T17:13:41Z)
- Large Language Models are Few-shot Testers: Exploring LLM-based General Bug Reproduction [14.444294152595429]
The number of tests added in open source repositories due to issues was about 28% of the corresponding project test suite size.
We propose LIBRO, a framework that uses Large Language Models (LLMs), which have been shown to be capable of performing code-related tasks.
Our evaluation of LIBRO shows that, on the widely studied Defects4J benchmark, LIBRO can generate failure reproducing test cases for 33% of all studied cases.
arXiv Detail & Related papers (2022-09-23T10:50:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.