Concolic Testing of JavaScript using Sparkplug
- URL: http://arxiv.org/abs/2405.06832v1
- Date: Fri, 10 May 2024 22:11:53 GMT
- Title: Concolic Testing of JavaScript using Sparkplug
- Authors: Zhe Li, Fei Xie
- Abstract summary: In-situ concolic testing for JS is effective but slow and complex.
Our method enhances tracing with the V8 Sparkplug baseline compiler and the remill library for assembly-to-LLVM-IR conversion.
- Score: 6.902028735328818
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: JavaScript is prevalent in web and server apps, handling sensitive data. JS testing methods lag behind those for other languages. In-situ concolic testing for JS is effective but slow and complex. Our method enhances tracing with the V8 Sparkplug baseline compiler and the remill library for assembly-to-LLVM-IR conversion. Evaluation on 160 Node.js libraries reveals comparable coverage and bug detection in significantly less time than the in-situ method.
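To make the approach concrete, here is a minimal sketch of a generic concolic-testing loop in Python with the z3 solver: the program runs on concrete inputs while the branch conditions along the executed path are recorded symbolically, and negated path prefixes are handed to the solver to obtain inputs that steer later runs down unexplored paths. The toy target function and its hand-inlined constraint recording are illustrative assumptions; the paper's actual pipeline traces Sparkplug-generated assembly lifted to LLVM IR via remill.

```python
# Minimal generic concolic-testing loop (illustrative; not the paper's
# Sparkplug/remill pipeline). Requires: pip install z3-solver
from z3 import Int, Solver, Not, sat

x = Int("x")  # symbolic counterpart of the program input

def run(cx):
    """Run the toy program on concrete input cx while recording the
    symbolic branch conditions along the executed path."""
    path = []
    if cx > 10:
        path.append(x > 10)
        if cx < 20:
            path.append(x < 20)
            return "bug", path      # the path we want inputs to reach
        path.append(Not(x < 20))
        return "high", path
    path.append(Not(x > 10))
    return "low", path

def solve(conds):
    """Ask z3 for a concrete input satisfying the given path conditions."""
    s = Solver()
    s.add(*conds)
    return s.model()[x].as_long() if s.check() == sat else None

seen, queue = {0}, [0]
while queue:
    cx = queue.pop()
    outcome, path = run(cx)
    print(f"x={cx} -> {outcome}")
    for i in range(len(path)):
        # Negate the i-th branch, keep the prefix: explore a sibling path.
        nxt = solve(path[:i] + [Not(path[i])])
        if nxt is not None and nxt not in seen:
            seen.add(nxt)
            queue.append(nxt)
```

Each run both tests the program and generates new inputs; the loop terminates once every recorded branch has been flipped for every discovered path.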
Related papers
- Mutation-Based Deep Learning Framework Testing Method in JavaScript Environment [16.67312523556796]
We propose DLJSFuzzer, a mutation-based testing method for deep learning frameworks in the JavaScript environment.
DLJSFuzzer successfully detects 21 unique crashes and 126 unique NaN & Inconsistency bugs.
Compared to all baselines, DLJSFuzzer improves model generation efficiency by over 47% and bug detection efficiency by over 91%.
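As a rough, hypothetical illustration of what this style of mutation-based testing involves (the two stand-in backends and the mutation operator below are invented for this sketch, not DLJSFuzzer internals): mutate a model configuration, run it on two implementations of the same operator, and flag crashes, NaNs, and cross-implementation inconsistencies.

```python
# Hypothetical mutation-based differential-testing loop; the "backends"
# and the mutation operator are toy stand-ins, not DLJSFuzzer internals.
import math
import random

def backend_a(cfg):
    # Stand-in framework A: sigmoid computed as exp(s) / (1 + exp(s))
    return math.exp(cfg["scale"]) / (1 + math.exp(cfg["scale"]))

def backend_b(cfg):
    # Stand-in framework B: same operator, different numerics
    return 1 / (1 + math.exp(-cfg["scale"]))

def mutate(cfg):
    out = dict(cfg)
    out["scale"] *= random.choice([-1.0, 10.0, 100.0])  # simple value mutation
    return out

random.seed(0)
cfg = {"scale": 1.0}
for _ in range(30):
    cfg = mutate(cfg)
    try:
        a, b = backend_a(cfg), backend_b(cfg)
    except OverflowError:                 # a "crash" finding
        print("crash:", cfg)
        cfg = {"scale": 1.0}              # reset the seed and keep fuzzing
        continue
    if math.isnan(a) or math.isnan(b):
        print("NaN bug:", cfg)
    elif abs(a - b) > 1e-6:
        print("inconsistency:", cfg, a, b)
```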
arXiv Detail & Related papers (2024-09-23T12:37:56Z)
- jscefr: A Framework to Evaluate the Code Proficiency for JavaScript [1.7174932174564534]
jscefr (pronounced jes-cee-fer) is a tool that detects the use of different elements of the JavaScript (JS) language.
jscefr categorizes JS code into six levels based on proficiency.
arXiv Detail & Related papers (2024-08-29T11:37:49Z)
- CRUXEval-X: A Benchmark for Multilingual Code Reasoning, Understanding and Execution [50.7413285637879]
The CRUXEval-X code reasoning benchmark covers 19 programming languages.
It comprises at least 600 subjects for each language, along with 19K content-consistent tests in total.
Even a model trained solely on Python can achieve at most 34.4% Pass@1 in other languages.
arXiv Detail & Related papers (2024-08-23T11:43:00Z)
- DistML.js: Installation-free Distributed Deep Learning Framework for Web Browsers [40.48978035180545]
"DistML.js" is a library designed for training and inference of machine learning models within web browsers.
We provide a comprehensive explanation of DistML.js's design, API, and implementation, alongside practical applications.
arXiv Detail & Related papers (2024-07-01T07:13:14Z)
- Long Code Arena: a Set of Benchmarks for Long-Context Code Models [75.70507534322336]
Long Code Arena is a suite of six benchmarks for code processing tasks that require project-wide context.
These tasks cover different aspects of code processing: library-based code generation, CI builds repair, project-level code completion, commit message generation, bug localization, and module summarization.
For each task, we provide a manually verified dataset for testing, an evaluation suite, and open-source baseline solutions.
arXiv Detail & Related papers (2024-06-17T14:58:29Z)
- Blocking Tracking JavaScript at the Function Granularity [15.86649576818013]
Not.js is a fine-grained JavaScript blocking tool that operates at function-level granularity.
Not.js trains a supervised machine learning classifier on a webpage's graph representation to first detect tracking at the JavaScript function level.
Not.js then automatically generates surrogate scripts that preserve functionality while removing tracking.
arXiv Detail & Related papers (2024-05-28T17:26:57Z)
- FV8: A Forced Execution JavaScript Engine for Detecting Evasive Techniques [53.288368877654705]
FV8 is a modified V8 JavaScript engine designed to identify evasion techniques in JavaScript code.
It selectively forces code execution on APIs that conditionally inject dynamic code.
It identifies 1,443 npm packages and 164 (82%) extensions containing at least one type of evasion.
arXiv Detail & Related papers (2024-05-21T19:54:19Z)
- CrashJS: A NodeJS Benchmark for Automated Crash Reproduction [4.3560886861249255]
Software bugs often lead to software crashes, which cost US companies upwards of $2.08 trillion annually.
Automated Crash Reproduction aims to generate unit tests that successfully reproduce a crash.
CrashJS is a benchmark dataset of 453 Node.js crashes from several sources.
arXiv Detail & Related papers (2024-05-09T04:57:10Z)
- REST: Retrieval-Based Speculative Decoding [69.06115086237207]
We introduce Retrieval-Based Speculative Decoding (REST), a novel algorithm designed to speed up language model generation.
Unlike previous methods that rely on a draft language model for speculative decoding, REST harnesses the power of retrieval to generate draft tokens.
When benchmarked on 7B and 13B language models in a single-batch setting, REST achieves a significant speedup of 1.62X to 2.36X on code or text generation.
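A rough sketch of the retrieval-based drafting idea, under two stated assumptions: a toy token corpus serves as the datastore, and a deterministic bigram lookup stands in for the target language model. Draft tokens are the continuation of the longest context suffix found in the datastore, and the target model accepts the longest agreeing prefix before emitting one token of its own.

```python
# Illustrative sketch of retrieval-based speculative decoding; the corpus
# and the bigram "target model" are toy stand-ins, not REST's implementation.
CORPUS = ("the quick brown fox jumps over the lazy dog . "
          "the quick brown cat sleeps").split()

def target_next(context):
    """Toy deterministic target model: first-occurrence bigram lookup.
    Stands in for one forward pass of the real language model."""
    for i in range(len(CORPUS) - 1):
        if CORPUS[i] == context[-1]:
            return CORPUS[i + 1]
    return "."

def retrieve_draft(context, n_draft=4):
    """Propose the tokens that follow the longest suffix of `context`
    found in the datastore (here, CORPUS)."""
    for suf in range(min(3, len(context)), 0, -1):
        pat = context[-suf:]
        for i in range(len(CORPUS) - suf):
            if CORPUS[i:i + suf] == pat:
                return CORPUS[i + suf:i + suf + n_draft]
    return []

def speculative_step(context):
    draft = retrieve_draft(context)
    accepted = []
    # Accept the longest draft prefix the target model agrees with; a real
    # system verifies all draft positions in a single batched forward pass.
    for tok in draft:
        if target_next(context + accepted) != tok:
            break
        accepted.append(tok)
    # Always emit at least one token from the target model itself.
    accepted.append(target_next(context + accepted))
    return accepted

ctx = "the quick".split()
for _ in range(3):
    ctx += speculative_step(ctx)
print(" ".join(ctx))
```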
arXiv Detail & Related papers (2023-11-14T15:43:47Z)
- InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback [50.725076393314964]
We introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning environment.
Our framework is language- and platform-agnostic and uses self-contained Docker environments to provide safe and reproducible execution.
We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies.
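A minimal sketch of the Gym-style interaction loop such a framework exposes, assuming a shell-command action space and a binary task-completion reward; the class below is hypothetical and does not reproduce InterCode's actual API or its Docker sandboxing.

```python
import subprocess

class ToyCodeEnv:
    """Hypothetical Gym-style interactive coding environment (not the real
    InterCode API). Actions are shell commands; reward is 1.0 once the
    task's check command succeeds. InterCode itself sandboxes execution in
    Docker; this toy runs commands directly on the host."""

    def __init__(self, check_cmd):
        self.check_cmd = check_cmd    # shell command exiting 0 when solved
        self.done = False

    def reset(self):
        self.done = False
        return ""                     # initial observation: empty terminal

    def step(self, action):
        out = subprocess.run(action, shell=True, capture_output=True, text=True)
        obs = out.stdout + out.stderr         # observation: command output
        self.done = subprocess.run(self.check_cmd, shell=True).returncode == 0
        return obs, (1.0 if self.done else 0.0), self.done, {}

# Example episode: the (toy) task is to create hello.txt.
env = ToyCodeEnv(check_cmd="test -f hello.txt")
env.reset()
obs, reward, done, info = env.step("echo hi > hello.txt")
print(reward, done)   # 1.0 True on a Unix-like system
```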
arXiv Detail & Related papers (2023-06-26T17:59:50Z)
- torchgfn: A PyTorch GFlowNet library [56.071033896777784]
torchgfn is a PyTorch library that aims to meet the need for a standard GFlowNet implementation.
It provides users with a simple API for environments and useful abstractions for samplers and losses.
arXiv Detail & Related papers (2023-05-24T00:20:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.