jscefr: A Framework to Evaluate the Code Proficiency for JavaScript
- URL: http://arxiv.org/abs/2408.16452v1
- Date: Thu, 29 Aug 2024 11:37:49 GMT
- Title: jscefr: A Framework to Evaluate the Code Proficiency for JavaScript
- Authors: Chaiyong Ragkhitwetsagul, Komsan Kongwongsupak, Thanakrit Maneesawas, Natpichsinee Puttiwarodom, Ruksit Rojpaisarnkit, Morakot Choetkiertikul, Raula Gaikovina Kula, Thanwadee Sunetnanta
- Abstract summary: jscefr (pronounced jes-cee-fer) is a tool that detects the use of different elements of the JavaScript (JS) language.
jscefr categorizes JS code into six levels based on proficiency.
- Score: 1.7174932174564534
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper, we present jscefr (pronounced jes-cee-fer), a tool that detects the use of different elements of the JavaScript (JS) language, effectively measuring the level of proficiency required to comprehend and deal with a fragment of JavaScript code in software maintenance tasks. Based on the pycefr tool, the tool incorporates JavaScript elements and the well-known Common European Framework of Reference for Languages (CEFR) and utilizes the official ECMAScript JavaScript documentation from the Mozilla Developer Network. jscefr categorizes JS code into six levels based on proficiency. jscefr can detect and classify 138 different JavaScript code constructs. To evaluate, we apply our tool to three JavaScript projects of the NPM ecosystem, with interesting results. A video demonstrating the tool's availability and usage is available at https://youtu.be/Ehh-Prq59Pc.
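As a rough illustration of the classification idea described in the abstract, the sketch below maps a handful of JavaScript constructs to CEFR levels and reports the highest level a snippet requires. This is a toy, not jscefr's implementation: the construct list and level assignments here are invented for illustration, whereas the real tool covers 138 constructs derived from the MDN ECMAScript documentation.

```javascript
// Toy classifier in the spirit of jscefr. The construct-to-level mapping
// below is hypothetical; the actual tool's mapping covers 138 constructs.
const LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"];

// Illustrative mapping: regex pattern -> CEFR level index.
const CONSTRUCTS = [
  { name: "variable declaration", pattern: /\b(var|let|const)\b/, level: 0 }, // A1
  { name: "arrow function",       pattern: /=>/,                  level: 2 }, // B1
  { name: "async/await",          pattern: /\basync\b|\bawait\b/, level: 3 }, // B2
  { name: "generator",            pattern: /\bfunction\s*\*/,     level: 4 }, // C1
  { name: "Proxy",                pattern: /\bnew\s+Proxy\b/,     level: 5 }, // C2
];

// Detect which constructs a snippet uses and report the highest level.
function classify(code) {
  const found = CONSTRUCTS.filter(c => c.pattern.test(code));
  const maxLevel = found.reduce((m, c) => Math.max(m, c.level), 0);
  return { constructs: found.map(c => c.name), level: LEVELS[maxLevel] };
}

const result = classify("const fetchAll = async (urls) => Promise.all(urls.map(fetch));");
console.log(result.level); // "B2" under this toy mapping
```

A production tool would parse the code into an AST rather than use regexes, so that constructs are detected syntactically rather than textually.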
Related papers
- Mutation-Based Deep Learning Framework Testing Method in JavaScript Environment [16.67312523556796]
We propose a mutation-based JavaScript DL framework testing method named DLJSFuzzer.
DLJSFuzzer successfully detects 21 unique crashes and 126 unique NaN & Inconsistency bugs.
DLJSFuzzer has improved by over 47% in model generation efficiency and over 91% in bug detection efficiency compared to all baselines.
arXiv Detail & Related papers (2024-09-23T12:37:56Z) - CRUXEval-X: A Benchmark for Multilingual Code Reasoning, Understanding and Execution [50.7413285637879]
The CRUXEVAL-X code reasoning benchmark contains 19 programming languages.
It comprises at least 600 subjects for each language, along with 19K content-consistent tests in total.
Even a model trained solely on Python can achieve at most 34.4% Pass@1 in other languages.
arXiv Detail & Related papers (2024-08-23T11:43:00Z) - Blocking Tracking JavaScript at the Function Granularity [15.86649576818013]
Not.js is a fine grained JavaScript blocking tool that operates at the function level granularity.
Not.js trains a supervised machine learning classifier on a webpage's graph representation to first detect tracking at the JavaScript function level.
Not.js then automatically generates surrogate scripts that preserve functionality while removing tracking.
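The surrogate idea in the Not.js summary above can be sketched minimally: once certain functions are flagged as tracking, a surrogate preserves the API surface while turning the flagged methods into no-ops. This is a hedged illustration only — the names below are hypothetical, and the actual tool identifies tracking functions with a supervised classifier over a page's graph representation rather than a hard-coded set.

```javascript
// Sketch of surrogate generation: keep benign methods, neutralize trackers.
// `trackingFns` stands in for the output of Not.js's classifier.
function makeSurrogate(api, trackingFns) {
  const surrogate = {};
  for (const [name, fn] of Object.entries(api)) {
    surrogate[name] = trackingFns.has(name)
      ? (...args) => undefined // no-op keeps call sites from throwing
      : fn;                    // functional code passes through unchanged
  }
  return surrogate;
}

// Hypothetical analytics API: one benign method, one tracker.
const analytics = {
  formatPrice: (n) => `$${n.toFixed(2)}`,
  reportUser:  (id) => { /* would send id to a tracking endpoint */ },
};

const safe = makeSurrogate(analytics, new Set(["reportUser"]));
console.log(safe.formatPrice(3.5)); // "$3.50" — functionality preserved
console.log(safe.reportUser("u42")); // undefined — tracking neutralized
```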
arXiv Detail & Related papers (2024-05-28T17:26:57Z) - FV8: A Forced Execution JavaScript Engine for Detecting Evasive Techniques [53.288368877654705]
FV8 is a modified V8 JavaScript engine designed to identify evasion techniques in JavaScript code.
It selectively enforces code execution on APIs that conditionally inject dynamic code.
It identifies 1,443 npm packages and 164 (82%) extensions containing at least one type of evasion.
arXiv Detail & Related papers (2024-05-21T19:54:19Z) - Concolic Testing of JavaScript using Sparkplug [6.902028735328818]
In-situ concolic testing for JS is effective but slow and complex.
Our method enhances tracing with the V8 Sparkplug baseline compiler and the remill library for assembly-to-LLVM-IR conversion.
arXiv Detail & Related papers (2024-05-10T22:11:53Z) - A Study of Vulnerability Repair in JavaScript Programs with Large Language Models [2.4622939109173885]
Large Language Models (LLMs) have demonstrated substantial advancements across multiple domains.
Our experiments on real-world software vulnerabilities show that while LLMs are promising in automatic program repair of JavaScript code, achieving a correct bug fix often requires an appropriate amount of context in the prompt.
arXiv Detail & Related papers (2024-03-19T23:04:03Z) - Static Semantics Reconstruction for Enhancing JavaScript-WebAssembly Multilingual Malware Detection [51.15122099046214]
WebAssembly allows attackers to hide the malicious functionalities of JavaScript malware in cross-language interoperations.
The detection of JavaScript-WebAssembly multilingual malware (JWMM) is challenging due to the complex interoperations and semantic diversity between JavaScript and WebAssembly.
We present JWBinder, the first technique aimed at enhancing the static detection of JWMM.
arXiv Detail & Related papers (2023-10-26T10:59:45Z) - GlotScript: A Resource and Tool for Low Resource Writing System Identification [53.56700754408902]
GlotScript is an open resource for low resource writing system identification.
GlotScript-R provides attested writing systems for more than 7,000 languages.
GlotScript-T is a writing system identification tool that covers all 161 Unicode 15.0 scripts.
arXiv Detail & Related papers (2023-09-23T09:35:55Z) - JavaScript Dead Code Identification, Elimination, and Empirical Assessment [13.566269406958966]
We present Lacuna, an approach for automatically detecting and eliminating JavaScript dead code from web apps.
We conduct an experiment to empirically evaluate the run-time overhead of JavaScript dead code in terms of energy consumption, performance, network usage, and resource usage in the context of mobile web apps.
arXiv Detail & Related papers (2023-08-31T13:48:39Z) - ChatIE: Zero-Shot Information Extraction via Chatting with ChatGPT [89.49161588240061]
Zero-shot information extraction (IE) aims to build IE systems from unannotated text.
Recent efforts on large language models (LLMs, e.g., GPT-3, ChatGPT) show promising performance on zero-shot settings.
We transform the zero-shot IE task into a multi-turn question-answering problem with a two-stage framework (ChatIE).
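The two-stage idea summarized above can be sketched as follows: stage one asks the model which relation types a sentence contains; stage two asks, per detected type, for the concrete argument pairs. In this hedged sketch `askLLM` is a stub with canned answers standing in for a real chat-model call (GPT-3/ChatGPT in the paper); the prompts, sentence, and relation types are illustrative assumptions.

```javascript
// Stub standing in for a chat model; answers are canned for illustration.
function askLLM(prompt) {
  if (prompt.includes("Which relation types")) return ["works_for"];
  if (prompt.includes("works_for")) return [["Alice", "Acme Corp"]];
  return [];
}

function chatIE(sentence, relationTypes) {
  // Stage 1: narrow the schema to the relation types present in the sentence.
  const present = askLLM(
    `Which relation types from [${relationTypes.join(", ")}] appear in: "${sentence}"?`
  );
  // Stage 2: one follow-up turn per detected type to extract argument pairs.
  const triples = [];
  for (const type of present) {
    for (const [head, tail] of askLLM(`List (head, tail) pairs for ${type} in: "${sentence}"`)) {
      triples.push({ head, type, tail });
    }
  }
  return triples;
}

console.log(chatIE("Alice joined Acme Corp in 2020.", ["works_for", "born_in"]));
// [{ head: "Alice", type: "works_for", tail: "Acme Corp" }]
```

Decomposing extraction into turns lets each prompt stay small and schema-focused, which is the intuition behind the framework's multi-turn design.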
arXiv Detail & Related papers (2023-02-20T12:57:12Z) - Contrastive Code Representation Learning [95.86686147053958]
We show that the popular reconstruction-based BERT model is sensitive to source code edits, even when the edits preserve semantics.
We propose ContraCode: a contrastive pre-training task that learns code functionality, not form.
arXiv Detail & Related papers (2020-07-09T17:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.