JavaScript Dead Code Identification, Elimination, and Empirical
Assessment
- URL: http://arxiv.org/abs/2308.16729v1
- Date: Thu, 31 Aug 2023 13:48:39 GMT
- Title: JavaScript Dead Code Identification, Elimination, and Empirical
Assessment
- Authors: Ivano Malavolta, Kishan Nirghin, Gian Luca Scoccia, Simone Romano,
Salvatore Lombardi, Giuseppe Scanniello, Patricia Lago
- Abstract summary: We present Lacuna, an approach for automatically detecting and eliminating JavaScript dead code from web apps.
We conduct an experiment to empirically evaluate the run-time overhead of JavaScript dead code in terms of energy consumption, performance, network usage, and resource usage in the context of mobile web apps.
- Score: 13.566269406958966
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Web apps are built by using a combination of HTML, CSS, and JavaScript. While
building modern web apps, it is common practice to make use of third-party
libraries and frameworks so as to improve developers' productivity and code
quality. Alongside these benefits, the adoption of such libraries results in
the introduction of JavaScript dead code, i.e., code implementing unused
functionalities. The costs for downloading and parsing dead code can negatively
contribute to the loading time and resource usage of web apps. The goal of our
study is twofold. First, we present Lacuna, an approach for automatically
detecting and eliminating JavaScript dead code from web apps. The proposed
approach supports both static and dynamic analyses; it is extensible and can be
applied to any JavaScript code base, without imposing constraints on the coding
style or on the use of specific JavaScript constructs. Second, by leveraging
Lacuna we conduct an experiment to empirically evaluate the run-time overhead
of JavaScript dead code in terms of energy consumption, performance, network
usage, and resource usage in the context of mobile web apps. We applied Lacuna
four times on 30 mobile web apps independently developed by third-party
developers, each time eliminating dead code according to a different
optimization level provided by Lacuna. Afterward, we executed each version of
the web app on an Android device while collecting measures to assess
the potential run-time overhead caused by dead code. Experimental results,
among others, highlight that the removal of JavaScript dead code has a positive
impact on the loading time of mobile web apps, while significantly reducing the
number of bytes transferred over the network.
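Lacuna's actual pipeline is defined in the paper; purely as a minimal sketch of the core idea behind dead code detection, the snippet below (all function names are invented) marks every function reachable from the app's entry points in a call graph and reports the rest as dead code candidates:

```js
// Toy call-graph reachability: functions never reached from an entry
// point are dead code candidates. Edges would normally come from static
// and/or dynamic analysis of the app's sources.
const callGraph = {
  main: ['render', 'fetchData'],
  render: ['formatDate'],
  fetchData: [],
  formatDate: [],
  legacyCarousel: ['formatDate'], // shipped by a library, never invoked
  debugLogger: [],
};

function findDeadFunctions(graph, entryPoints) {
  const alive = new Set();
  const stack = [...entryPoints];
  while (stack.length > 0) {
    const fn = stack.pop();
    if (alive.has(fn)) continue;
    alive.add(fn);
    stack.push(...(graph[fn] ?? []));
  }
  return Object.keys(graph).filter((fn) => !alive.has(fn));
}

console.log(findDeadFunctions(callGraph, ['main']));
// -> [ 'legacyCarousel', 'debugLogger' ]
```

Removing the reported functions with varying degrees of aggressiveness mirrors, in spirit, the different optimization levels the paper evaluates.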
Related papers
- Fakeium: A Dynamic Execution Environment for JavaScript Program Analysis [3.7980955101286322]
Fakeium is a novel, open source, and lightweight execution environment designed for efficient, large-scale dynamic analysis of JavaScript programs.
Fakeium complements traditional static analysis by surfacing additional API calls and string literals that would otherwise go undetected.
Fakeium's flexibility and its ability to detect hidden API calls, especially in obfuscated sources, highlight its potential as a valuable tool for security analysts to detect malicious behavior.
arXiv Detail & Related papers (2024-10-28T09:27:26Z)
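Fakeium's actual design is described in the paper; the snippet below is only a rough sketch of how a fake execution environment can observe API usage from a script in Node.js, using a Proxy that records every property lookup so that even dynamically constructed call sites surface (the sample code and names are invented):

```js
const vm = require('node:vm');

// Record every property looked up on the fake API object so that usage
// surfaces even when call sites are built dynamically or obfuscated.
const observedCalls = [];
const fakeApi = new Proxy({}, {
  get(_target, prop) {
    observedCalls.push(String(prop));
    return () => fakeApi; // every API "exists" and returns the fake object
  },
});

const untrustedCode = `
  const name = 'fet' + 'ch';        // dynamically constructed API name
  api[name]('https://example.com/exfil');
`;
vm.runInNewContext(untrustedCode, { api: fakeApi });
console.log(observedCalls); // -> [ 'fetch' ]
```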
- jscefr: A Framework to Evaluate the Code Proficiency for JavaScript [1.7174932174564534]
jscefr (pronounced jes-cee-fer) is a tool that detects the use of different elements of the JavaScript (JS) language.
jscefr categorizes JS code into six levels based on proficiency.
arXiv Detail & Related papers (2024-08-29T11:37:49Z)
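jscefr's real element catalogue and six-level (CEFR-style) mapping are defined by the tool itself; the toy scanner below only illustrates the general shape of such an analysis, and its patterns and level assignments are invented for the example:

```js
// Hypothetical mapping from JS language elements to proficiency levels;
// the real catalogue and six-level scheme are defined by jscefr itself.
const elementLevels = [
  { name: 'arrow function', level: 'A2', pattern: /=>/ },
  { name: 'class',          level: 'B1', pattern: /\bclass\s+\w+/ },
  { name: 'async/await',    level: 'B2', pattern: /\basync\b|\bawait\b/ },
  { name: 'generator',      level: 'C1', pattern: /\bfunction\s*\*/ },
];

function detectElements(source) {
  return elementLevels
    .filter(({ pattern }) => pattern.test(source))
    .map(({ name, level }) => ({ name, level }));
}

const sample = 'class Store { async load() { return await fetch("/data"); } }';
console.log(detectElements(sample));
// -> [ { name: 'class', level: 'B1' }, { name: 'async/await', level: 'B2' } ]
```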
- Chain-of-Experts (CoE): Reverse Engineering Software Bills of Materials for JavaScript Application Bundles through Code Clone Search [5.474149892700497]
A Software Bill of Materials (SBoM) is a detailed inventory of all components, libraries, and modules in a software artifact.
JavaScript application bundles are a consolidated, symbol-stripped, and optimized assembly of code for deployment purposes.
Generating an SBoM from a JavaScript application bundle through a reverse-engineering process ensures the integrity, security, and compliance of the supplier's software release.
arXiv Detail & Related papers (2024-08-29T01:32:49Z)
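The CoE approach itself involves trained models; the snippet below sketches only the code-clone-search building block under a simple assumption: normalize identifiers so that bundler renaming cannot hide clones, then compare token shingles with Jaccard similarity (both snippets are invented):

```js
// Normalize code so minified/renamed identifiers don't hide clones,
// then compare 3-gram token shingles with Jaccard similarity.
function shingles(code, n = 3) {
  const tokens = code
    .replace(/\b[a-zA-Z_$][\w$]*\b/g, 'ID') // collapse identifiers
    .match(/ID|\d+|[^\s\w]/g) ?? [];
  const grams = new Set();
  for (let i = 0; i + n <= tokens.length; i++) {
    grams.add(tokens.slice(i, i + n).join(' '));
  }
  return grams;
}

function jaccard(a, b) {
  let hits = 0;
  for (const g of a) if (b.has(g)) hits++;
  return hits / (a.size + b.size - hits);
}

const libraryFn = 'function clamp(x, lo, hi) { return Math.min(Math.max(x, lo), hi); }';
const bundledFn = 'function a(b,c,d){return Math.min(Math.max(b,c),d)}';
console.log(jaccard(shingles(libraryFn), shingles(bundledFn)).toFixed(2));
// -> prints a high score (~0.8) despite renaming and minification
```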
- Native vs Web Apps: Comparing the Energy Consumption and Performance of Android Apps and their Web Counterparts [5.18539596100998]
We select 10 Internet content platforms across 5 categories.
We measure them based on the energy consumption, network traffic volume, CPU load, memory load, and frame time of their native and Web versions.
arXiv Detail & Related papers (2023-08-31T13:51:56Z)
- InterCode: Standardizing and Benchmarking Interactive Coding with Execution Feedback [50.725076393314964]
We introduce InterCode, a lightweight, flexible, and easy-to-use framework of interactive coding as a standard reinforcement learning environment.
Our framework is language- and platform-agnostic and uses self-contained Docker environments to provide safe and reproducible execution.
We demonstrate InterCode's viability as a testbed by evaluating multiple state-of-the-art LLMs configured with different prompting strategies.
arXiv Detail & Related papers (2023-06-26T17:59:50Z)
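InterCode's real environments run inside Docker containers with per-language drivers; the toy class below (all names invented) only mirrors the reinforcement-learning interaction loop of instruction, code action, and execution-feedback reward, using an in-process Node.js sandbox instead:

```js
const vm = require('node:vm');

// Minimal interactive-coding environment in the spirit of an RL interface:
// the agent submits code as an action and receives execution feedback.
class ToyCodeEnv {
  constructor(task) { this.task = task; }
  reset() { return { instruction: this.task.instruction }; }
  step(codeAction) {
    try {
      const result = vm.runInNewContext(codeAction, {});
      const reward = result === this.task.expected ? 1 : 0;
      return { observation: String(result), reward, done: reward === 1 };
    } catch (err) {
      return { observation: err.message, reward: 0, done: false };
    }
  }
}

const env = new ToyCodeEnv({ instruction: 'compute 2 + 2', expected: 4 });
console.log(env.reset());       // { instruction: 'compute 2 + 2' }
console.log(env.step('2 + 2')); // { observation: '4', reward: 1, done: true }
```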
- Neural Embeddings for Web Testing [49.66745368789056]
Existing crawlers rely on app-specific, threshold-based algorithms to assess state equivalence.
We propose WEBEMBED, a novel abstraction function based on neural network embeddings and threshold-free classifiers.
Our evaluation on nine web apps shows that WEBEMBED outperforms state-of-the-art techniques by detecting near-duplicates more accurately.
arXiv Detail & Related papers (2023-06-12T19:59:36Z)
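WEBEMBED learns page-state embeddings with neural networks and feeds them to trained, threshold-free classifiers; the snippet below shows only the underlying geometric intuition of comparing state embeddings, with made-up vectors:

```js
// Cosine similarity between two page-state embedding vectors; WEBEMBED
// itself hands such comparisons to a trained (threshold-free) classifier.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

const stateA = [0.12, 0.98, 0.33]; // embedding of one crawled page state
const stateB = [0.10, 0.95, 0.40]; // embedding of a near-duplicate state
console.log(cosine(stateA, stateB).toFixed(3)); // close to 1.0
```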
- CONCORD: Clone-aware Contrastive Learning for Source Code [64.51161487524436]
Self-supervised pre-training has gained traction for learning generic code representations valuable for many downstream SE tasks.
We argue that it is also essential to factor in how developers code day-to-day for general-purpose representation learning.
In particular, we propose CONCORD, a self-supervised, contrastive learning strategy to place benign clones closer in the representation space while moving deviants further apart.
arXiv Detail & Related papers (2023-06-05T20:39:08Z)
- A Static Evaluation of Code Completion by Large Language Models [65.18008807383816]
Execution-based benchmarks have been proposed to evaluate functional correctness of model-generated code on simple programming problems.
Static analysis tools such as linters, which can detect errors without running the program, have not been well explored for evaluating code generation models.
We propose a static evaluation framework to quantify static errors in Python code completions, by leveraging Abstract Syntax Trees.
arXiv Detail & Related papers (2023-06-05T19:23:34Z)
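The paper's framework analyzes Python completions via their ASTs; as a much simpler JavaScript analogue of detecting errors without running the program, one can compile a completion with the Function constructor, which parses the code but never executes it:

```js
// Compiling with the Function constructor parses the code but never runs
// it, so syntax errors in a model-generated completion surface statically.
function staticSyntaxCheck(completion) {
  try {
    new Function(completion); // parse only; the body is not executed
    return { ok: true };
  } catch (err) {
    return { ok: false, error: err.message };
  }
}

console.log(staticSyntaxCheck('return items.map(x => x * 2);'));
// -> { ok: true }
console.log(staticSyntaxCheck('return items.map(x => x * 2;')); // missing ')'
// -> { ok: false, error: '...' } (exact message varies by engine)
```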
- ReACC: A Retrieval-Augmented Code Completion Framework [53.49707123661763]
We propose a retrieval-augmented code completion framework, leveraging both lexical copying and referring to code with similar semantics by retrieval.
We evaluate our approach on the code completion task in the Python and Java programming languages, achieving state-of-the-art performance on the CodeXGLUE benchmark.
arXiv Detail & Related papers (2022-03-15T08:25:08Z)
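ReACC pairs a trained retriever with a generation model; the snippet below sketches only the lexical side of retrieval, ranking a small snippet database by token overlap with the unfinished code so the best hit can be handed to a completion model as extra context (the database and query are invented):

```js
// Lexical retrieval: rank database snippets by token overlap (Jaccard)
// with the unfinished code, then prepend the best hit as extra context.
const tokenize = (code) => new Set(code.match(/\w+/g) ?? []);

function retrieveMostSimilar(query, database) {
  const q = tokenize(query);
  let best = null, bestScore = -1;
  for (const snippet of database) {
    const s = tokenize(snippet);
    let hits = 0;
    for (const t of q) if (s.has(t)) hits++;
    const score = hits / (q.size + s.size - hits);
    if (score > bestScore) { bestScore = score; best = snippet; }
  }
  return best;
}

const database = [
  'function readJson(path) { return JSON.parse(fs.readFileSync(path)); }',
  'function sum(xs) { return xs.reduce((a, b) => a + b, 0); }',
];
const unfinished = 'function loadConfig(path) { return JSON.';
console.log(retrieveMostSimilar(unfinished, database));
// -> the readJson snippet; a completion model would see it as added context
```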
- Contrastive Code Representation Learning [95.86686147053958]
We show that the popular reconstruction-based BERT model is sensitive to source code edits, even when the edits preserve semantics.
We propose ContraCode: a contrastive pre-training task that learns code functionality, not form.
arXiv Detail & Related papers (2020-07-09T17:59:06Z)
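ContraCode builds its training pairs from semantics-preserving transformations of source code; one simple transformation of that kind is consistent identifier renaming, sketched below (a real implementation would work on an AST to avoid touching property names and string contents):

```js
// A semantics-preserving code transformation: consistently rename local
// identifiers. Contrastive pre-training treats such variants as positives.
function renameIdentifiers(source, mapping) {
  return source.replace(/\b[a-zA-Z_$][\w$]*\b/g, (name) => mapping[name] ?? name);
}

const original = 'function total(items) { return items.reduce((s, x) => s + x.price, 0); }';
const variant = renameIdentifiers(original, { total: 'fn1', items: 'arr', s: 'acc', x: 'el' });
console.log(variant);
// -> function fn1(arr) { return arr.reduce((acc, el) => acc + el.price, 0); }
```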
This list is automatically generated from the titles and abstracts of the papers on this site.