Chain-of-Experts (CoE): Reverse Engineering Software Bills of Materials for JavaScript Application Bundles through Code Clone Search
- URL: http://arxiv.org/abs/2408.16198v1
- Date: Thu, 29 Aug 2024 01:32:49 GMT
- Title: Chain-of-Experts (CoE): Reverse Engineering Software Bills of Materials for JavaScript Application Bundles through Code Clone Search
- Authors: Leo Song, Steven H. H. Ding, Yuan Tian, Li Tao Li, Philippe Charland, Andrew Walenstein
- Abstract summary: A Software Bill of Materials (SBoM) is a detailed inventory of all components, libraries, and modules in a software artifact.
JavaScript application bundles are consolidated, symbol-stripped, and optimized assemblies of code for deployment purposes.
Generating an SBoM from a JavaScript application bundle through a reverse-engineering process ensures the integrity, security, and compliance of the supplier's software release.
- Score: 5.474149892700497
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A Software Bill of Materials (SBoM) is a detailed inventory of all components, libraries, and modules in a software artifact, providing traceability throughout the software supply chain. With the increasing popularity of JavaScript in software engineering due to its dynamic syntax and seamless supply chain integration, the exposure to vulnerabilities and attacks has risen significantly. A JavaScript application bundle is a consolidated, symbol-stripped, and optimized assembly of code for deployment purposes. Generating an SBoM from a JavaScript application bundle through a reverse-engineering process ensures the integrity, security, and compliance of the supplier's software release, even without access to the original dependency graphs. This paper presents the first study on SBoM generation for JavaScript application bundles. We identify three key challenges for this task, i.e., nested code scopes, extremely long sequences, and large retrieval spaces. To address these challenges, we introduce Chain-of-Experts (CoE), a multi-task deep learning model designed to generate SBoMs through three tasks: code segmentation, code classification, and code clone retrieval. We evaluate CoE against individual task-specific solutions on 500 web application bundles with over 66,000 dependencies. Our experimental results demonstrate that CoE offers competitive outcomes with less training and inference time when compared with combined individual task-specific solutions. Consequently, CoE provides the first scalable, efficient, and end-to-end solution for the SBoM generation of real-world JavaScript application bundles.
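
To make the three-stage data flow concrete, here is a minimal sketch of the segmentation → classification → clone-retrieval pipeline the abstract describes. Everything in it is a hypothetical stand-in: the function names, the webpack-style `/***/` boundary marker, the length heuristic, and the substring-based "clone search" are toy placeholders for the paper's trained models, not the authors' actual implementation.

```python
# Hedged sketch of a CoE-style SBoM pipeline. All names and heuristics
# below are hypothetical illustrations of the data flow only; the real
# system replaces each stub with a learned deep learning component.

from dataclasses import dataclass


@dataclass
class Segment:
    code: str  # one self-contained module extracted from the bundle


def segment_bundle(bundle: str) -> list[Segment]:
    """Task 1 (stub): split the symbol-stripped bundle into module-level
    segments. A naive stand-in cuts on a webpack-style module separator;
    the paper instead trains a model to predict these boundaries."""
    return [Segment(code=part) for part in bundle.split("/***/") if part.strip()]


def is_library_code(segment: Segment) -> bool:
    """Task 2 (stub): classify a segment as third-party library code vs.
    first-party application code. Here: a trivial length heuristic."""
    return len(segment.code) > 40


def retrieve_clone(segment: Segment, index: dict[str, str]) -> str | None:
    """Task 3 (stub): code clone search against an index of known package
    releases. Here: exact substring match instead of learned embeddings."""
    for package, fingerprint in index.items():
        if fingerprint in segment.code:
            return package
    return None


def generate_sbom(bundle: str, index: dict[str, str]) -> list[str]:
    """Chain the three tasks end to end to recover a dependency inventory."""
    sbom = set()
    for segment in segment_bundle(bundle):
        if is_library_code(segment):
            match = retrieve_clone(segment, index)
            if match is not None:
                sbom.add(match)
    return sorted(sbom)


if __name__ == "__main__":
    index = {"left-pad@1.3.0": "function leftPad"}
    bundle = "/***/ function leftPad(s,n){return s.padStart(n)} /***/ console.log(1)"
    print(generate_sbom(bundle, index))  # -> ['left-pad@1.3.0']
```

The chained structure is the point: each stage narrows the problem for the next, which is how the paper addresses nested scopes (segmentation), extremely long sequences (per-segment processing), and large retrieval spaces (classification filters what reaches clone search).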
Related papers
- SnipGen: A Mining Repository Framework for Evaluating LLMs for Code [51.07471575337676]
Large Language Models (LLMs) are trained on extensive datasets that include code repositories.
Evaluating their effectiveness poses significant challenges due to the potential overlap between the datasets used for training and those employed for evaluation.
We introduce SnipGen, a comprehensive repository mining framework designed to leverage prompt engineering across various downstream tasks for code generation.
arXiv Detail & Related papers (2025-02-10T21:28:15Z)
- SWE-Fixer: Training Open-Source LLMs for Effective and Efficient GitHub Issue Resolution [56.9361004704428]
Large Language Models (LLMs) have demonstrated remarkable proficiency across a variety of complex tasks.
SWE-Fixer is a novel open-source framework designed to effectively and efficiently resolve GitHub issues.
We assess our approach on the SWE-Bench Lite and Verified benchmarks, achieving state-of-the-art performance among open-source models.
arXiv Detail & Related papers (2025-01-09T07:54:24Z)
- VersiCode: Towards Version-controllable Code Generation [58.82709231906735]
Large Language Models (LLMs) have made tremendous strides in code generation, but existing research fails to account for the dynamic nature of software development.
We propose two novel tasks aimed at bridging this gap: version-specific code completion (VSCC) and version-aware code migration (VACM).
We conduct an extensive evaluation on VersiCode, which reveals that version-controllable code generation is indeed a significant challenge.
arXiv Detail & Related papers (2024-06-11T16:15:06Z)
- StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback [58.20547418182074]
We introduce StepCoder, a novel framework for code generation, consisting of two main components.
CCCS addresses the exploration challenge by breaking the long-sequence code generation task into a Curriculum of Code Completion Subtasks.
FGO optimizes the model only by masking the unexecuted code segments, providing Fine-Grained Optimization.
Our method improves the ability to explore the output space and outperforms state-of-the-art approaches on the corresponding benchmarks.
arXiv Detail & Related papers (2024-02-02T13:14:31Z)
- CodeChain: Towards Modular Code Generation Through Chain of Self-revisions with Representative Sub-modules [51.82044734879657]
We propose CodeChain, a novel framework for inference that elicits modularized code generation through a chain of self-revisions.
We find that CodeChain can significantly boost both the modularity and the correctness of the generated solutions, achieving relative pass@1 improvements of 35% on APPS and 76% on CodeContests.
arXiv Detail & Related papers (2023-10-13T10:17:48Z)
- CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets [75.64181719386497]
We present CRAFT, a tool creation and retrieval framework for large language models (LLMs).
It creates toolsets specifically curated for the tasks and equips LLMs with a component that retrieves tools from these sets to enhance their capability to solve complex tasks.
Our method is designed to be flexible and offers a plug-and-play approach to adapt off-the-shelf LLMs to unseen domains and modalities, without any finetuning.
arXiv Detail & Related papers (2023-09-29T17:40:26Z)
- AskIt: Unified Programming Interface for Programming with Large Language Models [0.0]
Large Language Models (LLMs) exhibit a unique phenomenon known as emergent abilities, demonstrating adeptness across numerous tasks.
This paper introduces AskIt, a domain-specific language specifically designed for LLMs.
Across 50 tasks, AskIt generated concise prompts, achieving a 16.14% reduction in prompt length compared to benchmarks.
arXiv Detail & Related papers (2023-08-29T21:44:27Z)