Maven-Hijack: Software Supply Chain Attack Exploiting Packaging Order
- URL: http://arxiv.org/abs/2407.18760v4
- Date: Wed, 29 Oct 2025 22:49:20 GMT
- Title: Maven-Hijack: Software Supply Chain Attack Exploiting Packaging Order
- Authors: Frank Reyes, Federico Bono, Aman Sharma, Benoit Baudry, Martin Monperrus
- Abstract summary: We present Maven-Hijack, a novel attack that exploits the order in which Maven packages dependencies. By injecting a malicious class with the same fully qualified name as a legitimate one into a dependency that is packaged earlier, an attacker can silently override core application behavior. We evaluate three mitigation strategies: sealed JARs, Java Modules, and the Maven Enforcer plugin.
- Score: 9.51794475707891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Java projects frequently rely on package managers such as Maven to manage complex webs of external dependencies. While these tools streamline development, they also introduce subtle risks to the software supply chain. In this paper, we present Maven-Hijack, a novel attack that exploits the order in which Maven packages dependencies and the way the Java Virtual Machine resolves classes at runtime. By injecting a malicious class with the same fully qualified name as a legitimate one into a dependency that is packaged earlier, an attacker can silently override core application behavior without modifying the main codebase or library names. We demonstrate the real-world feasibility of this attack by compromising the Corona-Warn-App, a widely used open-source COVID-19 contact tracing system, and gaining control over its database connection logic. We evaluate three mitigation strategies: sealed JARs, Java Modules, and the Maven Enforcer plugin. Our results show that, while Java Modules offer strong protection, the Maven Enforcer plugin with duplicate class detection provides the most practical and effective defense for current Java projects. These findings highlight the urgent need for improved safeguards in Java's build and dependency management processes to prevent stealthy supply chain attacks.
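The class-shadowing mechanism the attack relies on can be observed at runtime: the JVM's application class loader resolves a fully qualified name to the first matching classpath entry and silently ignores any later copy. The sketch below is an illustration, not the paper's tooling; it enumerates every classpath location that provides a given class, so a count above one signals possible shadowing:

```java
import java.io.IOException;
import java.net.URL;
import java.util.Collections;
import java.util.List;

public class DuplicateClassCheck {

    // Lists every classpath entry that provides the named class file.
    // The JVM's application class loader resolves a class to the FIRST
    // match in classpath order, so more than one hit means every later
    // copy is silently shadowed -- the behavior Maven-Hijack exploits.
    public static List<URL> locationsOf(String fqn) throws IOException {
        String resource = fqn.replace('.', '/') + ".class";
        ClassLoader cl = DuplicateClassCheck.class.getClassLoader();
        return Collections.list(cl.getResources(resource));
    }

    public static void main(String[] args) throws IOException {
        // A hijacked dependency would make its victim class appear twice;
        // this class itself should normally appear exactly once.
        List<URL> hits = locationsOf("DuplicateClassCheck");
        System.out.println("providers of DuplicateClassCheck: " + hits.size());
        for (URL url : hits) {
            System.out.println("  " + url);
        }
    }
}
```

In a Maven build, an equivalent declarative check is the Enforcer plugin's duplicate-class rule (e.g. banDuplicateClasses from the extra-enforcer-rules extension), the kind of duplicate class detection the paper finds to be the most practical defense.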
Related papers
- Classport: Designing Runtime Dependency Introspection for Java [8.337857900646346]
Dependency introspection, i.e., the ability to observe which dependencies are currently used during program execution, is fundamental for software supply chain security. We solve this problem with Classport, a system that embeds dependency information into Java class files, enabling the retrieval of dependency information at runtime. We evaluate Classport on six real-world projects, demonstrating the feasibility of identifying dependencies at runtime.
arXiv Detail & Related papers (2025-10-23T08:39:30Z) - Maven-Lockfile: High Integrity Rebuild of Past Java Releases [8.004632448033531]
Maven is one of the most important package managers in the Java ecosystem. We present Maven-Lockfile to generate and update lockfiles with support for rebuilding projects from past versions. Our evaluation shows that Maven-Lockfile can reproduce builds from historical commits and is able to detect tampered artifacts.
arXiv Detail & Related papers (2025-10-01T10:14:32Z) - Cuckoo Attack: Stealthy and Persistent Attacks Against AI-IDE [64.47951172662745]
Cuckoo Attack is a novel attack that achieves stealthy and persistent command execution by embedding malicious payloads into configuration files. We formalize our attack paradigm into two stages: initial infection and persistence. We contribute seven actionable checkpoints for vendors to evaluate their product security.
arXiv Detail & Related papers (2025-09-19T04:10:52Z) - Unlocking Reproducibility: Automating re-Build Process for Open-Source Software [0.06124773188525717]
Software ecosystems like Maven Central play a crucial role in modern software supply chains. Approximately 84% of the top 1200 commonly used artifacts are not built using a transparent CI/CD pipeline. We introduce an extension to Macaron, an industry-grade open-source supply chain security framework, to automate the rebuilding of Maven artifacts from source.
arXiv Detail & Related papers (2025-09-10T00:23:08Z) - Defending Against Prompt Injection With a Few DefensiveTokens [53.7493897456957]
Large language model (LLM) systems interact with external data to perform complex tasks. By injecting instructions into the data accessed by the system, an attacker can override the initial user task with an arbitrary task directed by the attacker. Test-time defenses, e.g., defensive prompting, have been proposed for system developers to attain security only when needed in a flexible manner. We propose DefensiveToken, a test-time defense with prompt injection robustness comparable to training-time alternatives.
arXiv Detail & Related papers (2025-07-10T17:51:05Z) - JavaSith: A Client-Side Framework for Analyzing Potentially Malicious Extensions in Browsers, VS Code, and NPM Packages [0.0]
JavaSith is a novel framework for analyzing potentially malicious extensions in web browsers, Visual Studio Code (VSCode), and Node's NPM packages. We present the design and architecture of JavaSith, including techniques for intercepting extension behavior over simulated time. We demonstrate how JavaSith can catch stealthy malicious behaviors that evade traditional detection.
arXiv Detail & Related papers (2025-05-27T14:40:25Z) - Canonicalization for Unreproducible Builds in Java [11.367562045401554]
We introduce a conceptual framework for reproducible builds, analyze a large dataset from Reproducible Central, and develop a novel taxonomy of six root causes of unreproducibility.
We present Chains-Rebuild, a tool that raises the rebuild success rate from 9.48% to 26.89% on 12,283 unreproducible artifacts.
arXiv Detail & Related papers (2025-04-30T14:17:54Z) - Sleeping Giants -- Activating Dormant Java Deserialization Gadget Chains through Stealthy Code Changes [42.95491588006701]
Java deserialization gadget chains are a well-researched critical software weakness.
Small code changes in dependencies have enabled these gadget chains.
This work shows that Java deserialization gadget chains are a broad liability to software and proves dormant gadget chains as a lucrative supply chain attack vector.
arXiv Detail & Related papers (2025-04-29T07:24:34Z) - DoomArena: A framework for Testing AI Agents Against Evolving Security Threats [81.73540246946015]
We present DoomArena, a security evaluation framework for AI agents. It is a plug-in framework and integrates easily into realistic agentic frameworks. It is modular and decouples the development of attacks from details of the environment in which the agent is deployed.
arXiv Detail & Related papers (2025-04-18T20:36:10Z) - Deserialization Gadget Chains are not a Pathological Problem in Android: An In-Depth Study of Java Gadget Chains in AOSP [40.53819791643813]
Java's Serializable API has a long history of deserialization vulnerabilities, specifically deserialization gadget chains.
We design a gadget chain detection tool optimized for soundness and efficiency.
Running our tool on the Android SDK and 1,200 Android dependencies, in combination with a comprehensive sink dataset, yields no security-critical gadget chains.
arXiv Detail & Related papers (2025-02-12T14:39:30Z) - Layer-Level Self-Exposure and Patch: Affirmative Token Mitigation for Jailbreak Attack Defense [55.77152277982117]
We introduce Layer-AdvPatcher, a methodology designed to defend against jailbreak attacks.
We use an unlearning strategy to patch specific layers within large language models through self-augmented datasets.
Our framework reduces the harmfulness and attack success rate of jailbreak attacks.
arXiv Detail & Related papers (2025-01-05T19:06:03Z) - BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger [67.75420257197186]
In this work, we propose BaThe, a simple yet effective jailbreak defense mechanism.
Jailbreak backdoor attack uses harmful instructions combined with manually crafted strings as triggers to make the backdoored model generate prohibited responses.
We assume that harmful instructions can function as triggers, and if we instead set rejection responses as the triggered response, the backdoored model can then defend against jailbreak attacks.
arXiv Detail & Related papers (2024-08-17T04:43:26Z) - Prefix Guidance: A Steering Wheel for Large Language Models to Defend Against Jailbreak Attacks [27.11523234556414]
We propose a plug-and-play and easy-to-deploy jailbreak defense framework, namely Prefix Guidance (PG).
PG guides the model to identify harmful prompts by directly setting the first few tokens of the model's output.
We demonstrate the effectiveness of PG across three models and five attack methods.
arXiv Detail & Related papers (2024-08-15T14:51:32Z) - SBOM.EXE: Countering Dynamic Code Injection based on Software Bill of Materials in Java [10.405775369526006]
Software supply chain attacks have become a significant threat.
Traditional safeguards can mitigate supply chain attacks at build time, but they have limitations in mitigating runtime threats.
This paper introduces SBOM.EXE, a proactive system designed to safeguard Java applications against such threats.
arXiv Detail & Related papers (2024-06-28T22:08:17Z) - WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models [66.34505141027624]
We introduce WildTeaming, an automatic LLM safety red-teaming framework that mines in-the-wild user-chatbot interactions to discover 5.7K unique clusters of novel jailbreak tactics.
WildTeaming reveals previously unidentified vulnerabilities of frontier LLMs, resulting in up to 4.6x more diverse and successful adversarial attacks.
arXiv Detail & Related papers (2024-06-26T17:31:22Z) - AutoJailbreak: Exploring Jailbreak Attacks and Defenses through a Dependency Lens [83.08119913279488]
We present a systematic analysis of the dependency relationships in jailbreak attack and defense techniques.
We propose three comprehensive, automated, and logical frameworks.
We show that the proposed ensemble jailbreak attack and defense framework significantly outperforms existing research.
arXiv Detail & Related papers (2024-06-06T07:24:41Z) - AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting [54.931241667414184]
We propose Adaptive Shield Prompting (AdaShield), which prepends inputs with defense prompts to defend MLLMs against structure-based jailbreak attacks.
Our methods can consistently improve MLLMs' robustness against structure-based jailbreak attacks.
arXiv Detail & Related papers (2024-03-14T15:57:13Z) - Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z) - TextGuard: Provable Defense against Backdoor Attacks on Text Classification [83.94014844485291]
We propose TextGuard, the first provable defense against backdoor attacks on text classification.
In particular, TextGuard divides the (backdoored) training data into sub-training sets, achieved by splitting each training sentence into sub-sentences.
In our evaluation, we demonstrate the effectiveness of TextGuard on three benchmark text classification tasks.
arXiv Detail & Related papers (2023-11-19T04:42:16Z) - Streamlining Attack Tree Generation: A Fragment-Based Approach [39.157069600312774]
We present a novel fragment-based attack graph generation approach that utilizes information from publicly available information security databases.
We also propose a domain-specific language for attack modeling, which we employ in the proposed attack graph generation approach.
arXiv Detail & Related papers (2023-10-01T12:41:38Z) - Analyzing Maintenance Activities of Software Libraries [55.2480439325792]
Industrial applications today heavily integrate open-source software libraries. I introduce an automatic monitoring approach for industrial applications to identify open-source dependencies that show negative signs regarding their current or future maintenance activities.
arXiv Detail & Related papers (2023-06-09T16:51:25Z) - Not what you've signed up for: Compromising Real-World LLM-Integrated Applications with Indirect Prompt Injection [64.67495502772866]
Large Language Models (LLMs) are increasingly being integrated into various applications.
We show how attackers can override original instructions and employed controls using Prompt Injection attacks.
We derive a comprehensive taxonomy from a computer security perspective to systematically investigate impacts and vulnerabilities.
arXiv Detail & Related papers (2023-02-23T17:14:38Z) - RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z) - Unleashing the Tiger: Inference Attacks on Split Learning [2.492607582091531]
We introduce general attack strategies targeting the reconstruction of clients' private training sets.
A malicious server can actively hijack the learning process of the distributed model.
We demonstrate our attack is able to overcome recently proposed defensive techniques.
arXiv Detail & Related papers (2020-12-04T15:41:00Z) - Poisoned classifiers are not only backdoored, they are fundamentally broken [84.67778403778442]
Under a commonly-studied backdoor poisoning attack against classification models, an attacker adds a small trigger to a subset of the training data.
It is often assumed that the poisoned classifier is vulnerable exclusively to the adversary who possesses the trigger.
In this paper, we show empirically that this view of backdoored classifiers is incorrect.
arXiv Detail & Related papers (2020-10-18T19:42:44Z) - Backdoor Attacks on Federated Meta-Learning [0.225596179391365]
We analyze the effects of backdoor attacks on federated meta-learning.
We propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features.
arXiv Detail & Related papers (2020-06-12T09:23:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.