Java-Class-Hijack: Software Supply Chain Attack for Java based on Maven Dependency Resolution and Java Classloading
- URL: http://arxiv.org/abs/2407.18760v2
- Date: Wed, 21 Aug 2024 15:41:42 GMT
- Title: Java-Class-Hijack: Software Supply Chain Attack for Java based on Maven Dependency Resolution and Java Classloading
- Authors: Federico Bono, Frank Reyes, Aman Sharma, Benoit Baudry, Martin Monperrus
- Abstract summary: Java-Class-Hijack enables an attacker to inject malicious code by crafting a class that shadows a legitimate class that is in the dependency tree.
We describe the attack, provide a proof-of-concept demonstrating its feasibility, and replicate it in the German Corona-Warn-App server application.
- Score: 9.699225997570384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Java-Class-Hijack, a novel software supply chain attack that enables an attacker to inject malicious code by crafting a class that shadows a legitimate class that is in the dependency tree. We describe the attack, provide a proof-of-concept demonstrating its feasibility, and replicate it in the German Corona-Warn-App server application. The proof-of-concept illustrates how a transitive dependency deep within the dependency tree can hijack a class from a direct dependency and entirely alter its behavior, posing a significant security risk to Java applications. The replication on the Corona-Warn-App demonstrates how compromising a small JSON validation library could result in a complete database takeover.
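The core mechanism is class shadowing: when two jars on the classpath contain a class file with the same fully qualified name, the JVM application class loader resolves the name to whichever copy appears first, and Maven derives that classpath order from the dependency tree. Below is a minimal sketch of a shadowing class, with hypothetical package and class names rather than those of the paper's proof-of-concept:

```java
// Hypothetical shadowing class shipped inside a malicious transitive
// dependency. Its fully qualified name collides with a legitimate class
// from a direct dependency (all names here are illustrative).
package com.example.json;

public class Validator {

    // Same public signature as the legitimate Validator, so existing
    // callers link against this copy without any compile-time error.
    public boolean validate(String json) {
        leak(json);    // attacker-controlled side effect
        return true;   // always reports "valid", silently disabling the check
    }

    private void leak(String json) {
        // Malicious payload would go here (network exfiltration,
        // database access, etc.). Left empty in this sketch.
    }
}
```

If the malicious jar precedes the legitimate one on the classpath, every call site that references com.example.json.Validator silently executes the attacker's copy.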
Related papers
- Canonicalization for Unreproducible Builds in Java [11.367562045401554]
We introduce a conceptual framework for reproducible builds, analyze a large dataset from Reproducible Central, and develop a novel taxonomy of six root causes of unreproducibility.
We present Chains-Rebuild, a tool that raises the successful reproduction rate from 9.48% to 26.89% on 12,283 unreproducible artifacts.
arXiv Detail & Related papers (2025-04-30T14:17:54Z)
- Sleeping Giants -- Activating Dormant Java Deserialization Gadget Chains through Stealthy Code Changes [42.95491588006701]
Java deserialization gadget chains are a well-researched critical software weakness.
Small code changes in dependencies have enabled these gadget chains.
This work shows that Java deserialization gadget chains are a broad liability to software and proves dormant gadget chains as a lucrative supply chain attack vector.
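For readers unfamiliar with the terminology, here is a rough self-contained sketch of a single deserialization "gadget"; the class and sink are hypothetical rather than taken from the paper, and real chains reach a sink through several such steps composed across library classes:

```java
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.Serializable;

// Hypothetical single-step "gadget": a Serializable class already on the
// classpath whose deserialization logic reaches a dangerous sink.
public class CacheEntry implements Serializable {
    private static final long serialVersionUID = 1L;

    private String command; // the attacker controls this field in the byte stream

    // Invoked automatically by ObjectInputStream.readObject(), i.e. before
    // the application ever inspects the deserialized object.
    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        // Real gadget chains reach a sink like this only through a series
        // of method calls spread across classes; it is inlined here.
        Runtime.getRuntime().exec(command);
    }
}
```

This is why small code changes matter: a routine dependency update that adds such a readObject method, or makes an existing one reachable, can silently activate a previously dormant chain.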
arXiv Detail & Related papers (2025-04-29T07:24:34Z)
- Deserialization Gadget Chains are not a Pathological Problem in Android: an In-Depth Study of Java Gadget Chains in AOSP [40.53819791643813]
Java's Serializable API has a long history of deserialization vulnerabilities, specifically deserialization gadget chains.
We design a gadget chain detection tool optimized for soundness and efficiency.
Running our tool on the Android SDK and 1,200 Android dependencies, in combination with a comprehensive sink dataset, yields no security-critical gadget chains.
arXiv Detail & Related papers (2025-02-12T14:39:30Z)
- Layer-Level Self-Exposure and Patch: Affirmative Token Mitigation for Jailbreak Attack Defense [55.77152277982117]
We introduce Layer-AdvPatcher, a methodology designed to defend against jailbreak attacks.
We use an unlearning strategy to patch specific layers within large language models through self-augmented datasets.
Our framework reduces the harmfulness and attack success rate of jailbreak attacks.
arXiv Detail & Related papers (2025-01-05T19:06:03Z)
- BaThe: Defense against the Jailbreak Attack in Multimodal Large Language Models by Treating Harmful Instruction as Backdoor Trigger [67.75420257197186]
In this work, we propose BaThe, a simple yet effective jailbreak defense mechanism.
A jailbreak backdoor attack uses harmful instructions combined with manually crafted strings as triggers to make the backdoored model generate prohibited responses.
We assume that harmful instructions can themselves function as triggers; by instead setting rejection responses as the triggered output, the backdoored model can defend against jailbreak attacks.
arXiv Detail & Related papers (2024-08-17T04:43:26Z)
- Prefix Guidance: A Steering Wheel for Large Language Models to Defend Against Jailbreak Attacks [27.11523234556414]
We propose a plug-and-play, easy-to-deploy jailbreak defense framework named Prefix Guidance (PG).
PG guides the model to identify harmful prompts by directly setting the first few tokens of the model's output.
We demonstrate the effectiveness of PG across three models and five attack methods.
arXiv Detail & Related papers (2024-08-15T14:51:32Z)
- SBOM.EXE: Countering Dynamic Code Injection based on Software Bill of Materials in Java [10.405775369526006]
Software supply chain attacks have become a significant threat.
Traditional safeguards can mitigate supply chain attacks at build time, but they have limitations in mitigating runtime threats.
This paper introduces SBOM.EXE, a proactive system designed to safeguard Java applications against such threats.
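The paper describes SBOM.EXE's actual design; purely as a hedged sketch of the general idea, checking classes at load time against an index derived from the Software Bill of Materials, a Java agent could intercept class loading as below. Package names and allowlist contents are hypothetical, and this is not the tool's real implementation:

```java
import java.lang.instrument.ClassFileTransformer;
import java.lang.instrument.Instrumentation;
import java.security.ProtectionDomain;
import java.util.Set;

public final class AllowlistAgent {

    // Stand-in for an index derived from the application's SBOM
    // (hypothetical entries; internal names use '/' separators).
    private static final Set<String> ALLOWED = Set.of(
            "com/example/app/Main",
            "com/example/json/Validator");

    // Entry point for a -javaagent jar (manifest: Premain-Class: AllowlistAgent).
    public static void premain(String agentArgs, Instrumentation inst) {
        inst.addTransformer(new ClassFileTransformer() {
            @Override
            public byte[] transform(ClassLoader loader, String className,
                                    Class<?> classBeingRedefined,
                                    ProtectionDomain protectionDomain,
                                    byte[] classfileBuffer) {
                if (className != null
                        && className.startsWith("com/example/")
                        && !ALLOWED.contains(className)) {
                    // An unexpected class is entering the JVM; a real tool
                    // might block it, this sketch only reports it.
                    System.err.println("Class not in SBOM index: " + className);
                }
                return null; // null leaves the bytecode unchanged
            }
        });
    }
}
```

A production system would verify bytecode checksums rather than names alone and would block or terminate on a mismatch; this sketch only reports it.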
arXiv Detail & Related papers (2024-06-28T22:08:17Z)
- WildTeaming at Scale: From In-the-Wild Jailbreaks to (Adversarially) Safer Language Models [66.34505141027624]
We introduce WildTeaming, an automatic LLM safety red-teaming framework that mines in-the-wild user-chatbot interactions to discover 5.7K unique clusters of novel jailbreak tactics.
WildTeaming reveals previously unidentified vulnerabilities of frontier LLMs, resulting in up to 4.6x more diverse and successful adversarial attacks.
arXiv Detail & Related papers (2024-06-26T17:31:22Z)
- AutoJailbreak: Exploring Jailbreak Attacks and Defenses through a Dependency Lens [83.08119913279488]
We present a systematic analysis of the dependency relationships in jailbreak attack and defense techniques.
We propose three comprehensive, automated, and logical frameworks.
We show that the proposed ensemble jailbreak attack and defense framework significantly outperforms existing research.
arXiv Detail & Related papers (2024-06-06T07:24:41Z)
- Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks [48.70867241987739]
InferGuard is a novel Byzantine-robust aggregation rule aimed at defending against client-side training data distribution inference attacks.
The results of our experiments indicate that our defense mechanism is highly effective in protecting against client-side training data distribution inference attacks.
arXiv Detail & Related papers (2024-03-05T17:41:35Z)
- TextGuard: Provable Defense against Backdoor Attacks on Text Classification [83.94014844485291]
We propose TextGuard, the first provable defense against backdoor attacks on text classification.
In particular, TextGuard divides the (backdoored) training data into sub-training sets, achieved by splitting each training sentence into sub-sentences.
In our evaluation, we demonstrate the effectiveness of TextGuard on three benchmark text classification tasks.
arXiv Detail & Related papers (2023-11-19T04:42:16Z)
- Streamlining Attack Tree Generation: A Fragment-Based Approach [39.157069600312774]
We present a novel fragment-based attack graph generation approach that utilizes information from publicly available information security databases.
We also propose a domain-specific language for attack modeling, which we employ in the proposed attack graph generation approach.
arXiv Detail & Related papers (2023-10-01T12:41:38Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against membership inference attacks (MIAs).
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- Unleashing the Tiger: Inference Attacks on Split Learning [2.492607582091531]
We introduce general attack strategies targeting the reconstruction of clients' private training sets.
A malicious server can actively hijack the learning process of the distributed model.
We demonstrate our attack is able to overcome recently proposed defensive techniques.
arXiv Detail & Related papers (2020-12-04T15:41:00Z)
- Poisoned classifiers are not only backdoored, they are fundamentally broken [84.67778403778442]
Under a commonly-studied backdoor poisoning attack against classification models, an attacker adds a small trigger to a subset of the training data.
It is often assumed that the poisoned classifier is vulnerable exclusively to the adversary who possesses the trigger.
In this paper, we show empirically that this view of backdoored classifiers is incorrect.
arXiv Detail & Related papers (2020-10-18T19:42:44Z)
- Backdoor Attacks on Federated Meta-Learning [0.225596179391365]
We analyze the effects of backdoor attacks on federated meta-learning.
We propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features.
arXiv Detail & Related papers (2020-06-12T09:23:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.