gh0stEdit: Exploiting Layer-Based Access Vulnerability Within Docker Container Images
- URL: http://arxiv.org/abs/2506.08218v1
- Date: Mon, 09 Jun 2025 20:38:17 GMT
- Title: gh0stEdit: Exploiting Layer-Based Access Vulnerability Within Docker Container Images
- Authors: Alan Mills, Jonathan White, Phil Legg
- Abstract summary: We present gh0stEdit, a vulnerability that undermines the integrity of Docker images. gh0stEdit allows an attacker to maliciously edit Docker images in a way that is not shown within the image history. We present two use case studies for this vulnerability, and showcase how gh0stEdit is able to poison an image in a way that is not picked up through static or dynamic scanning tools.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Containerisation is a popular deployment process for application-level virtualisation using a layer-based approach. Docker is a leading provider of containerisation, and through the Docker Hub, users can supply Docker images for sharing and re-purposing popular software application containers. Using a combination of in-built inspection commands, publicly displayed image layer content, and static image scanning, Docker images are designed to ensure end users can clearly assess the content of an image before running it. In this paper we present gh0stEdit, a vulnerability that fundamentally undermines the integrity of Docker images and subverts the assumed trust and transparency they utilise. The use of gh0stEdit allows an attacker to maliciously edit Docker images in a way that is not shown within the image history, hierarchy or commands. This attack can also be carried out against signed images (Docker Content Trust) without invalidating the image signature. We present two use case studies for this vulnerability, and showcase how gh0stEdit is able to poison an image in a way that is not picked up through static or dynamic scanning tools. Our attack case studies highlight the issues in the current approach to Docker image security and trust, and expose an attack method which could potentially be exploited in the wild without being detected. To the best of our knowledge we are the first to provide detailed discussion on the exploit of this vulnerability.
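The layer-integrity assumption that the paper attacks can be made concrete with a small sketch. The snippet below is a simplified illustration, not the gh0stEdit exploit or the authors' tooling; the manifest dictionary is a toy stand-in for the OCI manifest format. It recomputes the sha256 digest of each layer blob and compares it against the digest the manifest records, which is the kind of content-addressing check that image storage relies on:

```python
import hashlib

def layer_digest(layer_bytes: bytes) -> str:
    """Return the sha256 digest string used to identify a layer blob."""
    return "sha256:" + hashlib.sha256(layer_bytes).hexdigest()

def verify_layers(manifest: dict, blobs: dict) -> list:
    """Compare digests recorded in a (toy, OCI-style) manifest against
    recomputed digests of the actual layer blobs; return any mismatches."""
    mismatches = []
    for layer in manifest["layers"]:
        expected = layer["digest"]
        actual = layer_digest(blobs[expected])
        if actual != expected:
            mismatches.append((expected, actual))
    return mismatches

# Toy example: an intact layer versus a tampered one.
good = b"original layer contents"
bad = b"maliciously edited layer"
good_digest = layer_digest(good)
manifest = {"layers": [{"digest": good_digest}]}

assert verify_layers(manifest, {good_digest: good}) == []   # intact
assert verify_layers(manifest, {good_digest: bad}) != []    # tampered
```

Recomputing digests in this way is one plausible low-level integrity check; the paper's point is that the user-facing history, hierarchy, and trust tooling can still fail to surface a malicious edit.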
Related papers
- Toward Automated Test Generation for Dockerfiles Based on Analysis of Docker Image Layers [1.1879716317856948]
The process for building a Docker image is defined in a text file called a Dockerfile. A Dockerfile can be considered as a kind of source code that contains instructions on how to build a Docker image. We propose an automated test generation method for Dockerfiles based on processing results rather than processing steps.
arXiv Detail & Related papers (2025-04-25T08:02:46Z) - Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing [71.30876587855867]
We show that even clean query images can be dangerous, inducing malicious target retrieval results, like undesired or illegal images. Specifically, we first train a surrogate model to simulate the behavior of the target deep hashing model. Then, a strict gradient matching strategy is proposed to generate the poisoned images.
arXiv Detail & Related papers (2025-03-27T07:54:27Z) - SEAL: Semantic Aware Image Watermarking [26.606008778795193]
We propose a novel watermarking method that embeds semantic information about the generated image directly into the watermark. The key pattern can be inferred from the semantic embedding of the image using locality-sensitive hashing. Our results suggest that content-aware watermarks can mitigate risks arising from image-generative models.
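The summary mentions recovering a key pattern from a semantic embedding via locality-sensitive hashing. As a hedged sketch of the general LSH idea (random-hyperplane hashing, not necessarily the scheme SEAL uses), nearby embeddings map to mostly identical bit strings, while dissimilar ones do not:

```python
import random

def lsh_bits(embedding, hyperplanes):
    """Random-hyperplane LSH: the sign of the dot product with each
    hyperplane gives one key bit; embeddings separated by a small angle
    tend to agree on most bits."""
    bits = []
    for plane in hyperplanes:
        dot = sum(e * p for e, p in zip(embedding, plane))
        bits.append(1 if dot >= 0 else 0)
    return bits

random.seed(0)
dim, n_bits = 8, 16
planes = [[random.gauss(0, 1) for _ in range(dim)] for _ in range(n_bits)]

vec = [random.gauss(0, 1) for _ in range(dim)]
near = [v + 0.01 * random.gauss(0, 1) for v in vec]  # tiny perturbation
far = [-v for v in vec]                              # opposite direction

same = sum(a == b for a, b in zip(lsh_bits(vec, planes), lsh_bits(near, planes)))
diff = sum(a == b for a, b in zip(lsh_bits(vec, planes), lsh_bits(far, planes)))
```

Here `same` stays close to `n_bits` while `diff` collapses, which is the property that lets a key pattern survive small semantic-preserving changes to the image.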
arXiv Detail & Related papers (2025-03-15T15:29:05Z) - An Effective Docker Image Slimming Approach Based on Source Code Data Dependency Analysis [11.488840420390394]
This paper presents a novel image slimming model named delta-SCALPEL. It employs static data dependency analysis to extract the environment dependencies of the project code. It can reduce image sizes by up to 61.4% while ensuring the normal operation of these projects.
arXiv Detail & Related papers (2025-01-07T12:28:57Z) - Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z) - Perceptive self-supervised learning network for noisy image watermark removal [59.440951785128995]
We propose a perceptive self-supervised learning network for noisy image watermark removal (PSLNet).
Our proposed method is very effective in comparison with popular convolutional neural networks (CNNs) for noisy image watermark removal.
arXiv Detail & Related papers (2024-03-04T16:59:43Z) - IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic the style of an artist or maliciously edit the original images for fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z) - DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting the injected content into the protected dataset.
Specifically, we modify the protected images by adding unique contents on these images using stealthy image warping functions.
By analyzing whether the model has memorized the injected content, we can detect models that had illegally utilized the unauthorized data.
arXiv Detail & Related papers (2023-07-06T16:27:39Z) - DRIVE: Dockerfile Rule Mining and Violation Detection [6.510749313511299]
A Dockerfile defines a set of instructions to build Docker images, which can then be instantiated to support containerized applications.
Recent studies have revealed a considerable amount of quality issues with Dockerfiles.
We propose a novel approach to mine implicit rules and detect potential violations of such rules in Dockerfiles.
arXiv Detail & Related papers (2022-12-12T01:15:30Z) - Breaking certified defenses: Semantic adversarial examples with spoofed robustness certificates [57.52763961195292]
We present a new attack that exploits not only the labelling function of a classifier, but also the certificate generator.
The proposed method applies large perturbations that place images far from a class boundary while maintaining the imperceptibility property of adversarial examples.
arXiv Detail & Related papers (2020-03-19T17:59:44Z) - Privacy-Preserving Image Sharing via Sparsifying Layers on Convolutional Groups [11.955557264002204]
We propose a practical framework to address the problem of privacy-aware image sharing in large-scale setups.
We encode images such that, on the one hand, representations are stored in the public domain without paying the huge cost of privacy protection, while on the other hand, authorized users are provided with very compact keys that can easily be kept secure.
These keys can be used to disambiguate and faithfully reconstruct the corresponding access-granted images.
arXiv Detail & Related papers (2020-02-04T18:54:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.