Stop Stealing My Data: Sanitizing Stego Channels in 3D Printing Design Files
- URL: http://arxiv.org/abs/2404.05106v1
- Date: Sun, 7 Apr 2024 23:28:35 GMT
- Title: Stop Stealing My Data: Sanitizing Stego Channels in 3D Printing Design Files
- Authors: Aleksandr Dolgavin, Mark Yampolskiy, Moti Yung
- Abstract summary: Steganographic channels can allow additional data to be embedded within STL files without changing the printed model.
This paper addresses this security threat by designing and evaluating a \emph{sanitizer} that erases hidden content where steganographic channels might exist.
- Score: 56.96539046813698
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increased adoption of additive manufacturing (AM) and the acceptance of AM outsourcing created an ecosystem in which the sending and receiving of digital designs by different actors became normal. It has recently been shown that the STL design files -- most commonly used in AM -- contain steganographic channels. Such channels can allow additional data to be embedded within the STL files without changing the printed model. These factors create a threat of misusing the design files as a covert communication channel to either exfiltrate stolen sensitive digital data from organizations or infiltrate malicious software into a secure environment. This paper addresses this security threat by designing and evaluating a \emph{sanitizer} that erases hidden content where steganographic channels might exist. The proposed sanitizer takes into account a set of specific constraints imposed by the application domain, such as not affecting the ability to manufacture a part of the required quality using the sanitized design.
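The paper's own sanitizer design is not reproduced here; as an illustration of the kind of channels involved, the hypothetical, simplified sketch below blanks a binary STL file's free-form header, zeroes each facet's attribute bytes, and sorts facets into a canonical order. Each of these is a known hiding spot, and none of the changes alters the printed geometry.

```python
import struct

TRI_SIZE = 50  # 12 little-endian float32s (normal + 3 vertices) + 2 attribute bytes

def sanitize_binary_stl(data: bytes) -> bytes:
    """Erase common STL stego channels while leaving the geometry untouched."""
    # The 80-byte free-form header carries no geometry: blank it entirely.
    header = b"\x00" * 80
    (count,) = struct.unpack_from("<I", data, 80)
    facets = []
    off = 84
    for _ in range(count):
        rec = data[off:off + TRI_SIZE]
        # Zero the 2-byte attribute field, which printers typically ignore.
        facets.append(rec[:48] + b"\x00\x00")
        off += TRI_SIZE
    # Sorting facets into a canonical order destroys channels that encode
    # data in the facet permutation; the rendered model is unchanged.
    facets.sort()
    return header + struct.pack("<I", count) + b"".join(facets)
```

A full sanitizer would also have to canonicalize vertex order within each facet and address ASCII STL variants; the paper's design additionally constrains the output to remain manufacturable at the required quality.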
Related papers
- Propelling Innovation to Defeat Data-Leakage Hardware Trojans: From Theory to Practice [0.0]
Many companies have gone fabless and rely on external fabrication facilities to produce chips due to the increasing cost of semiconductor manufacturing.
These external facilities may inject hardware Trojans and jeopardize the security of the system.
One common objective of hardware Trojans is to establish a side channel for data leakage.
arXiv Detail & Related papers (2024-09-30T16:51:30Z) - ResiLogic: Leveraging Composability and Diversity to Design Fault and Intrusion Resilient Chips [0.7499722271664147]
This paper addresses a threat model considering three pertinent attacks to resilience: distribution, zonal, and compound attacks.
We introduce the \texttt{ResiLogic} framework, which exploits \textit{Diversity by Composability}: constructing diverse circuits composed of smaller diverse ones by design.
Using this approach at different levels of granularity is shown to improve the resilience of circuit design in \texttt{ResiLogic} against the three considered attacks by a factor of five.
arXiv Detail & Related papers (2024-09-04T09:18:43Z) - Transferable Watermarking to Self-supervised Pre-trained Graph Encoders by Trigger Embeddings [43.067822791795095]
Graph Self-supervised Learning (GSSL) enables the pre-training of foundation graph encoders.
The easy-to-plug-in nature of such encoders makes them vulnerable to copyright infringement.
We develop a novel watermarking framework to protect graph encoders in GSSL settings.
arXiv Detail & Related papers (2024-06-19T03:16:11Z) - AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA [67.68750063537482]
Diffusion models have achieved remarkable success in generating high-quality images.
Recent works aim to let Stable Diffusion (SD) models output watermarked content for post-hoc forensics.
We propose \texttt{AquaLoRA} as the first implementation under this scenario.
arXiv Detail & Related papers (2024-05-18T01:25:47Z) - Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike the existing methods of designing a backdoor for the input/output space of diffusion models, in our method, we propose to embed the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z) - FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models [64.89896692649589]
We propose FT-Shield, a watermarking solution tailored for the fine-tuning of text-to-image diffusion models.
FT-Shield addresses copyright protection challenges by designing new watermark generation and detection strategies.
arXiv Detail & Related papers (2023-10-03T19:50:08Z) - Degeneration-Tuning: Using Scrambled Grid shield Unwanted Concepts from Stable Diffusion [106.42918868850249]
We propose a novel strategy named \textbf{Degeneration-Tuning} (DT) to shield contents of unwanted concepts from SD weights.
As this adaptation occurs at the level of the model's weights, the SD, after DT, can be grafted onto other conditional diffusion frameworks like ControlNet to shield unwanted concepts.
arXiv Detail & Related papers (2023-08-02T03:34:44Z) - Are You Copying My Model? Protecting the Copyright of Large Language Models for EaaS via Backdoor Watermark [58.60940048748815]
Companies have begun to offer Embedding as a Service (EaaS) based on large language models (LLMs).
EaaS is vulnerable to model extraction attacks, which can cause significant losses for the owners of LLMs.
We propose an Embedding Watermark method called EmbMarker that implants backdoors on embeddings.
arXiv Detail & Related papers (2023-05-17T08:28:54Z) - Channel Leakage, Information-Theoretic Limitations of Obfuscation, and Optimal Privacy Mask Design for Streaming Data [23.249999313567624]
We first introduce the notion of channel leakage as the minimum mutual information between the channel input and channel output.
In a broad sense, it can be viewed as a dual concept of channel capacity, which characterizes the maximum information transmission to the targeted receiver.
We then utilize this notion to investigate the fundamental limitations of obfuscation in terms of privacy-distortion tradeoffs.
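In symbols (notation assumed here for illustration, not taken verbatim from the paper), the duality between the two quantities can be sketched as:

```latex
% Channel capacity: the maximum information the channel can convey
% to the intended receiver, over input distributions p(x).
C = \max_{p(x)} I(X;Y)

% Channel leakage (the dual notion): the minimum mutual information
% between channel input X and output Y that remains achievable over
% the designer's admissible choices (e.g., the privacy mask).
L = \min I(X;Y)
```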
arXiv Detail & Related papers (2020-08-11T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.