Monsters in the Dark: Sanitizing Hidden Threats with Diffusion Models
- URL: http://arxiv.org/abs/2310.06951v1
- Date: Tue, 10 Oct 2023 19:15:11 GMT
- Title: Monsters in the Dark: Sanitizing Hidden Threats with Diffusion Models
- Authors: Preston K. Robinette, Daniel Moyer, Taylor T. Johnson
- Abstract summary: Steganography is the art of hiding information in plain sight.
Current image steganography defenses rely upon steganalysis, or the detection of hidden messages.
Recent work has focused on a defense mechanism known as sanitization, which eliminates hidden information from images.
- Score: 4.443677138272269
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Steganography is the art of hiding information in plain sight. This form of covert communication can be used by bad actors to propagate malware, exfiltrate victim data, and communicate with other bad actors. Current image steganography defenses rely upon steganalysis, or the detection of hidden messages. These methods, however, are non-blind as they require information about known steganography techniques and are easily bypassed. Recent work has instead focused on a defense mechanism known as sanitization, which eliminates hidden information from images. In this work, we introduce a novel blind deep learning steganography sanitization method that utilizes a diffusion model framework to sanitize universal and dependent steganography (DM-SUDS), which both sanitizes and preserves image quality. We evaluate this approach against state-of-the-art deep learning sanitization frameworks and provide further detailed analysis through an ablation study. DM-SUDS outperforms previous sanitization methods and improves image preservation MSE by 71.32%, PSNR by 22.43% and SSIM by 17.30%. This is the first blind deep learning image sanitization framework to meet these image quality results.
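The abstract does not include implementation details, so the following is a minimal sketch of the general idea behind diffusion-based sanitization: partially re-noise a suspect image along a DDPM-style forward process, then run the reverse denoising steps, which destroys low-amplitude hidden payloads while a pretrained diffusion model restores the visible content. The noise predictor `eps_model`, the schedule constants, and the re-noising depth `t_star` are illustrative assumptions, not the authors' released code; the helper at the end computes the MSE/PSNR preservation metrics the abstract reports (SSIM is available via skimage.metrics.structural_similarity).

```python
# Minimal sketch of diffusion-based sanitization, assuming a DDPM-style
# noise schedule. `eps_model` is a placeholder noise predictor (any
# pretrained epsilon-network with signature eps_model(x, t) would do).
import numpy as np
import torch


def ddpm_schedule(T=1000, beta_start=1e-4, beta_end=2e-2):
    betas = torch.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)
    return betas, alphas, alpha_bars


@torch.no_grad()
def sanitize(x0, eps_model, t_star=250, T=1000):
    """Noise x0 (B, C, H, W, values in [-1, 1]) up to t_star, then denoise back to t=0."""
    betas, alphas, alpha_bars = ddpm_schedule(T)
    # Forward process: jump directly to timestep t_star with one Gaussian draw.
    x = torch.sqrt(alpha_bars[t_star]) * x0 + torch.sqrt(1.0 - alpha_bars[t_star]) * torch.randn_like(x0)
    # Reverse process: standard DDPM ancestral sampling from t_star down to 0.
    for t in range(t_star, -1, -1):
        t_batch = torch.full((x.shape[0],), t, device=x.device, dtype=torch.long)
        eps = eps_model(x, t_batch)
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # sanitized image: hidden payload destroyed, visible content preserved


def mse_psnr(reference, sanitized, data_range=2.0):
    """Image-preservation metrics reported in the abstract (MSE and PSNR)."""
    mse = float(np.mean((reference - sanitized) ** 2))
    psnr = 10.0 * np.log10(data_range ** 2 / mse)
    return mse, psnr
```

In this framing, t_star is the key knob: larger values erase more of any embedded signal but also discard more image detail.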
Related papers
- Missing Data Estimation for MR Spectroscopic Imaging via Mask-Free Deep Learning Methods [0.0]
We propose the first deep learning-based, mask-free framework for estimating missing data in MRSI metabolic maps.
Our model generalizes well to real-world datasets without requiring retraining or mask input.
arXiv Detail & Related papers (2025-05-11T01:56:26Z) - SMILENet: Unleashing Extra-Large Capacity Image Steganography via a Synergistic Mosaic InvertibLE Hiding Network [71.11351750072936]
We propose a novel synergistic framework that achieves hiding of 25 images through three key innovations.
A network architecture coordinates reversible and non-reversible operations to efficiently exploit information redundancy in both secret and cover images.
A unified training strategy coordinates complementary modules to achieve 3.0x higher capacity than existing methods with superior visual quality.
arXiv Detail & Related papers (2025-03-07T03:31:47Z) - Semi-Truths: A Large-Scale Dataset of AI-Augmented Images for Evaluating Robustness of AI-Generated Image detectors [62.63467652611788]
We introduce SEMI-TRUTHS, featuring 27,600 real images, 223,400 masks, and 1,472,700 AI-augmented images.
Each augmented image is accompanied by metadata for standardized and targeted evaluation of detector robustness.
Our findings suggest that state-of-the-art detectors exhibit varying sensitivities to the types and degrees of perturbations, data distributions, and augmentation methods used.
arXiv Detail & Related papers (2024-11-12T01:17:27Z) - Tackling domain generalization for out-of-distribution endoscopic imaging [1.6377635288143584]
We exploit both style and content information in images to preserve robust and generalizable feature representations.
Our proposed method shows a 13.7% improvement over the baseline DeepLabv3+ and nearly an 8% improvement over recent state-of-the-art (SOTA) methods for the target (different modality) set of the EndoUDA polyp dataset.
arXiv Detail & Related papers (2024-10-18T18:45:13Z) - Natias: Neuron Attribution based Transferable Image Adversarial Steganography [62.906821876314275]
Adversarial steganography has garnered considerable attention due to its ability to effectively deceive deep-learning-based steganalysis.
We propose a novel adversarial steganographic scheme named Natias.
Our proposed method can be seamlessly integrated with existing adversarial steganography frameworks.
arXiv Detail & Related papers (2024-09-08T04:09:51Z) - Unlearnable Examples Detection via Iterative Filtering [84.59070204221366]
Deep neural networks have been shown to be vulnerable to data poisoning attacks.
Detecting poisoned samples in a mixed dataset is both beneficial and challenging.
We propose an Iterative Filtering approach for identifying unlearnable examples (UEs).
arXiv Detail & Related papers (2024-08-15T13:26:13Z) - DiNO-Diffusion. Scaling Medical Diffusion via Self-Supervised Pre-Training [0.0]
DiNO-Diffusion is a self-supervised method for training latent diffusion models (LDMs).
By eliminating the reliance on annotations, our training leverages over 868k unlabelled images from public chest X-Ray datasets.
It can be used to generate semantically-diverse synthetic datasets even from small data pools.
arXiv Detail & Related papers (2024-07-16T10:51:21Z) - Classification of Breast Cancer Histopathology Images using a Modified Supervised Contrastive Learning Method [4.303291247305105]
We improve the supervised contrastive learning method by leveraging both image-level labels and domain-specific augmentations to enhance model robustness.
We evaluate our method on the BreakHis dataset, which consists of breast cancer histopathology images.
This improvement corresponds to 93.63% absolute accuracy, highlighting the effectiveness of our approach in leveraging the properties of the data to learn a more appropriate representation space.
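As background for this entry, the snippet below is a compact PyTorch rendering of the standard supervised contrastive (SupCon) loss that such methods build on, where positives are all batch samples sharing a label; the temperature and masking details are generic assumptions rather than this paper's exact configuration.

```python
# Generic supervised contrastive (SupCon) loss: a background sketch, not this
# paper's exact objective. features: (N, D) embeddings; labels: (N,) ints.
import torch
import torch.nn.functional as F


def supcon_loss(features, labels, temperature=0.1):
    z = F.normalize(features, dim=1)                      # unit-norm embeddings
    sim = z @ z.t() / temperature                         # pairwise similarities
    n = z.shape[0]
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))       # drop self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(dim=1)
    valid = pos_counts > 0                                # anchors with at least one positive
    mean_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

The loss pulls same-class embeddings together, which is the property the entry describes combining with image-level labels and domain-specific augmentations.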
arXiv Detail & Related papers (2024-05-06T17:06:11Z) - Diffusion Facial Forgery Detection [56.69763252655695]
This paper introduces DiFF, a comprehensive dataset dedicated to face-focused diffusion-generated images.
We conduct extensive experiments on the DiFF dataset via a human test and several representative forgery detection methods.
The results demonstrate that the binary detection accuracy of both human observers and automated detectors often falls below 30%.
arXiv Detail & Related papers (2024-01-29T03:20:19Z) - Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement
Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z) - SUDS: Sanitizing Universal and Dependent Steganography [4.067706508297839]
Steganography, or hiding messages in plain sight, is a form of information hiding that is most commonly used for covert communication.
Current protection mechanisms rely upon steganalysis, but these approaches are dependent upon prior knowledge.
This work focuses on a deep learning sanitization technique called SUDS that is able to sanitize universal and dependent steganography.
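For context, SUDS (the earlier sanitization framework that DM-SUDS is named after and compared against) works by re-encoding images through a learned bottleneck; the sketch below illustrates that idea with a small variational autoencoder, where the layer sizes, the 3x32x32 input, and the latent width are illustrative assumptions rather than the published configuration.

```python
# Bottleneck-style sanitization sketch in the spirit of SUDS: re-encode a
# possibly-stego image through a learned latent bottleneck so low-amplitude
# hidden payloads do not survive reconstruction. Sizes are assumptions.
import torch
import torch.nn as nn


class Sanitizer(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(), nn.Linear(64 * 8 * 8, 2 * latent_dim),   # mean and log-variance
        )
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterization
        return self.dec(z)  # reconstruction serves as the sanitized image
```

Because the bottleneck cannot carry the low-amplitude residuals that encode a hidden payload, the reconstruction serves as the sanitized image; DM-SUDS replaces this bottleneck with the diffusion noise-and-denoise procedure sketched earlier, which the abstract reports improves image preservation (MSE by 71.32%, PSNR by 22.43%, SSIM by 17.30%).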
arXiv Detail & Related papers (2023-09-23T19:39:44Z) - Free-ATM: Exploring Unsupervised Learning on Diffusion-Generated Images
with Free Attention Masks [64.67735676127208]
Text-to-image diffusion models have shown great potential for benefiting image recognition.
Although promising, there has been inadequate exploration dedicated to unsupervised learning on diffusion-generated images.
We introduce customized solutions by fully exploiting the aforementioned free attention masks.
arXiv Detail & Related papers (2023-08-13T10:07:46Z) - Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as the guided diffusion model for purification (GDMP).
In comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations introduced by adversarial attacks to a shallow range.
arXiv Detail & Related papers (2022-05-30T10:11:15Z) - Diffusion Models for Adversarial Purification [69.1882221038846]
Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model.
We propose DiffPure that uses diffusion models for adversarial purification.
Our method achieves the state-of-the-art results, outperforming current adversarial training and adversarial purification methods.
arXiv Detail & Related papers (2022-05-16T06:03:00Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - Application of Homomorphic Encryption in Medical Imaging [60.51436886110803]
We show how HE can be used to make predictions over medical images while preventing unauthorized secondary use of data.
We report some experiments using 3D chest CT-Scans for a nodule detection task.
arXiv Detail & Related papers (2021-10-12T19:57:12Z) - Contrastive Learning with Continuous Proxy Meta-Data for 3D MRI
Classification [1.714108629548376]
We propose to leverage continuous proxy metadata in the contrastive learning framework by introducing a new loss called the y-Aware InfoNCE loss.
A 3D CNN model pre-trained on 10^4 multi-site healthy brain MRI scans can extract relevant features for three classification tasks.
When fine-tuned, it also outperforms 3D CNN trained from scratch on these tasks, as well as state-of-the-art self-supervised methods.
arXiv Detail & Related papers (2021-06-16T14:17:04Z) - Analysis of Macula on Color Fundus Images Using Heightmap Reconstruction
Through Deep Learning [5.935761705025763]
We propose a novel architecture for the generator which enhances the details and the quality of output by progressive refinement and the use of deep supervision.
The proposed method can provide additional information for ophthalmologists for diagnosis.
arXiv Detail & Related papers (2020-12-28T08:21:55Z) - Contextual Fusion For Adversarial Robustness [0.0]
Deep neural networks are usually designed to process one particular information stream and are susceptible to various types of adversarial perturbations.
We developed a fusion model using a combination of background and foreground features extracted in parallel from Places-CNN and Imagenet-CNN.
For gradient based attacks, our results show that fusion allows for significant improvements in classification without decreasing performance on unperturbed data.
arXiv Detail & Related papers (2020-11-18T20:13:23Z) - FocalMix: Semi-Supervised Learning for 3D Medical Image Detection [24.058713299186845]
We propose a novel method, called FocalMix, which is the first to leverage recent advances in semi-supervised learning (SSL) for 3D medical image detection.
Results show that our proposed SSL methods can achieve a substantial improvement of up to 17.3% over state-of-the-art supervised learning approaches with 400 unlabeled CT scans.
arXiv Detail & Related papers (2020-03-20T05:12:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.