GIFDL: Generated Image Fluctuation Distortion Learning for Enhancing Steganographic Security
- URL: http://arxiv.org/abs/2504.15139v1
- Date: Mon, 21 Apr 2025 14:43:00 GMT
- Title: GIFDL: Generated Image Fluctuation Distortion Learning for Enhancing Steganographic Security
- Authors: Xiangkun Wang, Kejiang Chen, Yuang Qi, Ruiheng Liu, Weiming Zhang, Nenghai Yu
- Abstract summary: We propose GIFDL, a steganographic distortion learning method based on the fluctuations in generated images. GIFDL exhibits superior resistance to steganalysis, increasing the detection error rates by an average of 3.30% across three steganalyzers.
- Score: 59.863152942470784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Minimum distortion steganography is currently the mainstream method for modification-based steganography. A key issue in this method is how to define steganographic distortion. With the rapid development of deep learning technology, the definition of distortion has evolved from manual design to deep learning design. Concurrently, rapid advancements in image generation have made generated images viable as cover media. However, existing distortion design methods based on machine learning do not fully leverage the advantages of generated cover media, resulting in suboptimal security performance. To address this issue, we propose GIFDL (Generated Image Fluctuation Distortion Learning), a steganographic distortion learning method based on the fluctuations in generated images. Inspired by the idea of natural steganography, we take a series of highly similar fluctuation images as the input to the steganographic distortion generator and introduce a new GAN training strategy to disguise stego images as fluctuation images. Experimental results demonstrate that GIFDL, compared with state-of-the-art GAN-based distortion learning methods, exhibits superior resistance to steganalysis, increasing the detection error rates by an average of 3.30% across three steganalyzers.
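The abstract places GIFDL in the standard minimum-distortion framework: a distortion (cost) map over the cover is converted into per-pixel modification probabilities whose total entropy matches the payload, and embedding is simulated as ±1 changes. The NumPy sketch below illustrates only that shared cost-to-embedding step under the usual ternary formulation; the function names, the binary search over the Lagrange multiplier, and the 0.4 bpp payload are illustrative assumptions, and GIFDL's fluctuation-image distortion generator and GAN training strategy are not reproduced here.

```python
# Minimal sketch of the cost-to-embedding step shared by minimum-distortion
# schemes (illustrative only; not GIFDL's released code).
import numpy as np

def embedding_probabilities(rho, payload_bits, max_iter=60):
    """Binary-search the Lagrange multiplier so that the ternary entropy of the
    +1/-1 change probabilities p_i = exp(-lam*rho_i) / (1 + 2*exp(-lam*rho_i))
    matches the target payload (in bits)."""
    eps = 1e-12
    lo, hi = 0.0, 1e3
    p = np.zeros_like(rho, dtype=float)
    for _ in range(max_iter):
        lam = 0.5 * (lo + hi)
        e = np.exp(-lam * rho)
        p = e / (1.0 + 2.0 * e)                          # P(change by +1) = P(change by -1)
        h = -2 * p * np.log2(p + eps) - (1 - 2 * p) * np.log2(1 - 2 * p + eps)
        if h.sum() > payload_bits:                       # too much capacity -> raise lambda
            lo = lam
        else:
            hi = lam
    return p

def simulate_embedding(cover, rho, payload_bpp=0.4, seed=0):
    """Simulate +-1 embedding on an 8-bit cover image using a learned cost map."""
    rng = np.random.default_rng(seed)
    p = embedding_probabilities(rho, payload_bpp * cover.size)
    u = rng.random(cover.shape)
    stego = cover.astype(np.int16)
    stego[u < p] += 1
    stego[(u >= p) & (u < 2 * p)] -= 1
    return np.clip(stego, 0, 255).astype(np.uint8)
```

Under this formulation, a learned distortion generator only changes the cost map rho; the abstract's contribution is to learn rho from a series of highly similar fluctuation images so that the resulting stego images can be disguised as those fluctuations.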
Related papers
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing with generative models pose serious risks. In this paper, we investigate how detection performance varies across model backbones, types, and datasets. We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - RIGI: Rectifying Image-to-3D Generation Inconsistency via Uncertainty-aware Learning [27.4552892119823]
Inconsistencies in multi-view snapshots frequently introduce noise and artifacts along object boundaries, undermining the 3D reconstruction process. We leverage 3D Gaussian Splatting (3DGS) for 3D reconstruction and explicitly integrate uncertainty-aware learning into the reconstruction process. We apply adaptive pixel-wise loss weighting to regularize the models, reducing reconstruction intensity in high-uncertainty regions (a generic sketch of such weighting appears after this list).
arXiv Detail & Related papers (2024-11-28T02:19:28Z) - Active Generation for Image Classification [45.93535669217115]
We propose to address the efficiency of image generation by focusing on the specific needs and characteristics of the model.
Following a central tenet of active learning, our method, named ActGen, takes a training-aware approach to image generation.
arXiv Detail & Related papers (2024-03-11T08:45:31Z) - Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis [65.7968515029306]
We propose a novel Coarse-to-Fine Latent Diffusion (CFLD) method for Pose-Guided Person Image Synthesis (PGPIS).
A perception-refined decoder is designed to progressively refine a set of learnable queries and extract semantic understanding of person images as a coarse-grained prompt.
arXiv Detail & Related papers (2024-02-28T06:07:07Z) - Forgery-aware Adaptive Transformer for Generalizable Synthetic Image Detection [106.39544368711427]
We study the problem of generalizable synthetic image detection, aiming to detect forgery images from diverse generative methods.
We present a novel forgery-aware adaptive transformer approach, namely FatFormer.
Our approach, tuned on 4-class ProGAN data, attains an average of 98% accuracy on unseen GANs and, surprisingly, generalizes to unseen diffusion models with 95% accuracy.
arXiv Detail & Related papers (2023-12-27T17:36:32Z) - Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated image detection methods detect visual artifacts in generated images or learn discriminative features from both real and generated images by massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
The method finds the commonality of real images and maps them to a dense subspace in feature space, so that generated images, regardless of their generative model, are projected outside the subspace (a toy caricature of this one-class idea appears after this list).
arXiv Detail & Related papers (2023-11-02T03:09:37Z) - PRIS: Practical robust invertible network for image steganography [10.153270845070676]
Image steganography is a technique of hiding secret information inside another image, so that the secret is not visible to human eyes.
Most existing image steganography methods have low hiding robustness when the container images are affected by distortion.
This paper proposes PRIS, based on invertible neural networks, to improve the robustness of image steganography.
arXiv Detail & Related papers (2023-09-24T12:29:13Z) - Towards Robust Image-in-Audio Deep Steganography [14.1081872409308]
This paper extends and enhances an existing image-in-audio deep steganography method by focusing on improving its robustness.
The proposed enhancements include modifications to the loss function, utilization of the Short-Time Fourier Transform (STFT), introduction of redundancy in the encoding process for error correction, and buffering of additional information in the pixel subconvolution operation (a toy illustration of the STFT and redundancy ideas appears after this list).
arXiv Detail & Related papers (2023-03-09T03:16:04Z) - Generative and Discriminative Learning for Distorted Image Restoration [22.230017059874445]
Liquify is a technique for image editing, which can be used for image distortion.
We propose a novel generative and discriminative learning method based on deep neural networks.
arXiv Detail & Related papers (2020-11-11T14:01:29Z) - A Deep Ordinal Distortion Estimation Approach for Distortion Rectification [62.72089758481803]
We propose a novel distortion rectification approach that can obtain more accurate parameters with higher efficiency.
We design a local-global associated estimation network that learns the ordinal distortion to approximate the realistic distortion distribution.
Considering the redundancy of distortion information, our approach only uses part of the distorted image for ordinal distortion estimation.
arXiv Detail & Related papers (2020-07-21T10:03:42Z)
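For the RIGI entry above, adaptive pixel-wise loss weighting can be illustrated with the common heteroscedastic form, where a predicted per-pixel log-variance down-weights reconstruction in uncertain regions; the function name `uncertainty_weighted_l2` and the log-variance map are assumptions for illustration, not RIGI's exact loss.

```python
# Generic uncertainty-aware pixel-wise loss weighting (illustrative; not RIGI's exact loss).
import torch

def uncertainty_weighted_l2(render, target, log_sigma2):
    """Pixels with large predicted variance contribute less to the reconstruction
    term, while the +log-variance term discourages inflating uncertainty everywhere."""
    precision = torch.exp(-log_sigma2)                   # 1 / sigma^2 per pixel
    per_pixel = 0.5 * precision * (render - target) ** 2 + 0.5 * log_sigma2
    return per_pixel.mean()
```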
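For the "Detecting Generated Images by Real Images Only" entry, the dense-subspace idea can be caricatured as one-class detection on features of real images; the class name, the mean-plus-radius rule, and the 0.99 quantile are illustrative assumptions, whereas the paper learns the feature mapping itself.

```python
# One-class caricature of "start from real images" detection (illustrative only).
import numpy as np

class RealOnlyDetector:
    """Fit a compact region on real-image features; anything outside is flagged."""

    def fit(self, real_features, quantile=0.99):
        self.center = real_features.mean(axis=0)
        dists = np.linalg.norm(real_features - self.center, axis=1)
        self.radius = np.quantile(dists, quantile)       # extent of the "dense subspace"
        return self

    def predict(self, features):
        dists = np.linalg.norm(features - self.center, axis=1)
        return dists > self.radius                       # True -> likely generated
```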
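For the image-in-audio entry, two of the listed ingredients, STFT-domain embedding and redundancy for error correction, can be illustrated with a hand-crafted toy rule: each secret bit is repeated, written into the parity of quantized STFT magnitudes, and recovered by majority vote. The quantization rule, step size, and function names are assumptions for illustration; the paper itself embeds with a learned encoder/decoder, and the repetition is what absorbs the small errors the STFT round trip introduces.

```python
# Toy STFT-domain embedding with bit repetition + majority-vote decoding
# (illustrative only; not the paper's learned encoder/decoder).
import numpy as np
from scipy.signal import stft, istft

def embed(audio, bits, fs=16000, repeat=3, step=0.02, n_fft=512):
    _, _, Z = stft(audio, fs=fs, nperseg=n_fft)
    mag, phase = np.abs(Z), np.angle(Z)
    coded = np.repeat(bits, repeat)                      # redundancy for error correction
    flat = mag.ravel()
    assert coded.size <= flat.size, "payload too large for this clip"
    for i, b in enumerate(coded):                        # write bit into parity of quantized magnitude
        q = int(np.round(flat[i] / step))
        if q % 2 != int(b):
            q += 1
        flat[i] = q * step
    _, stego = istft(flat.reshape(mag.shape) * np.exp(1j * phase), fs=fs, nperseg=n_fft)
    return stego

def extract(stego, n_bits, fs=16000, repeat=3, step=0.02, n_fft=512):
    _, _, Z = stft(stego, fs=fs, nperseg=n_fft)
    flat = np.abs(Z).ravel()[: n_bits * repeat]
    coded = np.round(flat / step).astype(int) % 2
    return (coded.reshape(n_bits, repeat).sum(axis=1) > repeat // 2).astype(int)  # majority vote
```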