Targeted Pooled Latent-Space Steganalysis Applied to Generative Steganography, with a Fix
- URL: http://arxiv.org/abs/2510.12414v1
- Date: Tue, 14 Oct 2025 11:46:47 GMT
- Title: Targeted Pooled Latent-Space Steganalysis Applied to Generative Steganography, with a Fix
- Authors: Etienne Levecque, Aurélien Noirault, Tomáš Pevný, Jan Butora, Patrick Bas, Rémi Cogranne
- Abstract summary: Steganographic schemes dedicated to generated images modify the seed vector in the latent space to embed a message. This paper proposes to perform steganalysis in the latent space by modeling the statistical distribution of the norm of the latent vector.
- Score: 13.484668376977604
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Steganographic schemes dedicated to generated images modify the seed vector in the latent space to embed a message, whereas most steganalysis methods attempt to detect the embedding in the image space. This paper proposes to perform steganalysis in the latent space by modeling the statistical distribution of the norm of the latent vector. Specifically, we analyze the practical security of a scheme proposed by Hu et al. for latent diffusion models, which is both robust and practically undetectable when steganalysis is performed on generated images. We show that after embedding, the Stego (latent) vector is distributed on a hypersphere, while the Cover vector is i.i.d. Gaussian. By going from the image space to the latent space, we show that the norm of the latent vector under the Cover or Stego hypothesis can be modeled as Gaussian distributions with different variances. A Likelihood Ratio Test is then derived to perform pooled steganalysis. The impact of potential knowledge of the prompt and of the number of diffusion steps is also studied. Additionally, we show how, by randomly sampling the norm of the latent vector before generation, the initial Stego scheme becomes undetectable in the latent space.
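The statistical gap the abstract exploits can be sketched numerically: for an i.i.d. standard Gaussian latent of dimension d, the norm follows a chi distribution that concentrates around sqrt(d) with variance close to 1/2, whereas a vector constrained to a hypersphere of radius sqrt(d) has a constant norm. The snippet below is a minimal illustration of this norm statistic, not a reproduction of Hu et al.'s embedding; the latent dimension and the hypersphere-projection step are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4 * 64 * 64  # hypothetical latent dimension (e.g. a latent-diffusion seed)
n = 10_000       # pooled sample size per hypothesis

# Cover latents: i.i.d. standard Gaussian, as in ordinary diffusion sampling.
cover = rng.standard_normal((n, d))
cover_norms = np.linalg.norm(cover, axis=1)

# Stego latents (illustrative): embedding constrains the vector to a
# hypersphere of radius sqrt(d), so its norm is deterministic.
stego = rng.standard_normal((n, d))
stego = stego / np.linalg.norm(stego, axis=1, keepdims=True) * np.sqrt(d)
stego_norms = np.linalg.norm(stego, axis=1)

# For large d the Cover norm is approximately Gaussian with mean near
# sqrt(d) and variance near 1/2 (chi distribution), while the Stego norm
# has essentially zero variance; a pooled likelihood ratio test on the
# norms' sample variance therefore separates the two hypotheses.
print(cover_norms.mean(), cover_norms.std())  # roughly sqrt(d) and sqrt(1/2)
print(stego_norms.std())                      # numerically zero
```

Randomizing the Stego norm before generation, as the abstract's fix proposes, removes exactly this variance gap.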
Related papers
- VAE with Hyperspherical Coordinates: Improving Anomaly Detection from Hypervolume-Compressed Latent Space [56.362776482614976]
Variational autoencoders (VAE) encode data into lower-dimensional latent vectors before decoding those vectors back to data. We propose to formulate the latent variables of a VAE using hyperspherical coordinates, which allows compressing the latent vectors towards a given direction on the hypersphere. We show that this improves both the fully unsupervised and OOD anomaly detection ability of the VAE, achieving the best performance on the datasets we considered.
arXiv Detail & Related papers (2026-01-25T03:10:24Z) - Probability Density Geodesics in Image Diffusion Latent Space [57.99700072218375]
We show that probability density geodesics can be computed in latent space. We analyze how closely video clips approximate geodesics in a pre-trained image diffusion space.
arXiv Detail & Related papers (2025-04-09T08:28:53Z) - Exploiting Diffusion Prior for Generalizable Dense Prediction [85.4563592053464]
Images produced by recent advanced Text-to-Image (T2I) diffusion models are sometimes too imaginative for existing off-the-shelf dense predictors to estimate.
We introduce DMP, a pipeline utilizing pre-trained T2I models as a prior for dense prediction tasks.
Despite limited-domain training data, the approach yields faithful estimations for arbitrary images, surpassing existing state-of-the-art algorithms.
arXiv Detail & Related papers (2023-11-30T18:59:44Z) - Discovery and Expansion of New Domains within Diffusion Models [41.25905891327446]
We study the generalization properties of diffusion models in a fewshot setup.
We introduce a novel tuning-free paradigm to synthesize the target out-of-domain data.
arXiv Detail & Related papers (2023-10-13T16:07:31Z) - Effect of latent space distribution on the segmentation of images with multiple annotations [5.054729045700466]
We propose the Generalized Probabilistic U-Net, which extends the Probabilistic U-Net by allowing more general forms of the Gaussian distribution as the latent space distribution.
We study the effect the choice of latent space distribution has on capturing the variation in the reference segmentations for lung tumors and white matter hyperintensities in the brain.
arXiv Detail & Related papers (2023-04-26T12:00:00Z) - Regularized Vector Quantization for Tokenized Image Synthesis [126.96880843754066]
Quantizing images into discrete representations has been a fundamental problem in unified generative modeling.
Deterministic quantization suffers from severe codebook collapse and misalignment with the inference stage, while stochastic quantization suffers from low codebook utilization and a perturbed reconstruction objective.
This paper presents a regularized vector quantization framework that effectively mitigates the above issues by applying regularization from two perspectives.
arXiv Detail & Related papers (2023-03-11T15:20:54Z) - An Energy-Based Prior for Generative Saliency [62.79775297611203]
We propose a novel generative saliency prediction framework that adopts an informative energy-based model as a prior distribution.
With the generative saliency model, we can obtain a pixel-wise uncertainty map from an image, indicating model confidence in the saliency prediction.
Experimental results show that our generative saliency model with an energy-based prior can achieve not only accurate saliency predictions but also reliable uncertainty maps consistent with human perception.
arXiv Detail & Related papers (2022-04-19T10:51:00Z) - Fast ABC with joint generative modelling and subset simulation [0.6445605125467573]
We propose a novel approach for solving inverse-problems with high-dimensional inputs and an expensive forward mapping.
It leverages joint deep generative modelling to transfer the original problem spaces to a lower dimensional latent space.
arXiv Detail & Related papers (2021-04-16T15:03:23Z) - CQ-VAE: Coordinate Quantized VAE for Uncertainty Estimation with Application to Disk Shape Analysis from Lumbar Spine MRI Images [1.5841288368322592]
We propose a powerful generative model to learn a representation of ambiguity and to generate probabilistic outputs.
Our model, named Coordinate Quantization Variational Autoencoder (CQ-VAE), employs a discrete latent space with an internal discrete probability distribution.
A matching algorithm is used to establish the correspondence between model-generated samples and "ground-truth" samples.
arXiv Detail & Related papers (2020-10-17T04:25:32Z) - Improving Inversion and Generation Diversity in StyleGAN using a Gaussianized Latent Space [41.20193123974535]
Modern Generative Adversarial Networks are capable of creating artificial, photorealistic images from latent vectors living in a low-dimensional learned latent space.
We show that, under a simple nonlinear operation, the data distribution can be modeled as Gaussian and therefore expressed using sufficient statistics.
The resulting projections lie in smoother and better-behaved regions of the latent space, as shown by improved performance on both real and generated images.
arXiv Detail & Related papers (2020-09-14T15:45:58Z) - Uncertainty Inspired RGB-D Saliency Detection [70.50583438784571]
We propose the first framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a generative architecture to achieve probabilistic RGB-D saliency detection.
Results on six challenging RGB-D benchmark datasets show our approach's superior performance in learning the distribution of saliency maps.
arXiv Detail & Related papers (2020-09-07T13:01:45Z) - Gaussian-Dirichlet Random Fields for Inference over High Dimensional Categorical Observations [3.383942690870476]
We propose a generative model for the distribution of high dimensional categorical observations produced by robots.
The proposed approach combines the use of Dirichlet distributions to model sparse co-occurrence relations between the observed categories.
arXiv Detail & Related papers (2020-03-26T19:29:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.