Generative Image Steganography Based on Point Cloud
- URL: http://arxiv.org/abs/2410.11673v2
- Date: Tue, 22 Oct 2024 09:10:00 GMT
- Title: Generative Image Steganography Based on Point Cloud
- Authors: Zhong Yangjie, Liu Jia, Liu Meiqi, Ke Yan, Zhang Minqing
- Abstract summary: We propose a generative image steganography scheme based on point cloud representation.
It can generate images at arbitrary resolution according to actual need, and eliminates the need for explicit cover data in image steganography.
Experiments show that the steganographic images generated by the scheme have very high image quality and that message-extraction accuracy exceeds 99%.
- Score: 2.141273115179375
- Abstract: In deep steganography, the model size is usually tied to the resolution of the underlying mesh, and a separate neural network must be trained as the message extractor. In this paper, we propose a generative image steganography scheme based on point cloud representation, which represents image data as a point cloud, learns the distribution of the point cloud data, and expresses the image in the form of a continuous function. This method removes the fixed-resolution limitation, allowing images to be generated at arbitrary resolution as needed, and eliminates the need for explicit cover data in image steganography. At the same time, using a fixed point cloud extractor shifts network training onto the point cloud data, which saves training time and avoids the risk of exposing the steganographic behavior through transmission of a message extractor. Experiments show that the steganographic images generated by the scheme have very high image quality and that message-extraction accuracy exceeds 99%.
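The paper does not publish code, but the point-cloud view of an image it describes can be sketched as follows. This is a minimal illustrative example (the function name and normalisation choices are assumptions, not the authors' implementation): each pixel becomes a 5-dimensional point (normalised x, y coordinates plus RGB), so the image is decoupled from any fixed grid resolution.

```python
import numpy as np

def image_to_point_cloud(img):
    """Represent an H x W x 3 image as a point cloud of (x, y, r, g, b) rows.

    Coordinates are normalised to [0, 1], so the cloud is resolution-free:
    a generator fitted to this continuous representation can later be
    sampled at any grid density to render an arbitrary-resolution image.
    """
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]                       # pixel row/column indices
    coords = np.stack([xs / (w - 1), ys / (h - 1)], axis=-1).reshape(-1, 2)
    colours = img.reshape(-1, 3).astype(float) / 255.0
    return np.concatenate([coords, colours], axis=1)  # shape (H*W, 5)

# Example: a 4x4 random image becomes a cloud of 16 five-dimensional points.
img = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
cloud = image_to_point_cloud(img)
print(cloud.shape)  # (16, 5)
```

A distribution learned over such points (rather than over pixel grids) is what lets the scheme generate images at whatever resolution the application requires.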
Related papers
- Image steganography based on generative implicit neural representation [2.2972561982722346]
This paper proposes an image steganography scheme based on generative implicit neural representation.
By fixing a neural network as the message extractor, we effectively redirect the training burden to the image itself.
The accuracy of message extraction attains an impressive mark of 100%.
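The fixed-extractor idea shared by this paper and the main abstract can be illustrated with a deliberately simplified toy: here a fixed random linear projection stands in for the fixed neural extractor (a strong simplification, purely illustrative), and only the image is optimised until the frozen extractor recovers the message.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a 16x16 grayscale "image" flattened to 256 values,
# carrying a 32-bit message.
n_pixels, n_bits = 256, 32

# Fixed (never trained) extractor: a random linear projection that both
# parties can regenerate from a shared seed, so it is never transmitted.
extractor = rng.standard_normal((n_bits, n_pixels))

def extract(image):
    # A bit decodes to 1 when its projection is positive, else 0.
    return (extractor @ image > 0).astype(int)

message = rng.integers(0, 2, n_bits)      # secret bits to embed
image = rng.standard_normal(n_pixels)     # initial cover signal

# Train the image, not the network: hinge-style gradient descent pushes
# each bit's projection past a safety margin of 1.
signs = 2 * message - 1                   # map {0, 1} -> {-1, +1}
for _ in range(200):
    margins = signs * (extractor @ image)
    viol = margins < 1.0                  # bits not yet safely decodable
    grad = -(extractor[viol] * signs[viol, None]).sum(axis=0)
    image -= 0.01 * grad

assert np.array_equal(extract(image), message)  # frozen extractor recovers all bits
```

The real schemes replace the linear projection with a fixed neural network and constrain the optimised signal to stay close to a natural image, but the division of labour is the same: the extractor is frozen and the training burden moves to the data being transmitted.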
arXiv Detail & Related papers (2024-06-04T03:00:47Z)
- HVDistill: Transferring Knowledge from Images to Point Clouds via Unsupervised Hybrid-View Distillation [106.09886920774002]
We present a hybrid-view-based knowledge distillation framework, termed HVDistill, to guide the feature learning of a point cloud neural network.
Our method achieves consistent improvements over the baseline trained from scratch and significantly outperforms the existing schemes.
arXiv Detail & Related papers (2024-03-18T14:18:08Z)
- Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z)
- Zero-shot spatial layout conditioning for text-to-image diffusion models [52.24744018240424]
Large-scale text-to-image diffusion models have significantly improved the state of the art in generative image modelling.
We consider image generation from text associated with segments on the image canvas, which combines an intuitive natural language interface with precise spatial control over the generated content.
We propose ZestGuide, a zero-shot segmentation guidance approach that can be plugged into pre-trained text-to-image diffusion models.
arXiv Detail & Related papers (2023-06-23T19:24:48Z)
- Transcending Grids: Point Clouds and Surface Representations Powering Neurological Processing [13.124650851374316]
In healthcare, accurately classifying medical images is vital, but conventional methods often hinge on medical data with a consistent grid structure.
Recent medical research has been focused on tweaking the architectures to attain better performance without giving due consideration to the representation of data.
We present a novel approach for transforming grid based data into its higher dimensional representations, leveraging unstructured point cloud data structures.
arXiv Detail & Related papers (2023-05-17T19:34:44Z)
- Discriminative Class Tokens for Text-to-Image Diffusion Models [107.98436819341592]
We propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text.
Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images.
We evaluate our method extensively, showing that the generated images (i) are more accurate and of higher quality than those of standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier.
arXiv Detail & Related papers (2023-03-30T05:25:20Z)
- Image Steganography based on Style Transfer [12.756859984638961]
We propose an image steganography network based on style transfer.
We embed secret information while transforming the content image style.
In latent space, the secret information is integrated into the latent representation of the cover image to generate the stego images.
arXiv Detail & Related papers (2022-03-09T02:58:29Z)
- Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z)
- Using GANs to Augment Data for Cloud Image Segmentation Task [2.294014185517203]
We show the effectiveness of using Generative Adversarial Networks (GANs) to generate data to augment the training set.
We also present a way to estimate ground-truth binary maps for the GAN-generated images to facilitate their effective use as augmented images.
arXiv Detail & Related papers (2021-06-06T09:01:43Z)
- Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z)
- Single Image Cloud Detection via Multi-Image Fusion [23.641624507709274]
A primary challenge in developing algorithms is the cost of collecting annotated training data.
We demonstrate how recent advances in multi-image fusion can be leveraged to bootstrap single image cloud detection.
We collect a large dataset of Sentinel-2 images along with a per-pixel semantic labelling for land cover.
arXiv Detail & Related papers (2020-07-29T22:52:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.