Generative Image Steganography Based on Point Cloud
- URL: http://arxiv.org/abs/2410.11673v2
- Date: Tue, 22 Oct 2024 09:10:00 GMT
- Title: Generative Image Steganography Based on Point Cloud
- Authors: Zhong Yangjie, Liu Jia, Liu Meiqi, Ke Yan, Zhang Minqing
- Abstract summary: We propose a generative image steganography scheme based on point cloud representation.
It can generate images at arbitrary resolution as needed, and omits the need for explicit cover data in image steganography.
Experiments show that the steganographic images generated by the scheme have very high image quality and that message-extraction accuracy exceeds 99%.
- Score: 2.141273115179375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In deep steganography, the model size is usually tied to the resolution of the underlying mesh, and a separate neural network must be trained as a message extractor. In this paper, we propose a generative image steganography scheme based on point cloud representation: image data are represented as a point cloud, the distribution of the point cloud data is learned, and the image is expressed in the form of a continuous function. This removes the constraint of a fixed image resolution, allows images of arbitrary resolution to be generated as needed, and omits the need for explicit cover data. At the same time, using a fixed point cloud extractor transfers training of the network to the point cloud data, which saves training time and avoids the risk of exposing the steganographic behavior through transmission of a message extractor. Experiments show that the steganographic images generated by the scheme have very high image quality and that message-extraction accuracy exceeds 99%.
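The central idea, representing an image as a set of coordinate/color points, can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the (x, y, r, g, b) encoding and the function names are assumptions, and the paper additionally learns a continuous function over the point cloud rather than rasterizing back by nearest pixel.

```python
import numpy as np

def image_to_point_cloud(img):
    """Flatten an H x W x 3 image into an N x 5 point cloud:
    each point is (x, y, r, g, b) with coordinates scaled to [0, 1]."""
    h, w, _ = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs / (w - 1), ys / (h - 1)], axis=-1)
    return np.concatenate([coords, img], axis=-1).reshape(-1, 5)

def point_cloud_to_image(points, h, w):
    """Rasterize the point cloud back onto an H x W grid
    by nearest-pixel assignment."""
    img = np.zeros((h, w, 3))
    xs = np.clip(np.round(points[:, 0] * (w - 1)).astype(int), 0, w - 1)
    ys = np.clip(np.round(points[:, 1] * (h - 1)).astype(int), 0, h - 1)
    img[ys, xs] = points[:, 2:]
    return img

img = np.random.rand(4, 4, 3)
pc = image_to_point_cloud(img)
recon = point_cloud_to_image(pc, 4, 4)
assert np.allclose(img, recon)
```

Because the learned representation is a continuous function over (x, y), it can in principle be sampled on a grid of any size, which is what frees the scheme from a fixed resolution.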
Related papers
- Image steganography based on generative implicit neural representation [2.2972561982722346]
This paper proposes an image steganography based on generative implicit neural representation.
By fixing a neural network as the message extractor, we effectively redirect the training burden to the image itself.
The accuracy of message extraction attains an impressive mark of 100%.
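The fixed-extractor idea above can be illustrated with a deliberately simplified stand-in: freeze a random linear map as the extractor and adjust only the image so the extracted bits match. The paper optimizes the image (or its implicit representation) iteratively; here a linear extractor lets the sketch use a closed-form least-squares solve instead. All names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, never-trained extractor: a random linear map followed by a sign test.
n_pixels, n_bits = 64, 16
W = rng.standard_normal((n_bits, n_pixels))

def extract(x):
    """Message extractor: thresholds the fixed linear projection."""
    return (W @ x > 0).astype(int)

# Secret message, mapped to {-1, +1} targets.
bits = rng.integers(0, 2, n_bits)
target = 2 * bits - 1

# "Train the image, not the network": find pixel values x whose projection
# matches the targets. With a linear extractor this underdetermined system
# has an exact minimum-norm solution.
x, *_ = np.linalg.lstsq(W, target.astype(float), rcond=None)

assert (extract(x) == bits).all()
```

Since the extractor is fixed and never transmitted, there is no extractor network whose exchange could expose the steganographic behavior.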
arXiv Detail & Related papers (2024-06-04T03:00:47Z) - HVDistill: Transferring Knowledge from Images to Point Clouds via Unsupervised Hybrid-View Distillation [106.09886920774002]
We present a hybrid-view-based knowledge distillation framework, termed HVDistill, to guide the feature learning of a point cloud neural network.
Our method achieves consistent improvements over the baseline trained from scratch and significantly outperforms the existing schemes.
arXiv Detail & Related papers (2024-03-18T14:18:08Z) - Artifact Feature Purification for Cross-domain Detection of AI-generated Images [38.18870936370117]
Existing generated image detection methods suffer from performance drop when faced with out-of-domain generators and image scenes.
We propose Artifact Purification Network (APN) to facilitate the artifact extraction from generated images through the explicit and implicit purification processes.
For cross-generator detection, the average accuracy of APN is 5.6% to 16.4% higher than that of the previous 10 methods on the GenImage dataset and 1.7% to 50.1% higher on the DiffusionForensics dataset.
arXiv Detail & Related papers (2024-03-17T11:17:06Z) - Learned representation-guided diffusion models for large-image generation [58.192263311786824]
We introduce a novel approach that trains diffusion models conditioned on embeddings from self-supervised learning (SSL).
Our diffusion models successfully project these features back to high-quality histopathology and remote sensing images.
Augmenting real data by generating variations of real images improves downstream accuracy for patch-level and larger, image-scale classification tasks.
arXiv Detail & Related papers (2023-12-12T14:45:45Z) - Zero-shot spatial layout conditioning for text-to-image diffusion models [52.24744018240424]
Large-scale text-to-image diffusion models have significantly improved the state of the art in generative image modelling.
We consider image generation from text associated with segments on the image canvas, which combines an intuitive natural language interface with precise spatial control over the generated content.
We propose ZestGuide, a zero-shot segmentation guidance approach that can be plugged into pre-trained text-to-image diffusion models.
arXiv Detail & Related papers (2023-06-23T19:24:48Z) - Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) and Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z) - Transcending Grids: Point Clouds and Surface Representations Powering Neurological Processing [13.124650851374316]
In healthcare, accurately classifying medical images is vital, but conventional methods often hinge on medical data with a consistent grid structure.
Recent medical research has been focused on tweaking the architectures to attain better performance without giving due consideration to the representation of data.
We present a novel approach for transforming grid-based data into higher-dimensional representations, leveraging unstructured point cloud data structures.
arXiv Detail & Related papers (2023-05-17T19:34:44Z) - Generative Steganographic Flow [39.64952038237487]
Generative steganography (GS) is a new data hiding manner, featuring direct generation of stego media from secret data.
Existing GS methods are generally criticized for their poor performances.
We propose a novel flow-based GS approach, Generative Steganographic Flow (GSF).
arXiv Detail & Related papers (2023-05-10T02:02:20Z) - Generative Steganography Diffusion [42.60159212701425]
Generative steganography (GS) is an emerging technique that generates stego images directly from secret data.
Existing GS methods cannot completely recover the hidden secret data due to the lack of network invertibility.
We propose a novel scheme called "Generative Steganography Diffusion" (GSD) by devising an invertible diffusion model named "StegoDiffusion".
arXiv Detail & Related papers (2023-05-05T12:29:22Z) - LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging, since the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the low-dimensional representation variance for the autoencoder training and to achieve high diversity of samples generated with the GAN.
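The abstract does not spell out the exact regularizer, but a variance-matching penalty on the autoencoder's latent codes is one plausible reading of "variance regularization" and can be sketched as follows; the target variance and the squared-error form are assumptions, not the paper's formula.

```python
import numpy as np

def variance_regularizer(latents, target_var=1.0):
    """Penalize deviation of the per-dimension latent variance from a
    target, discouraging collapsed (low-spread) representations."""
    var = latents.var(axis=0)
    return float(((var - target_var) ** 2).mean())

rng = np.random.default_rng(0)
# Collapsed latents (all codes near one point) are penalized heavily...
collapsed = np.full((128, 8), 0.5) + 1e-3 * rng.standard_normal((128, 8))
# ...while unit-variance latents incur almost no penalty.
spread = rng.standard_normal((100000, 8))

assert variance_regularizer(collapsed) > variance_regularizer(spread)
```

During training, a term like this would be added to the autoencoder's reconstruction loss so that the GAN later samples from a well-spread low-dimensional distribution.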
arXiv Detail & Related papers (2023-04-29T00:25:02Z) - Discriminative Class Tokens for Text-to-Image Diffusion Models [107.98436819341592]
We propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text.
Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images.
We evaluate our method extensively, showing that the generated images: (i) are more accurate and of higher quality than those of standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier.
arXiv Detail & Related papers (2023-03-30T05:25:20Z) - DiP-GNN: Discriminative Pre-Training of Graph Neural Networks [49.19824331568713]
Graph neural network (GNN) pre-training methods have been proposed to enhance the power of GNNs.
One popular pre-training method is to mask out a proportion of the edges, and a GNN is trained to recover them.
In our framework, the graph seen by the discriminator better matches the original graph because the generator can recover a proportion of the masked edges.
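The edge-masking setup described above can be sketched as a data-preparation step: hold out a random fraction of edges for the generator to recover, after which a discriminator judges which edges in the reconstructed graph are genuine. The mask ratio and function name are illustrative; the GNN training itself is omitted.

```python
import numpy as np

def mask_edges(edges, mask_ratio=0.3, seed=0):
    """Split an edge list into visible and masked subsets.
    The generator sees only the visible edges and is trained to
    recover the masked ones."""
    rng = np.random.default_rng(seed)
    edges = np.asarray(edges)
    n_mask = int(len(edges) * mask_ratio)
    perm = rng.permutation(len(edges))
    return edges[perm[n_mask:]], edges[perm[:n_mask]]

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2),
         (1, 3), (2, 0), (3, 1), (0, 3), (1, 0)]
visible, masked = mask_edges(edges)
assert len(masked) == 3 and len(visible) == 7
```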
arXiv Detail & Related papers (2022-09-15T17:41:50Z) - Generative Steganography Network [37.182458848616754]
We propose an advanced generative steganography network (GSN) that can generate realistic stego images without using cover images.
A module named secret block is designed delicately to conceal secret data in the feature maps during image generation.
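As a toy stand-in for the "secret block," one can imagine modulating the signs of a few feature-map entries so the message is recoverable without access to the cover features. This is a hypothetical sketch of the general idea only; the GSN module's actual design is not detailed in the abstract, and every name here is illustrative.

```python
import numpy as np

def conceal(features, bits):
    """Toy 'secret block': force the signs of the first len(bits) entries
    of channel 0 to carry the message (bit 1 -> positive, bit 0 -> negative)."""
    signs = 2 * np.asarray(bits) - 1
    out = features.copy()
    out[0, :len(bits)] = np.abs(out[0, :len(bits)]) * signs
    return out

def reveal(stego, n_bits):
    """Blind extraction: read the signs back; no cover features needed."""
    return (stego[0, :n_bits] > 0).astype(int)

rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 16))   # 4 channels x 16 spatial positions
bits = rng.integers(0, 2, 8)
stego = conceal(feats, bits)
assert (reveal(stego, 8) == bits).all()
```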
arXiv Detail & Related papers (2022-07-28T03:34:37Z) - Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures with a limited number of samples that generalizes well to a wider audience.
We appeal to a set of more elementary methods, such as the use of random bounds on a signal, and aim to show the power these methods can carry in an online setting.
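The random-variance Gaussian-noise augmentation named in the title can be sketched directly: each synthetic copy of an EMG window receives noise whose standard deviation is itself drawn at random, so the augmented set covers a range of noise levels. The noise range and copy count below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def augment(signal, n_copies=5, max_sigma=0.05, seed=0):
    """Generate noisy copies of a signal window: each copy adds Gaussian
    noise whose standard deviation is drawn uniformly at random."""
    rng = np.random.default_rng(seed)
    copies = []
    for _ in range(n_copies):
        sigma = rng.uniform(0, max_sigma)
        copies.append(signal + rng.normal(0, sigma, size=signal.shape))
    return np.stack(copies)

signal = np.sin(np.linspace(0, 2 * np.pi, 200))  # stand-in for an EMG window
aug = augment(signal)
assert aug.shape == (5, 200)
```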
arXiv Detail & Related papers (2022-06-29T23:22:18Z) - Image Steganography based on Style Transfer [12.756859984638961]
We propose an image steganography network based on style transfer.
We embed secret information while transforming the content image style.
In latent space, the secret information is integrated into the latent representation of the cover image to generate the stego images.
arXiv Detail & Related papers (2022-03-09T02:58:29Z) - Towards Generating Real-World Time Series Data [52.51620668470388]
We propose a novel generative framework for time series data generation - RTSGAN.
RTSGAN learns an encoder-decoder module which provides a mapping between a time series instance and a fixed-dimension latent vector.
To generate time series with missing values, we further equip RTSGAN with an observation embedding layer and a decide-and-generate decoder.
arXiv Detail & Related papers (2021-11-16T11:31:37Z) - Unsupervised and Distributional Detection of Machine-Generated Text [1.552214657968262]
The power of natural language generation models has provoked a flurry of interest in automatic methods to detect if a piece of text is human or machine-authored.
We propose a method to detect those machine-generated documents leveraging repeated higher-order n-grams.
Our experiments show that leveraging that signal allows us to rank suspicious documents accurately.
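A minimal version of the repeated higher-order n-gram signal can be computed as follows; the paper's actual statistic and thresholds are not specified in the abstract, so the rate below is an illustrative proxy.

```python
from collections import Counter

def repeated_ngram_rate(text, n=5):
    """Fraction of word n-grams that occur more than once in a document.
    Machine-generated text tends to repeat long n-grams more often than
    human-written text of comparable length."""
    words = text.split()
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    if not grams:
        return 0.0
    counts = Counter(grams)
    repeated = sum(c for c in counts.values() if c > 1)
    return repeated / len(grams)

looped = "the model said the same thing again and " * 20
varied = " ".join(str(i) for i in range(200))
assert repeated_ngram_rate(looped) > repeated_ngram_rate(varied)
```

Ranking documents by such a rate is one way a fully unsupervised detector could flag suspicious machine-generated text.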
arXiv Detail & Related papers (2021-11-04T14:07:46Z) - Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z) - Using GANs to Augment Data for Cloud Image Segmentation Task [2.294014185517203]
We show the effectiveness of using Generative Adversarial Networks (GANs) to generate data to augment the training set.
We also present a way to estimate ground-truth binary maps for the GAN-generated images to facilitate their effective use as augmented images.
arXiv Detail & Related papers (2021-06-06T09:01:43Z) - Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation [53.95297550117153]
We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at only a fraction of their entries.
The proposed approach is particularly useful for large-scale multidimensional grid data, and for tasks that require context over a large receptive field.
arXiv Detail & Related papers (2021-05-29T08:39:57Z) - Data Augmentation for Object Detection via Differentiable Neural Rendering [71.00447761415388]
It is challenging to train a robust object detector when annotated data is scarce.
Existing approaches to tackle this problem include semi-supervised learning that interpolates labeled data from unlabeled data.
We introduce an offline data augmentation method for object detection, which semantically interpolates the training data with novel views.
arXiv Detail & Related papers (2021-03-04T06:31:06Z) - Single Image Cloud Detection via Multi-Image Fusion [23.641624507709274]
A primary challenge in developing algorithms is the cost of collecting annotated training data.
We demonstrate how recent advances in multi-image fusion can be leveraged to bootstrap single image cloud detection.
We collect a large dataset of Sentinel-2 images along with a per-pixel semantic labelling for land cover.
arXiv Detail & Related papers (2020-07-29T22:52:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.