Image steganography based on generative implicit neural representation
- URL: http://arxiv.org/abs/2406.01918v1
- Date: Tue, 4 Jun 2024 03:00:47 GMT
- Title: Image steganography based on generative implicit neural representation
- Authors: Zhong Yangjie, Liu Jia, Ke Yan, Liu Meiqi
- Abstract summary: This paper proposes an image steganography scheme based on generative implicit neural representation.
By fixing a neural network as the message extractor, we effectively redirect the training burden to the image itself.
The accuracy of message extraction attains an impressive mark of 100%.
- Score: 2.2972561982722346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In advanced steganography, the scale of the model typically correlates directly with the resolution of the underlying grid, necessitating the training of a distinct neural network for message extraction. This paper proposes an image steganography scheme based on generative implicit neural representation. The approach transcends the constraints of image resolution by representing data as continuous functional expressions. Notably, this method permits a diverse array of multimedia data to serve as cover images, broadening the spectrum of potential carriers. Additionally, by fixing a neural network as the message extractor, we effectively redirect the training burden to the image itself, reducing computational overhead and increasing steganographic speed. This approach also circumvents potential transmission challenges associated with the message extractor. Experimental findings show that the methodology achieves commendable optimization efficiency, with a completion time of just 3 seconds for 64x64 images while concealing 1 bpp of information. Furthermore, the accuracy of message extraction reaches 100%.
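To make the core mechanism concrete, the following is a minimal PyTorch-style sketch of the idea described in the abstract: the message extractor is a fixed, randomly initialized network shared by sender and receiver, and only the implicit neural representation of the image (a coordinate MLP) is optimized so that its rendering both reproduces the cover image and yields the hidden bits when passed through the frozen extractor. The architecture, loss weighting, and hyperparameters below are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch: steganography with a fixed extractor and a trainable implicit neural representation.
# Assumptions (not from the paper): SIREN-style INR, a small MLP extractor, MSE + BCE losses.
import torch
import torch.nn as nn


class SirenINR(nn.Module):
    """Coordinate MLP with sine activations: maps (x, y) coordinates to RGB values."""

    def __init__(self, hidden=128, layers=3, w0=30.0):
        super().__init__()
        dims = [2] + [hidden] * layers + [3]
        self.linears = nn.ModuleList(nn.Linear(a, b) for a, b in zip(dims[:-1], dims[1:]))
        self.w0 = w0

    def forward(self, coords):
        h = coords
        for lin in self.linears[:-1]:
            h = torch.sin(self.w0 * lin(h))
        return torch.sigmoid(self.linears[-1](h))  # pixel values in [0, 1]


def fixed_extractor(num_bits, img_size=64):
    """Frozen, randomly initialized network mapping a flattened image to message logits."""
    net = nn.Sequential(nn.Flatten(),
                        nn.Linear(3 * img_size * img_size, 256),
                        nn.ReLU(),
                        nn.Linear(256, num_bits))
    for p in net.parameters():
        p.requires_grad_(False)  # never trained; sender and receiver rebuild it from a shared seed
    return net


def hide(cover, bits, steps=2000, lam=1.0):
    """Fit the INR so its rendering matches `cover` and encodes `bits` under the fixed extractor."""
    h, w = cover.shape[-2:]
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)

    inr = SirenINR()
    extractor = fixed_extractor(num_bits=bits.numel(), img_size=w)
    opt = torch.optim.Adam(inr.parameters(), lr=1e-4)

    for _ in range(steps):
        stego = inr(coords).reshape(1, h, w, 3).permute(0, 3, 1, 2)
        rec_loss = nn.functional.mse_loss(stego, cover)           # stay close to the cover image
        msg_loss = nn.functional.binary_cross_entropy_with_logits(
            extractor(stego).squeeze(0), bits.float())            # make the bits recoverable
        (rec_loss + lam * msg_loss).backward()
        opt.step()
        opt.zero_grad()
    return inr


# Receiver side: rebuild the same frozen extractor (e.g. from the shared seed) and
# threshold its logits on the received stego image to read out the bits:
#   message = (extractor(stego) > 0).int()
```

At 1 bpp on a 64x64 image, `bits` would hold 64*64 = 4096 values. Because the extractor is never trained, nothing extractor-specific needs to be transmitted beyond a shared seed, which is how the scheme sidesteps the extractor-transmission issue and shifts the training burden onto the image representation itself, as the abstract describes.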
Related papers
- Generative Image Steganography Based on Point Cloud [2.141273115179375]
We propose a generative image steganography scheme based on point cloud representation.
It can generate images at arbitrary resolutions according to actual need and omits the need for explicit data for image steganography.
Experiments show that the steganographic images generated by the scheme have very high image quality and that message extraction accuracy exceeds 99%.
arXiv Detail & Related papers (2024-10-15T15:06:13Z) - UnSegGNet: Unsupervised Image Segmentation using Graph Neural Networks [9.268228808049951]
This research contributes to the broader field of unsupervised medical imaging and computer vision.
It presents an innovative methodology for image segmentation that aligns with real-world challenges.
The proposed method holds promise for diverse applications, including medical imaging, remote sensing, and object recognition.
arXiv Detail & Related papers (2024-05-09T19:02:00Z) - Coarse-to-Fine Latent Diffusion for Pose-Guided Person Image Synthesis [65.7968515029306]
We propose a novel Coarse-to-Fine Latent Diffusion (CFLD) method for Pose-Guided Person Image Synthesis (PGPIS).
A perception-refined decoder is designed to progressively refine a set of learnable queries and extract semantic understanding of person images as a coarse-grained prompt.
arXiv Detail & Related papers (2024-02-28T06:07:07Z) - Source Identification: A Self-Supervision Task for Dense Prediction [8.744460886823322]
We propose a new self-supervision task called source identification (SI).
Synthetic images are generated by fusing multiple source images and the network's task is to reconstruct the original images, given the fused images.
We validate our method on two medical image segmentation tasks: brain tumor segmentation and white matter hyperintensities segmentation.
arXiv Detail & Related papers (2023-07-05T12:27:58Z) - Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - Sharp-GAN: Sharpness Loss Regularized GAN for Histopathology Image Synthesis [65.47507533905188]
Conditional generative adversarial networks have been applied to generate synthetic histopathology images.
We propose a sharpness loss regularized generative adversarial network to synthesize realistic histopathology images.
arXiv Detail & Related papers (2021-10-27T18:54:25Z) - Research on facial expression recognition based on Multimodal data fusion and neural network [2.5431493111705943]
The algorithm is based on multimodal data, taking as input the facial image, its histogram of oriented gradients, and the facial landmarks.
Experimental results show that, benefiting from the complementarity of the multimodal data, the algorithm achieves substantial improvements in accuracy, robustness, and detection speed.
arXiv Detail & Related papers (2021-09-26T23:45:40Z) - Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z) - Self-Loop Uncertainty: A Novel Pseudo-Label for Semi-Supervised Medical Image Segmentation [30.644905857223474]
We propose a semi-supervised approach to train neural networks with limited labeled data and a large quantity of unlabeled images for medical image segmentation.
A novel pseudo-label (namely self-loop uncertainty) is adopted as the ground-truth for the unlabeled images to augment the training set and boost the segmentation accuracy.
arXiv Detail & Related papers (2020-07-20T02:52:07Z) - Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel method that jointly learns facial expression synthesis and recognition for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.