Securing Fixed Neural Network Steganography
- URL: http://arxiv.org/abs/2309.09700v1
- Date: Mon, 18 Sep 2023 12:07:37 GMT
- Title: Securing Fixed Neural Network Steganography
- Authors: Zicong Luo, Sheng Li, Guobiao Li, Zhenxing Qian and Xinpeng Zhang
- Abstract summary: Image steganography is the art of concealing secret information in images in a way that is imperceptible to unauthorized parties.
Recent advances show that it is possible to use a fixed neural network (FNN) for secret embedding and extraction.
We propose a key-based FNNS scheme to improve the security of the FNNS.
- Score: 37.08937194546323
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image steganography is the art of concealing secret information in images in
a way that is imperceptible to unauthorized parties. Recent advances show that
it is possible to use a fixed neural network (FNN) for secret embedding and
extraction. Such fixed neural network steganography (FNNS) achieves high
steganographic performance without training the networks, which could be more
useful in real-world applications. However, the existing FNNS schemes are
vulnerable in the sense that anyone can extract the secret from the
stego-image. To deal with this issue, we propose a key-based FNNS scheme to
improve the security of the FNNS, where we generate key-controlled
perturbations from the FNN for data embedding. As such, only the receiver who
possesses the key is able to correctly extract the secret from the stego-image
using the FNN. In order to improve the visual quality and undetectability of
the stego-image, we further propose an adaptive perturbation optimization
strategy by taking the perturbation cost into account. Experimental results
show that our proposed scheme is capable of preventing unauthorized secret
extraction from the stego-images. Furthermore, our scheme is able to generate
stego-images with higher visual quality than the state-of-the-art FNNS scheme,
especially when the FNN is a neural network for ordinary learning tasks.
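The embedding described above can be pictured as adversarial-style optimization against the frozen network. Below is a minimal sketch, assuming a pretrained frozen decoder, a key-seeded scrambling of the secret bits, and a hypothetical texture-based cost map standing in for the adaptive perturbation optimization; the paper's exact key mechanism and cost definition may differ.

```python
# Minimal sketch of key-controlled perturbation optimization against a frozen
# network (FNNS-style embedding). The key-seeded scrambling and the texture-
# based cost map are illustrative assumptions, not the paper's exact design.
import torch
import torch.nn.functional as F


def key_scramble(bits: torch.Tensor, key: int) -> torch.Tensor:
    """Permute and flip the secret bits with a key-seeded pseudo-random pattern."""
    g = torch.Generator().manual_seed(key)
    perm = torch.randperm(bits.numel(), generator=g)
    keystream = torch.randint(0, 2, (bits.numel(),), generator=g).float()
    return (bits.flatten()[perm] + keystream) % 2       # permute, then XOR


def local_variance(img: torch.Tensor, k: int = 7) -> torch.Tensor:
    """Per-pixel variance over a k x k window, used as a rough texture measure."""
    mean = F.avg_pool2d(img, k, stride=1, padding=k // 2)
    sq_mean = F.avg_pool2d(img * img, k, stride=1, padding=k // 2)
    return (sq_mean - mean * mean).clamp_min(0).mean(dim=1, keepdim=True)


def embed(decoder, cover, secret_bits, key, steps=1000, lr=0.01, eps=0.03):
    """Optimize an additive perturbation so the frozen decoder recovers the
    key-scrambled secret from cover + delta. `cover` is (B, C, H, W) in [0, 1]."""
    target = key_scramble(secret_bits, key)
    delta = torch.zeros_like(cover, requires_grad=True)
    cost = 1.0 / (1e-3 + local_variance(cover))          # hypothetical cost map:
    opt = torch.optim.Adam([delta], lr=lr)               # smooth regions cost more
    for _ in range(steps):
        stego = (cover + delta).clamp(0, 1)
        logits = decoder(stego).flatten()[: target.numel()]
        msg_loss = F.binary_cross_entropy_with_logits(logits, target)
        vis_loss = (cost * delta.pow(2)).mean()           # adaptive visual-quality term
        loss = msg_loss + 0.1 * vis_loss
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)                       # keep the perturbation small
    return (cover + delta.detach()).clamp(0, 1)
```

In this sketch, only a receiver who applies the same key-seeded descrambling to the decoder output recovers the secret; a wrong key yields effectively random bits.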
Related papers
- Cover-separable Fixed Neural Network Steganography via Deep Generative Models [37.08937194546323]
We propose a Cover-separable Fixed Neural Network Steganography scheme, namely Cs-FNNS.
In Cs-FNNS, we propose a Steganographic Perturbation Search (SPS) algorithm to directly encode the secret data into an imperceptible perturbation.
We demonstrate the superior performance of the proposed method in terms of visual quality and undetectability.
arXiv Detail & Related papers (2024-07-16T05:47:06Z)
- When Spiking neural networks meet temporal attention image decoding and adaptive spiking neuron [7.478056407323783]
Spiking Neural Networks (SNNs) are capable of encoding and processing temporal information in a biologically plausible way.
We propose a novel method for image decoding based on temporal attention (TAID) and an adaptive Leaky-Integrate-and-Fire neuron model.
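For reference, a generic adaptive leaky-integrate-and-fire neuron can be written in a few lines; the decay constants and the threshold-adaptation rule below are illustrative and not necessarily the formulation used in this paper.

```python
# A generic adaptive leaky-integrate-and-fire (LIF) neuron: the firing
# threshold rises after each spike and decays back toward its resting value,
# one common way to make the neuron "adaptive". Constants are illustrative.
import torch


def adaptive_lif(inputs, tau_mem=0.9, tau_thr=0.95, v_th0=1.0, thr_jump=0.5):
    """inputs: (timesteps, batch, features) input currents. Returns spikes of the same shape."""
    v = torch.zeros_like(inputs[0])          # membrane potential
    thr = torch.full_like(inputs[0], v_th0)  # adaptive threshold
    spikes = []
    for x_t in inputs:
        v = tau_mem * v + x_t                # leaky integration
        s = (v >= thr).float()               # fire where the threshold is crossed
        v = v * (1.0 - s)                    # hard reset of fired neurons
        thr = tau_thr * (thr - v_th0) + v_th0 + thr_jump * s  # threshold adaptation
        spikes.append(s)
    return torch.stack(spikes)
```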
arXiv Detail & Related papers (2024-06-05T08:21:55Z)
- Defending Spiking Neural Networks against Adversarial Attacks through Image Purification [20.492531851480784]
Spiking Neural Networks (SNNs) aim to bridge the gap between neuroscience and machine learning.
Like convolutional neural networks, SNNs are vulnerable to adversarial attacks.
We propose a biologically inspired methodology to enhance the robustness of SNNs.
arXiv Detail & Related papers (2024-04-26T00:57:06Z)
- TBSN: Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising [94.09442506816724]
Blind-spot networks (BSN) have been prevalent network architectures in self-supervised image denoising (SSID).
We present a transformer-based blind-spot network (TBSN) by analyzing and redesigning the transformer operators that meet the blind-spot requirement.
For spatial self-attention, an elaborate mask is applied to the attention matrix to restrict its receptive field, thus mimicking the dilated convolution.
For channel self-attention, we observe that it may leak blind-spot information when the channel number is greater than the spatial size in the deep layers of multi-scale architectures.
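The masked spatial self-attention can be sketched as follows, assuming a flattened local window of pixels: the mask lets each pixel attend only to a dilated set of neighbors and never to itself. The actual TBSN mask, windowing, and dilation scheme are more elaborate than this illustration.

```python
# Minimal sketch of blind-spot spatial self-attention: the attention matrix is
# masked so a pixel attends only to dilated-offset neighbors and never to its
# own position, mimicking a dilated convolution's restricted receptive field.
# (Illustrative only; TBSN's actual operators are more elaborate.)
import torch
import torch.nn.functional as F


def dilated_blind_spot_mask(h: int, w: int, dilation: int = 2) -> torch.Tensor:
    """Boolean (h*w, h*w) mask; True means "blocked". Position i may attend to j
    only if both offsets are multiples of `dilation` and j != i. Use a window
    larger than the dilation so every row keeps at least one allowed entry."""
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    ys, xs = ys.flatten(), xs.flatten()
    dy = (ys[:, None] - ys[None, :]).abs()
    dx = (xs[:, None] - xs[None, :]).abs()
    allowed = (dy % dilation == 0) & (dx % dilation == 0)
    allowed &= ~((dy == 0) & (dx == 0))                   # never attend to itself
    return ~allowed


def masked_spatial_attention(q, k, v, mask):
    """q, k, v: (batch, h*w, dim); mask: (h*w, h*w) boolean, True = blocked."""
    attn = (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5
    attn = attn.masked_fill(mask, float("-inf"))          # enforce the blind spot
    return F.softmax(attn, dim=-1) @ v
```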
arXiv Detail & Related papers (2024-04-11T15:39:10Z)
- Steganography of Steganographic Networks [23.85364443400414]
Steganography is a technique for covert communication between two parties.
We propose a novel scheme for steganography of steganographic networks in this paper.
arXiv Detail & Related papers (2023-02-28T12:27:34Z)
- Hiding Images in Deep Probabilistic Models [58.23127414572098]
We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security.
arXiv Detail & Related papers (2022-10-05T13:33:25Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective [78.05383266222285]
A human-imperceptible perturbation can be generated to fool a deep neural network (DNN) for most images.
A similar phenomenon has been observed in the deep steganography task, where a decoder network can retrieve a secret image back from a slightly perturbed cover image.
We propose two new variants of universal perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is less visible to the human eye.
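The high-pass variant can be approximated by projecting a universal perturbation onto its high spatial frequencies; the cutoff below and the way the perturbation itself is optimized are illustrative assumptions rather than the paper's exact procedure.

```python
# Minimal sketch of high-pass filtering a universal perturbation in the
# Fourier domain: zero out low spatial frequencies so the remaining
# perturbation is less visible to the human eye. The cutoff value and the
# optimization of the perturbation itself are illustrative assumptions.
import torch


def high_pass(perturbation: torch.Tensor, cutoff: float = 0.1) -> torch.Tensor:
    """perturbation: (C, H, W). Keep only frequencies whose radial frequency is
    at least `cutoff` (as a fraction of the Nyquist radius)."""
    c, h, w = perturbation.shape
    fy = torch.fft.fftfreq(h)[:, None]             # vertical frequency per row
    fx = torch.fft.fftfreq(w)[None, :]              # horizontal frequency per column
    radius = (fy ** 2 + fx ** 2).sqrt()             # radial frequency per bin
    keep = (radius >= cutoff * 0.5).to(perturbation.dtype)   # high-pass mask
    spec = torch.fft.fft2(perturbation)              # per-channel 2D FFT
    return torch.fft.ifft2(spec * keep).real          # back to the pixel domain
```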
arXiv Detail & Related papers (2021-02-12T12:26:39Z)
- Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware [48.7072217216104]
Deep neural networks (DNNs) can have 'structural malware' (i.e., compromised weights and activation pathways).
It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data).
Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, 'fingerprints' its nonlinearity, and allows us to detect backdoors (if present).
Our empirical results demonstrate that we can accurately detect backdoors with high confidence orders-of-magnitude faster than existing approaches.
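The fingerprinting idea can be sketched as a noise-response curve: add increasing amounts of input noise and record how much the model's outputs shift. The specific features and the backdoor detector from the paper are not reproduced here, and the classifier-style output is an assumption.

```python
# Minimal sketch of a noise-response curve: perturb inputs with increasing
# noise and record how much the model's output distribution shifts. The curve
# serves as a cheap fingerprint of the network's nonlinearity/robustness.
import torch
import torch.nn.functional as F


@torch.no_grad()
def noise_response_curve(model, x, noise_levels=(0.0, 0.05, 0.1, 0.2, 0.4)):
    """x: (batch, ...) clean inputs; `model` is assumed to output class logits.
    Returns one mean output-distribution shift per noise level."""
    model.eval()
    clean = F.softmax(model(x), dim=-1)
    curve = []
    for sigma in noise_levels:
        noisy = x + sigma * torch.randn_like(x)
        shifted = F.softmax(model(noisy), dim=-1)
        # Mean L1 distance between clean and noisy output distributions.
        curve.append((clean - shifted).abs().sum(dim=-1).mean().item())
    return curve
```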
arXiv Detail & Related papers (2020-07-31T23:52:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.