Towards Deep Network Steganography: From Networks to Networks
- URL: http://arxiv.org/abs/2307.03444v1
- Date: Fri, 7 Jul 2023 08:02:01 GMT
- Title: Towards Deep Network Steganography: From Networks to Networks
- Authors: Guobiao Li, Sheng Li, Meiling Li, Zhenxing Qian, Xinpeng Zhang
- Abstract summary: We propose deep network steganography for the covert communication of DNN models.
Our scheme is learning-task oriented: the learning task of the secret DNN model is disguised as another ordinary learning task.
We conduct experiments on both intra-task and inter-task steganography.
- Score: 23.853644434004135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the widespread application of deep neural networks (DNNs), how to
covertly transmit DNN models over public channels has attracted increasing attention,
especially for models trained on secret learning tasks. In this paper, we
propose deep network steganography for the covert communication of DNN models.
Unlike existing steganography schemes, which focus on subtle modification of
the cover data to accommodate the secrets, our scheme is learning-task
oriented: the learning task of the secret DNN model (termed the secret-learning
task) is disguised as another ordinary learning task conducted in a stego DNN
model (termed the stego-learning task). To this end, we propose a gradient-based
filter insertion scheme that inserts interference filters into the important
positions of the secret DNN model to form a stego DNN model. These positions are
then embedded into the stego DNN model with a key via side information hiding.
Finally, we activate the interference filters through a partial optimization
strategy, so that the generated stego DNN model works on the stego-learning
task. We conduct experiments on both intra-task steganography and inter-task
steganography (i.e., the secret- and stego-learning tasks belong to the same or
different categories, respectively), both of which demonstrate the effectiveness
of the proposed method for the covert communication of DNN models.
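The abstract describes three steps: gradient-based insertion of interference filters at important positions, key-based side information hiding of those positions, and partial optimization so that only the inserted filters learn the stego task. Below is a minimal sketch of how such a pipeline could look, assuming a PyTorch-style model. The helper names, the per-layer importance measure, and the way interference filters are attached are illustrative assumptions rather than the authors' implementation, and the key-based side-information hiding is not shown.

```python
# Illustrative sketch only -- not the paper's implementation.
import torch
import torch.nn as nn

def filter_importance(model, loss_fn, x, y):
    """Score each Conv2d layer by the mean absolute gradient of the
    secret-task loss w.r.t. its filters (a stand-in for the paper's
    gradient-based importance measure)."""
    model.zero_grad()
    loss_fn(model(x), y).backward()
    return {name: m.weight.grad.abs().mean().item()
            for name, m in model.named_modules()
            if isinstance(m, nn.Conv2d) and m.weight.grad is not None}

def widen_with_interference_filters(conv, extra):
    """Return a wider Conv2d whose first `conv.out_channels` filters copy
    the secret model's weights and whose last `extra` filters are the newly
    inserted interference filters. (The following layer would also need
    `extra` more input channels; that bookkeeping is omitted here.)"""
    new = nn.Conv2d(conv.in_channels, conv.out_channels + extra,
                    conv.kernel_size, stride=conv.stride,
                    padding=conv.padding, bias=conv.bias is not None)
    with torch.no_grad():
        new.weight[:conv.out_channels].copy_(conv.weight)
        if conv.bias is not None:
            new.bias[:conv.out_channels].copy_(conv.bias)
    return new

def freeze_original_filters(conv, n_orig):
    """Partial optimization: zero the gradients flowing to the original
    (secret-task) filters, so training on the stego-learning task only
    updates the inserted interference filters."""
    def mask(grad, n=n_orig):
        grad = grad.clone()
        grad[:n] = 0
        return grad
    conv.weight.register_hook(mask)
    if conv.bias is not None:
        conv.bias.register_hook(mask)
```

With helpers like these, one would score the secret model's layers on the secret task, widen the highest-scoring layers with interference filters, record the inserted positions (to be hidden in the stego model with a key as side information), and then train only the interference filters on the stego-learning task so that the resulting stego DNN model behaves like an ordinary model for that task.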
Related papers
- Self-Distillation Learning Based on Temporal-Spatial Consistency for Spiking Neural Networks [3.7748662901422807]
Spiking neural networks (SNNs) have attracted considerable attention for their event-driven, low-power characteristics and high biological interpretability.
Recent research has improved the performance of SNN models with a pre-trained teacher model.
In this paper, we explore cost-effective self-distillation learning for SNNs to circumvent these concerns.
arXiv Detail & Related papers (2024-06-12T04:30:40Z)
- Constructing Deep Spiking Neural Networks from Artificial Neural Networks with Knowledge Distillation [20.487853773309563]
Spiking neural networks (SNNs) are well known as brain-inspired models with high computing efficiency.
We propose a novel method for constructing deep SNN models with knowledge distillation (KD).
arXiv Detail & Related papers (2023-04-12T05:57:21Z)
- Joint ANN-SNN Co-training for Object Localization and Image Segmentation [0.0]
Spiking neural networks (SNNs) have emerged as a low-power alternative to deep artificial neural networks (ANNs).
We propose a novel hybrid ANN-SNN co-training framework to improve the performance of converted SNNs.
arXiv Detail & Related papers (2023-03-10T14:45:02Z)
- Steganography of Steganographic Networks [23.85364443400414]
Steganography is a technique for covert communication between two parties.
We propose a novel scheme for steganography of steganographic networks in this paper.
arXiv Detail & Related papers (2023-02-28T12:27:34Z)
- What Do Deep Nets Learn? Class-wise Patterns Revealed in the Input Space [88.37185513453758]
We propose a method to visualize and understand the class-wise knowledge learned by deep neural networks (DNNs) under different settings.
Our method searches for a single predictive pattern in the pixel space to represent the knowledge learned by the model for each class.
In the adversarial setting, we show that adversarially trained models tend to learn more simplified shape patterns.
arXiv Detail & Related papers (2021-01-18T06:38:41Z)
- Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Distilling Spikes: Knowledge Distillation in Spiking Neural Networks [22.331135708302586]
Spiking Neural Networks (SNNs) are energy-efficient computing architectures that exchange spikes for processing information.
We propose techniques for knowledge distillation in spiking neural networks for the task of image classification.
Our approach is expected to open up new avenues for deploying high performing large SNN models on resource-constrained hardware platforms.
arXiv Detail & Related papers (2020-05-01T09:36:32Z)
- Architecture Disentanglement for Deep Neural Networks [174.16176919145377]
We introduce neural architecture disentanglement (NAD) to explain the inner workings of deep neural networks (DNNs).
NAD learns to disentangle a pre-trained DNN into sub-architectures according to independent tasks, forming information flows that describe the inference processes.
Results show that misclassified images have a high probability of being assigned to task sub-architectures similar to the correct ones.
arXiv Detail & Related papers (2020-03-30T08:34:33Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters (a brief sketch of this idea follows the list).
As the amount of information in the feature maps increases during training, the network progressively learns better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
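The Curriculum By Smoothing entry above describes a concrete mechanism: low-pass filter a CNN's feature maps with a kernel whose strength is annealed during training, so that progressively more high-frequency information passes through. The sketch below is a hedged illustration of that idea; the Gaussian kernel construction and the annealing schedule are assumptions, not the paper's exact recipe.

```python
# Illustrative sketch of curriculum-by-smoothing on feature maps.
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma, size=5):
    """Build a normalized 2-D Gaussian kernel."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

def smooth_features(feat, sigma):
    """Apply the same low-pass filter to every channel of a feature map."""
    if sigma <= 0:  # curriculum finished: no smoothing
        return feat
    c = feat.shape[1]
    k = gaussian_kernel(sigma).to(feat).repeat(c, 1, 1, 1)  # (c, 1, 5, 5)
    return F.conv2d(feat, k, padding=2, groups=c)

# Example annealing schedule (hypothetical): heavy smoothing early on,
# decaying towards zero over the first 50 epochs.
# feat = smooth_features(conv_output, sigma=max(0.0, 1.0 - epoch / 50))
```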