STD-NET: Search of Image Steganalytic Deep-learning Architecture via
Hierarchical Tensor Decomposition
- URL: http://arxiv.org/abs/2206.05651v1
- Date: Sun, 12 Jun 2022 03:46:08 GMT
- Title: STD-NET: Search of Image Steganalytic Deep-learning Architecture via
Hierarchical Tensor Decomposition
- Authors: Shunquan Tan and Qiushi Li and Laiyuan Li and Bin Li and Jiwu Huang
- Abstract summary: STD-NET is an unsupervised deep-learning architecture search approach via hierarchical tensor decomposition for image steganalysis.
Our proposed strategy is more efficient and removes more redundancy than previous steganalytic network compression methods.
- Score: 40.997546601209145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies show that the majority of existing deep steganalysis models
have a large amount of redundancy, which leads to a huge waste of storage and
computing resources. Existing model compression methods cannot flexibly
compress the convolutional layers in residual shortcut blocks, so a
satisfactory shrinking rate cannot be obtained. In this paper, we propose
STD-NET, an unsupervised deep-learning architecture search approach via
hierarchical tensor decomposition for image steganalysis. Our proposed strategy
is not restricted by residual connections, since it does not change the number
of input and output channels of the convolution block. We propose a normalized
distortion threshold to evaluate the sensitivity of each involved convolutional
layer of the base model, guiding STD-NET to compress the target network
efficiently and without supervision, and obtain two network structures of
different shapes with low computational cost and performance similar to the
original. Extensive experiments confirm that, on the one hand, our model
achieves comparable or even better detection performance in various
steganalytic scenarios, owing to the great adaptivity of the obtained network
architecture; on the other hand, our proposed strategy is more efficient and
removes more redundancy than previous steganalytic network compression
methods.
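The abstract's key constraint, keeping each compressed block's input and output channel counts unchanged so residual shortcuts remain valid, can be illustrated with a short sketch. The PyTorch code below is a minimal, hypothetical illustration only: a plain rank-truncated SVD of the flattened kernel stands in for the paper's hierarchical tensor decomposition, and a simple relative-distortion search stands in for its normalized distortion threshold; all names and the value of `tau` are assumptions.

```python
# Hypothetical sketch, NOT the paper's algorithm: rank-truncated factorization
# of a conv layer that preserves its input/output channel counts.
import torch
import torch.nn as nn

def decompose_conv(conv: nn.Conv2d, rank: int) -> nn.Sequential:
    """Factor a KxK conv into a KxK conv with `rank` filters followed by a
    1x1 conv, via truncated SVD of the flattened kernel. Input and output
    channel counts are unchanged, so residual shortcuts still type-check."""
    C_out, C_in, K, _ = conv.weight.shape            # assumes a square kernel
    W = conv.weight.data.reshape(C_out, C_in * K * K)
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    U, S, Vh = U[:, :rank], S[:rank], Vh[:rank, :]   # keep the top `rank` modes
    first = nn.Conv2d(C_in, rank, K, stride=conv.stride,
                      padding=conv.padding, bias=False)
    first.weight.data = Vh.reshape(rank, C_in, K, K)
    second = nn.Conv2d(rank, C_out, 1, bias=conv.bias is not None)
    second.weight.data = (U * S).reshape(C_out, rank, 1, 1)
    if conv.bias is not None:
        second.bias.data = conv.bias.data.clone()
    return nn.Sequential(first, second)

def smallest_rank_under_threshold(conv: nn.Conv2d, x: torch.Tensor,
                                  tau: float = 0.05) -> int:
    """Pick the smallest rank whose relative output distortion stays below
    `tau`: a stand-in for the paper's normalized distortion threshold."""
    C_out, C_in, K, _ = conv.weight.shape
    with torch.no_grad():
        y = conv(x)
        for rank in range(1, min(C_out, C_in * K * K) + 1):
            y_hat = decompose_conv(conv, rank)(x)
            if torch.norm(y - y_hat) / torch.norm(y) < tau:
                return rank
    return C_out
```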
Related papers
- DNAD: Differentiable Neural Architecture Distillation [6.026956571669411]
The differentiable neural architecture distillation (DNAD) algorithm is built on two cores: search by deleting and search by imitating.
DNAD achieves the top-1 error rate of 23.7% on ImageNet classification with a model of 6.0M parameters and 598M FLOPs.
The super-network progressive shrinking (SNPS) algorithm is developed within the framework of differentiable architecture search (DARTS).
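Since both DNAD and SNPS build on the DARTS framework, a sketch of DARTS's core primitive may help: each edge of the searched cell computes a softmax-weighted sum of candidate operations, and the architecture weights are learned jointly by gradient descent. The candidate pool below is illustrative, not DNAD's actual operation set.

```python
# Minimal DARTS-style mixed operation (illustrative candidate pool).
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 conv
            nn.MaxPool2d(3, stride=1, padding=1),         # 3x3 max pool
        ])
        # architecture weights, optimized by gradient descent alongside
        # the ordinary network weights
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))
```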
arXiv Detail & Related papers (2025-04-25T08:49:31Z)
- Multi-Scale Invertible Neural Network for Wide-Range Variable-Rate Learned Image Compression [90.59962443790593]
In this paper, we present a variable-rate image compression model based on an invertible transform to overcome the limitations of existing variable-rate methods.
Specifically, we design a lightweight multi-scale invertible neural network, which maps the input image into multi-scale latent representations.
Experimental results demonstrate that the proposed method achieves state-of-the-art performance compared to existing variable-rate methods.
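For context, the sketch below shows the invertible primitive such models are typically built from: an additive coupling layer that splits the channels, updates one half from the other, and can be inverted exactly, so no information is lost. The inner transform is an illustrative stand-in, not the paper's multi-scale design.

```python
# Illustrative additive coupling layer (requires an even channel count).
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        half = channels // 2
        self.t = nn.Conv2d(half, half, 3, padding=1)  # stand-in transform

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.t(x1)], dim=1)   # y2 = x2 + t(x1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.t(y1)], dim=1)   # exact inverse
```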
arXiv Detail & Related papers (2025-03-27T09:08:39Z)
- GFN: A graph feedforward network for resolution-invariant reduced operator learning in multifidelity applications [0.0]
This work presents a novel resolution-invariant model order reduction strategy for multifidelity applications.
We base our architecture on a novel neural network layer developed in this work, the graph feedforward network.
We exploit the method's capability of training and testing on different mesh sizes in an autoencoder-based reduction strategy for parametrised partial differential equations.
arXiv Detail & Related papers (2024-06-05T18:31:37Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitudes.
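A hedged sketch of the soft-shrinkage idea: instead of hard-zeroing pruned weights, weights below a magnitude percentile are shrunk by a small amount proportional to their magnitude, so they can recover in later iterations. The percentile `p` and shrink factor `alpha` are illustrative, not the paper's schedule.

```python
# Illustrative soft-shrinkage step (NumPy); hyperparameters are assumptions.
import numpy as np

def soft_shrink_step(w: np.ndarray, p: float = 50.0, alpha: float = 0.1) -> np.ndarray:
    """Scale weights below the p-th magnitude percentile by (1 - alpha)
    instead of zeroing them, so shrunk weights can still recover."""
    thresh = np.percentile(np.abs(w), p)
    out = w.copy()
    out[np.abs(out) < thresh] *= (1.0 - alpha)  # shrink proportional to magnitude
    return out
```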
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Neural Network Compression via Effective Filter Analysis and Hierarchical Pruning [41.19516938181544]
Current network compression methods have two open problems: first, there is no theoretical framework to estimate the maximum compression rate; second, some layers may get over-pruned, resulting in a significant drop in network performance.
This study proposes a method based on gradient-matrix singularity analysis to estimate the maximum network redundancy.
Guided by that maximum rate, a novel and efficient hierarchical network pruning algorithm is developed to maximally condense the neural network structure without sacrificing network performance.
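One possible reading of the singularity criterion, sketched under loudly stated assumptions: the fraction of near-zero singular values of a layer's gradient matrix is used as a rough redundancy estimate. Both this reading and the tolerance are guesses for illustration, not the paper's exact formulation.

```python
# Hypothetical redundancy estimate from gradient-matrix singular values.
import numpy as np

def redundancy_estimate(grad_matrix: np.ndarray, tol: float = 1e-3) -> float:
    """Fraction of singular values that are negligible relative to the
    largest one; a rough proxy for how much of the layer is redundant."""
    s = np.linalg.svd(grad_matrix, compute_uv=False)
    return float(np.mean(s < tol * s.max()))
```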
arXiv Detail & Related papers (2022-06-07T21:30:47Z)
- STN: Scalable Tensorizing Networks via Structure-Aware Training and Adaptive Compression [10.067082377396586]
We propose Scalable Tensorizing Networks (STN), which adaptively adjust the model size and decomposition structure without retraining.
STN is compatible with arbitrary network architectures and achieves higher compression performance and flexibility than other tensorizing versions.
arXiv Detail & Related papers (2022-05-30T15:50:48Z)
- Image Superresolution using Scale-Recurrent Dense Network [30.75380029218373]
Recent advances in the design of convolutional neural networks (CNNs) have yielded significant improvements in the performance of image super-resolution (SR).
We propose a scale-recurrent SR architecture built upon units containing a series of dense connections within a residual block (Residual Dense Blocks, RDBs).
Our scale recurrent design delivers competitive performance for higher scale factors while being parametrically more efficient as compared to current state-of-the-art approaches.
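A compact sketch of the Residual Dense Block (RDB) that such architectures stack: a chain of convolutions with dense (concatenated) connections, a 1x1 fusion convolution, and a local residual skip. The channel count, growth rate, and depth below are illustrative defaults, not the paper's configuration.

```python
# Illustrative Residual Dense Block (RDB) in PyTorch.
import torch
import torch.nn as nn

class ResidualDenseBlock(nn.Module):
    def __init__(self, channels: int = 64, growth: int = 32, n_layers: int = 4):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(n_layers)
        )
        self.fuse = nn.Conv2d(channels + n_layers * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            # each conv sees the concatenation of all previous features
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        return x + self.fuse(torch.cat(feats, dim=1))  # local residual skip
```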
arXiv Detail & Related papers (2022-01-28T09:18:43Z)
- Low-rank Tensor Decomposition for Compression of Convolutional Neural Networks Using Funnel Regularization [1.8579693774597708]
We propose a model reduction method to compress the pre-trained networks using low-rank tensor decomposition.
A new regularization method, called funnel function, is proposed to suppress the unimportant factors during the compression.
For ResNet18 with ImageNet2012, our reduced model can reach more than two times speedup in terms of GMACs with merely a 0.7% Top-1 accuracy drop.
arXiv Detail & Related papers (2021-12-07T13:41:51Z)
- Compact representations of convolutional neural networks via weight pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitively as the baseline.
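A minimal sketch of the general prune-then-quantize pipeline behind such storage formats: zero out the smallest-magnitude weights, uniformly quantize the survivors, and store surviving indices plus codebook ids. The sparsity level, codebook size, and layout below are assumptions, not the paper's source-coding format.

```python
# Illustrative prune-then-quantize compaction (NumPy).
import numpy as np

def prune_and_quantize(w: np.ndarray, sparsity: float = 0.9, n_levels: int = 16):
    """Return (indices, codes, codebook): positions of surviving weights,
    their quantization codes, and the uniform codebook."""
    flat = w.ravel()
    k = int(sparsity * flat.size)
    thresh = np.partition(np.abs(flat), k)[k]           # magnitude cutoff
    keep = np.flatnonzero(np.abs(flat) >= thresh)       # surviving positions
    vals = flat[keep]
    codebook = np.linspace(vals.min(), vals.max(), n_levels)
    codes = np.argmin(np.abs(vals[:, None] - codebook[None, :]), axis=1)
    return keep.astype(np.uint32), codes.astype(np.uint8), codebook

def reconstruct(shape, keep, codes, codebook):
    """Rebuild the (lossy) dense weight tensor from the compact triple."""
    w = np.zeros(int(np.prod(shape)), dtype=codebook.dtype)
    w[keep] = codebook[codes]
    return w.reshape(shape)
```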
arXiv Detail & Related papers (2021-08-28T20:39:54Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z)
- Fitting the Search Space of Weight-sharing NAS with Graph Convolutional Networks [100.14670789581811]
We train a graph convolutional network to fit the performance of sampled sub-networks.
With this strategy, we achieve a higher rank correlation coefficient in the selected set of candidates.
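A small sketch of that strategy: a graph convolutional network takes a sampled sub-network's architecture, encoded as a normalized adjacency matrix and per-node operation features, and regresses its performance so candidates can be ranked without training each one. All dimensions below are illustrative.

```python
# Illustrative GCN performance predictor for architecture encodings.
import torch
import torch.nn as nn

class GCNPredictor(nn.Module):
    def __init__(self, in_feats: int = 8, hidden: int = 32):
        super().__init__()
        self.w1 = nn.Linear(in_feats, hidden)
        self.w2 = nn.Linear(hidden, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, adj: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        # adj: (n, n) normalized adjacency of the architecture DAG
        # x:   (n, in_feats) one-hot/learned encodings of each node's op
        h = torch.relu(self.w1(adj @ x))            # one message-passing step
        h = torch.relu(self.w2(adj @ h))            # second step
        return torch.sigmoid(self.out(h.mean(0)))  # predicted accuracy in [0, 1]
```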
arXiv Detail & Related papers (2020-04-17T19:12:39Z)