Universal Deep Network for Steganalysis of Color Image based on Channel
Representation
- URL: http://arxiv.org/abs/2111.12231v1
- Date: Wed, 24 Nov 2021 02:22:13 GMT
- Authors: Kangkang Wei, Weiqi Luo, Shunquan Tan, Jiwu Huang
- Abstract summary: We design a universal color image steganalysis network (called UCNet) in spatial and JPEG domains.
The proposed method includes preprocessing, convolutional, and classification modules.
We conduct extensive experiments on ALASKA II to demonstrate that the proposed method can achieve state-of-the-art results.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing steganalytic methods are designed for grayscale
images and are not suitable for the color images widely used on current
social networks. In this paper, we design a universal color image
steganalysis network (called UCNet) for the spatial and JPEG domains. The
proposed method consists of preprocessing, convolutional, and classification
modules. To preserve the steganographic artifacts in each color channel, the
preprocessing module first separates the input image into three channels
according to the corresponding embedding space (i.e., RGB for spatial
steganography and YCbCr for JPEG steganography), then extracts image
residuals with 62 fixed high-pass filters, and finally concatenates all
truncated residuals for subsequent analysis rather than summing them with a
normal convolution as existing CNN-based steganalyzers do. To accelerate
network convergence and reduce the number of parameters, the convolutional
module uses three carefully designed types of layers with different shortcut
connections and group-convolution structures to learn high-level
steganalytic features. In the classification module, we employ global
average pooling and a fully connected layer for classification. We conduct
extensive experiments on ALASKA II to demonstrate that the proposed method
achieves state-of-the-art results compared with modern CNN-based
steganalyzers (e.g., SRNet and J-YeNet) in both the spatial and JPEG
domains, while requiring relatively little memory and training time.
Furthermore, we provide detailed descriptions and extensive ablation
experiments to justify the network design.
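The preprocessing module described in the abstract (split channels, filter each with fixed high-pass kernels, truncate, concatenate) can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: two example second-order kernels stand in for the 62 fixed filters, and the truncation threshold `T = 3` is an assumption not stated in this abstract.

```python
import numpy as np

# Two illustrative high-pass kernels; the paper uses 62 fixed filters.
KERNELS = [
    np.array([[0, 0, 0], [1, -2, 1], [0, 0, 0]], dtype=np.float32),  # horizontal 2nd order
    np.array([[0, 1, 0], [0, -2, 0], [0, 1, 0]], dtype=np.float32),  # vertical 2nd order
]
T = 3  # truncation threshold (assumed value)

def conv2d_same(img, k):
    """Naive same-size 2-D filtering (cross-correlation) with edge padding."""
    kh, kw = k.shape
    p = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    out = np.zeros_like(img, dtype=np.float32)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(p[i:i + kh, j:j + kw] * k)
    return out

def preprocess(color_img):
    """Separate the three channels, filter each with every kernel,
    truncate the residuals to [-T, T], and concatenate them along a
    new channel axis instead of summing them."""
    channels = [color_img[..., c].astype(np.float32) for c in range(3)]
    residual_maps = []
    for ch in channels:
        for k in KERNELS:
            residual_maps.append(np.clip(conv2d_same(ch, k), -T, T))
    return np.stack(residual_maps, axis=0)  # shape: (3 * n_kernels, H, W)
```

For spatial-domain inputs the three channels would be R, G, and B; for JPEG inputs they would be Y, Cb, and Cr, matching the embedding space of the steganography being analyzed.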
Related papers
- Multispectral Texture Synthesis using RGB Convolutional Neural Networks [2.3213238782019316]
State-of-the-art RGB texture synthesis algorithms rely on style distances that are computed through statistics of deep features.
We propose two solutions to extend these methods to multispectral imaging.
arXiv Detail & Related papers (2024-10-21T13:49:54Z)
- ASCNet: Asymmetric Sampling Correction Network for Infrared Image Destriping [26.460122241870696]
We propose a novel infrared image destriping method called Asymmetric Sampling Correction Network (ASCNet)
Our ASCNet consists of three core elements: Residual Haar Discrete Wavelet Transform (RHDWT), Pixel Shuffle (PS), and Column Non-uniformity Correction Module (CNCM)
arXiv Detail & Related papers (2024-01-28T06:23:55Z)
- Clothes Grasping and Unfolding Based on RGB-D Semantic Segmentation [21.950751953721817]
We propose a novel Bi-directional Fractal Cross Fusion Network (BiFCNet) for semantic segmentation.
We use RGB images with rich color features as input to our network in which the Fractal Cross Fusion module fuses RGB and depth data.
To reduce the cost of real data collection, we propose a data augmentation method based on an adversarial strategy.
arXiv Detail & Related papers (2023-05-05T03:21:55Z)
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolution neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves the state-of-the-art performance, especially when compared in the case of using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z)
- Occlusion-Aware Instance Segmentation via BiLayer Network Architectures [73.45922226843435]
We propose Bilayer Convolutional Network (BCNet), where the top layer detects occluding objects (occluders) and the bottom layer infers partially occluded instances (occludees)
We investigate the efficacy of bilayer structure using two popular convolutional network designs, namely, Fully Convolutional Network (FCN) and Graph Convolutional Network (GCN)
arXiv Detail & Related papers (2022-08-08T21:39:26Z)
- Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models [54.49581189337848]
We propose a method to enable the end-to-end pre-training for image segmentation models based on classification datasets.
The proposed method leverages a weighted segmentation learning procedure to pre-train the segmentation network en masse.
Experiment results show that, with ImageNet accompanied by PSSL as the source dataset, the proposed end-to-end pre-training strategy successfully boosts the performance of various segmentation models.
arXiv Detail & Related papers (2022-07-04T13:02:32Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Spatio-Temporal Inception Graph Convolutional Networks for Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z)
- Supervised Segmentation of Retinal Vessel Structures Using ANN [0.0]
The study was performed using 20 images in the DRIVE data set which is one of the most common retina data sets known.
The average segmentation accuracy for 20 images was found as 0.9492.
arXiv Detail & Related papers (2020-01-15T20:48:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.