JPEG Steganalysis Based on Steganographic Feature Enhancement and Graph
Attention Learning
- URL: http://arxiv.org/abs/2302.02276v1
- Date: Sun, 5 Feb 2023 01:42:19 GMT
- Title: JPEG Steganalysis Based on Steganographic Feature Enhancement and Graph
Attention Learning
- Authors: Qiyun Liu, Zhiguang Yang and Hanzhou Wu
- Abstract summary: We introduce a novel representation learning algorithm for JPEG steganalysis.
The graph attention learning module is designed to avoid the global feature loss caused by the local feature learning of convolutional neural networks.
The feature enhancement module is applied to prevent the stacking of convolutional layers from weakening the steganographic information.
- Score: 15.652077779677091
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The purpose of image steganalysis is to determine whether a carrier image
contains hidden information or not. Since JPEG is the most commonly used image
format over social networks, steganalysis of JPEG images is also the most
urgently in need of exploration. However, in order to detect whether secret
information is hidden within JPEG images, the majority of existing algorithms
are designed in conjunction with popular computer vision networks, without
considering the key characteristics of image steganalysis. It is crucial that
the steganographic signal, as an extremely weak signal, be enhanced during its
representation learning process. Motivated by this insight, in this paper we
introduce a novel representation learning algorithm for JPEG steganalysis that
mainly consists of a graph attention learning module and a feature enhancement
module. The graph attention learning module is designed to avoid the global
feature loss caused by the local feature learning of convolutional neural
networks and their reliance on depth stacking to extend the receptive field.
The feature enhancement module is applied to prevent the stacking of
convolutional layers from weakening the steganographic information. In
addition, pretraining, as a way to initialize the network weights with a
large-scale dataset, is utilized to enhance the ability of the network to
extract discriminative features. We advocate pretraining with ALASKA2 for the
model trained with BOSSBase+BOWS2. The experimental results indicate that the
proposed algorithm outperforms previous methods in terms of detection accuracy,
verifying the superiority and applicability of the proposed work.
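As a rough illustration of how the two modules described in the abstract could be composed, the sketch below (in PyTorch) pairs a residual-style feature enhancement block with self-attention over spatial positions treated as graph nodes. The module names, channel counts, and the exact wiring of enhancement and attention are assumptions made for illustration, not the authors' published architecture.

# A minimal, hypothetical sketch of the two-module design described in the
# abstract; names, dimensions, and wiring are assumptions, not the paper's
# exact architecture.
import torch
import torch.nn as nn

class FeatureEnhancement(nn.Module):
    """Residual-style block intended to keep the weak steganographic signal
    from being washed out by stacked convolutions (assumed design)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Adding the input back re-injects the (weak) residual signal.
        return torch.relu(x + self.conv(x))

class GraphAttentionLearning(nn.Module):
    """Treats spatial positions of the feature map as graph nodes and lets
    self-attention model their global associations (assumed design)."""
    def __init__(self, channels: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        nodes = x.flatten(2).transpose(1, 2)        # (B, H*W, C) node features
        out, _ = self.attn(nodes, nodes, nodes)     # all-pairs (global) attention
        return out.transpose(1, 2).reshape(b, c, h, w)

class StegAnalyzer(nn.Module):
    def __init__(self, channels: int = 32):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.enhance = FeatureEnhancement(channels)
        self.graph = GraphAttentionLearning(channels)
        self.head = nn.Linear(channels, 2)          # cover vs. stego

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.enhance(torch.relu(self.stem(x)))
        f = self.graph(f)
        return self.head(f.mean(dim=(2, 3)))        # global average pooling

# Example: a batch of 4 single-channel 32x32 inputs.
logits = StegAnalyzer()(torch.randn(4, 1, 32, 32))
print(logits.shape)  # torch.Size([4, 2])

In this reading, the skip connection keeps the weak steganographic residual from being attenuated by the stacked convolutions, while the all-pairs attention gives every spatial position a global receptive field without relying on deeper stacking.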
Related papers
- Unleashing the Power of Depth and Pose Estimation Neural Networks by
Designing Compatible Endoscopic Images [12.412060445862842]
We conduct a detailed analysis of the properties of endoscopic images and improve the compatibility between images and neural networks.
First, we introduce the Mask Image Modelling (MIM) module, which inputs partial image information instead of complete image information.
Second, we propose a lightweight neural network to enhance the endoscopic images, explicitly improving the compatibility between images and neural networks.
arXiv Detail & Related papers (2023-09-14T02:19:38Z) - Masked Contrastive Graph Representation Learning for Age Estimation [44.96502862249276]
This paper utilizes the property of graph representation learning in dealing with image redundancy information.
We propose a novel Masked Contrastive Graph Representation Learning (MCGRL) method for age estimation.
Experimental results on real-world face image datasets demonstrate the superiority of our proposed method over other state-of-the-art age estimation approaches.
arXiv Detail & Related papers (2023-06-16T15:53:21Z) - Intelligent Masking: Deep Q-Learning for Context Encoding in Medical
Image Analysis [48.02011627390706]
We develop a novel self-supervised approach that occludes targeted regions to improve the pre-training procedure.
We show that training the agent against the prediction model can significantly improve the semantic features extracted for downstream classification tasks.
arXiv Detail & Related papers (2022-03-25T19:05:06Z) - Learning Hierarchical Graph Representation for Image Manipulation
Detection [50.04902159383709]
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
Recent approaches mostly adopt the sophisticated Convolutional Neural Networks (CNNs) to capture the tampering artifacts left in the images.
We propose a hierarchical Graph Convolutional Network (HGCN-Net), which consists of two parallel branches.
arXiv Detail & Related papers (2022-01-15T01:54:25Z) - Semantic-Aware Generation for Self-Supervised Visual Representation
Learning [116.5814634936371]
We advocate for Semantic-aware Generation (SaGe) to facilitate richer semantics rather than details to be preserved in the generated image.
SaGe complements the target network with view-specific features and thus alleviates the semantic degradation brought by intensive data augmentations.
We execute SaGe on ImageNet-1K and evaluate the pre-trained models on five downstream tasks including nearest neighbor test, linear classification, and fine-scaled image recognition.
arXiv Detail & Related papers (2021-11-25T16:46:13Z) - Graph Representation Learning for Spatial Image Steganalysis [11.358487655918678]
We introduce a graph representation learning architecture for spatial image steganalysis.
In the detailed architecture, we translate each image to a graph, where nodes represent the patches of the image and edges indicate the local associations between the patches.
By feeding the graph to an attention network, the discriminative features can be learned for efficient steganalysis.
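A minimal sketch of the image-to-graph translation described in this entry might look as follows; the 8x8 patch size and right/bottom 4-neighbour connectivity are assumptions, since the summary only states that nodes are patches and edges encode local associations between them.

# Hypothetical sketch: non-overlapping patches become graph nodes and edges
# connect spatially adjacent patches (patch size and connectivity assumed).
import torch

def image_to_patch_graph(image: torch.Tensor, patch: int = 8):
    """image: (C, H, W) tensor; returns node features and an edge list."""
    c, h, w = image.shape
    gh, gw = h // patch, w // patch
    # Split into gh*gw non-overlapping patches, flattened as node features.
    nodes = (image.unfold(1, patch, patch)      # (C, gh, W, patch)
                  .unfold(2, patch, patch)      # (C, gh, gw, patch, patch)
                  .permute(1, 2, 0, 3, 4)       # (gh, gw, C, patch, patch)
                  .reshape(gh * gw, -1))        # (N, C*patch*patch)
    # Connect each patch to its right and bottom neighbours (local associations).
    edges = []
    for i in range(gh):
        for j in range(gw):
            n = i * gw + j
            if j + 1 < gw:
                edges.append((n, n + 1))
            if i + 1 < gh:
                edges.append((n, n + gw))
    return nodes, torch.tensor(edges).t()       # (N, F), (2, E)

nodes, edge_index = image_to_patch_graph(torch.randn(1, 64, 64))
print(nodes.shape, edge_index.shape)  # torch.Size([64, 64]) torch.Size([2, 112])

The resulting node features and edge index could then be fed to a graph attention network for classification, per the entry above.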
arXiv Detail & Related papers (2021-10-03T09:09:08Z) - HistoTransfer: Understanding Transfer Learning for Histopathology [9.231495418218813]
We compare the performance of features extracted from networks trained on ImageNet and histopathology data.
We investigate if features learned using more complex networks lead to gain in performance.
arXiv Detail & Related papers (2021-06-13T18:55:23Z) - Group-Wise Semantic Mining for Weakly Supervised Semantic Segmentation [49.90178055521207]
This work addresses weakly supervised semantic segmentation (WSSS), with the goal of bridging the gap between image-level annotations and pixel-level segmentation.
We formulate WSSS as a novel group-wise learning task that explicitly models semantic dependencies in a group of images to estimate more reliable pseudo ground-truths.
In particular, we devise a graph neural network (GNN) for group-wise semantic mining, wherein input images are represented as graph nodes.
arXiv Detail & Related papers (2020-12-09T12:40:13Z) - Spatio-Temporal Inception Graph Convolutional Networks for
Skeleton-Based Action Recognition [126.51241919472356]
We design a simple and highly modularized graph convolutional network architecture for skeleton-based action recognition.
Our network is constructed by repeating a building block that aggregates multi-granularity information from both the spatial and temporal paths.
arXiv Detail & Related papers (2020-11-26T14:43:04Z) - Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)