Multi-Contextual Design of Convolutional Neural Network for Steganalysis
- URL: http://arxiv.org/abs/2106.10430v1
- Date: Sat, 19 Jun 2021 05:38:52 GMT
- Title: Multi-Contextual Design of Convolutional Neural Network for Steganalysis
- Authors: Brijesh Singh, Arijit Sur, and Pinaki Mitra
- Abstract summary: It is observed that recent steganographic embedding schemes do not always restrict their embedding to the high-frequency zone; instead, they distribute it according to the embedding policy.
In this work, unlike conventional approaches, the proposed model first extracts the noise residual using learned denoising kernels to boost the signal-to-noise ratio.
After preprocessing, the sparse noise residuals are fed to a novel Multi-Contextual Convolutional Neural Network (M-CNET) that uses heterogeneous context size to learn the sparse and low-amplitude representation of noise residuals.
- Score: 8.631228373008478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent times, deep learning-based steganalysis classifiers became popular
due to their state-of-the-art performance. Most deep steganalysis classifiers
usually extract noise residuals using high-pass filters as preprocessing steps
and feed them to their deep model for classification. It is observed that
recent steganographic embedding schemes do not always restrict their embedding
to the high-frequency zone; instead, they distribute it according to the
embedding policy. Therefore, besides the noise residual, learning the embedding
zone is another challenging task. In this work, unlike conventional approaches, the
proposed model first extracts the noise residual using learned denoising
kernels to boost the signal-to-noise ratio. After preprocessing, the sparse
noise residuals are fed to a novel Multi-Contextual Convolutional Neural
Network (M-CNET) that uses heterogeneous context size to learn the sparse and
low-amplitude representation of noise residuals. The model performance is
further improved by incorporating a Self-Attention module to focus on the
areas prone to steganographic embedding. A comprehensive set of experiments is
performed to show the proposed scheme's efficacy over prior art. In addition,
an ablation study is given to justify the contribution of various modules of
the proposed architecture.
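As a rough sketch of the pipeline described above (not the authors' code): the fixed "KV" high-pass kernel from classical steganalysis stands in here for M-CNET's learned denoising kernels, and random kernels of sizes 3, 5, and 7 stand in for its learned multi-contextual branches. The kernel values, context sizes, and function names are illustrative assumptions.

```python
import numpy as np

# The classical 5x5 "KV" high-pass kernel used in many steganalysis
# preprocessing stages. It sums to zero, so smooth image content is
# suppressed and only the noise-like residual survives. M-CNET *learns*
# its denoising kernels instead; this fixed filter is only a stand-in.
KV = np.array([[-1,  2,  -2,  2, -1],
               [ 2, -6,   8, -6,  2],
               [-2,  8, -12,  8, -2],
               [ 2, -6,   8, -6,  2],
               [-1,  2,  -2,  2, -1]], dtype=np.float64) / 12.0

def conv2d_same(image, kernel):
    """Plain 2-D cross-correlation with zero padding ('same' output size)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def multi_context_features(residual, context_sizes=(3, 5, 7)):
    """Stack responses from several context (kernel) sizes, channel-wise.
    Random kernels stand in for learned weights in this sketch."""
    rng = np.random.default_rng(0)
    branches = []
    for k in context_sizes:
        kernel = rng.standard_normal((k, k)) / (k * k)
        branches.append(conv2d_same(residual, kernel))
    return np.stack(branches, axis=0)  # shape: (num_contexts, H, W)

# Toy cover image: a smooth horizontal gradient (low-frequency content only).
image = np.tile(np.arange(16, dtype=np.float64), (16, 1))
residual = conv2d_same(image, KV)        # high-pass noise residual
features = multi_context_features(residual)
```

Because the KV kernel annihilates constant and linear image content, the residual of the smooth toy image is (numerically) zero away from the padded border, illustrating why residual-domain inputs are sparse and low-amplitude, which is exactly the representation the multi-context branches are meant to model.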
Related papers
- Policy Gradient-Driven Noise Mask [3.69758875412828]
We propose a novel pretraining pipeline that learns to generate conditional noise masks specifically tailored to improve performance on multi-modal and multi-organ datasets.
A key aspect is that the policy network's role is limited to obtaining an intermediate (or heated) model before fine-tuning.
Results demonstrate that fine-tuning the intermediate models consistently outperforms conventional training algorithms on both classification and generalization to unseen concept tasks.
arXiv Detail & Related papers (2024-04-29T23:53:42Z) - Learning with Noisy Foundation Models [95.50968225050012]
This paper is the first work to comprehensively understand and analyze the nature of noise in pre-training datasets.
We propose a tuning method (NMTune) that applies an affine transformation to the feature space to mitigate the malignant effect of noise and improve generalization.
arXiv Detail & Related papers (2024-03-11T16:22:41Z) - Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z) - Denoising Diffusion Semantic Segmentation with Mask Prior Modeling [61.73352242029671]
We propose to ameliorate the semantic segmentation quality of existing discriminative approaches with a mask prior modeled by a denoising diffusion generative model.
We evaluate the proposed prior modeling with several off-the-shelf segmentors, and our experimental results on ADE20K and Cityscapes demonstrate that our approach achieves competitive quantitative performance.
arXiv Detail & Related papers (2023-06-02T17:47:01Z) - DiffusionAD: Norm-guided One-step Denoising Diffusion for Anomaly Detection [89.49600182243306]
We reformulate the reconstruction process using a diffusion model into a noise-to-norm paradigm.
We propose a rapid one-step denoising paradigm, significantly faster than the traditional iterative denoising in diffusion models.
The segmentation sub-network predicts pixel-level anomaly scores using the input image and its anomaly-free restoration.
arXiv Detail & Related papers (2023-03-15T16:14:06Z) - Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples used in previous work.
arXiv Detail & Related papers (2023-02-19T15:24:37Z) - Poisson2Sparse: Self-Supervised Poisson Denoising From a Single Image [34.27748767631027]
We present a novel self-supervised learning method for single-image denoising.
We approximate traditional iterative optimization algorithms for image denoising with a recurrent neural network.
Our method outperforms the state-of-the-art approaches in terms of PSNR and SSIM.
arXiv Detail & Related papers (2022-06-04T00:08:58Z) - PD-Flow: A Point Cloud Denoising Framework with Normalizing Flows [20.382995180671205]
Point cloud denoising aims to restore clean point clouds from raw observations corrupted by noise and outliers.
We present a novel deep learning-based denoising model that incorporates normalizing flows and noise disentanglement techniques.
arXiv Detail & Related papers (2022-03-11T14:17:58Z) - Heavy-tailed denoising score matching [5.371337604556311]
We develop an iterative noise scaling algorithm to consistently initialise the multiple levels of noise in Langevin dynamics.
On the practical side, our use of heavy-tailed DSM leads to improved score estimation, controllable sampling convergence, and more balanced unconditional generative performance for imbalanced datasets.
arXiv Detail & Related papers (2021-12-17T22:04:55Z) - LAAT: Locally Aligned Ant Technique for discovering multiple faint low dimensional structures of varying density [0.0]
In manifold learning, several studies indicate solutions for removing background noise or noise close to the structure when the density is substantially higher than that exhibited by the noise.
We propose a novel method to extract manifold points in the presence of noise based on the idea of Ant colony optimization.
In contrast to the existing random walk solutions, our technique captures points that are locally aligned with major directions of the manifold.
arXiv Detail & Related papers (2020-09-17T14:22:50Z) - Simultaneous Denoising and Dereverberation Using Deep Embedding Features [64.58693911070228]
We propose a joint training method for simultaneous speech denoising and dereverberation using deep embedding features.
At the denoising stage, the DC network is leveraged to extract noise-free deep embedding features.
At the dereverberation stage, instead of using the unsupervised K-means clustering algorithm, another neural network is utilized to estimate the anechoic speech.
arXiv Detail & Related papers (2020-04-06T06:34:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.