Green Steganalyzer: A Green Learning Approach to Image Steganalysis
- URL: http://arxiv.org/abs/2306.04008v1
- Date: Tue, 6 Jun 2023 20:43:07 GMT
- Title: Green Steganalyzer: A Green Learning Approach to Image Steganalysis
- Authors: Yao Zhu, Xinyu Wang, Hong-Shuo Chen, Ronald Salloum, C.-C. Jay Kuo
- Abstract summary: Green Steganalyzer (GS) is a learning solution to image steganalysis based on the green learning paradigm.
GS consists of three modules: 1) pixel-based anomaly prediction, 2) embedding location detection, and 3) decision fusion for image-level detection.
- Score: 30.486433532000344
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: A novel learning solution to image steganalysis based on the green learning
paradigm, called Green Steganalyzer (GS), is proposed in this work. GS consists
of three modules: 1) pixel-based anomaly prediction, 2) embedding location
detection, and 3) decision fusion for image-level detection. In the first
module, GS decomposes an image into patches, adopts Saab transforms for feature
extraction, and conducts self-supervised learning to predict an anomaly score
for the center pixel of each patch. In the second module, GS analyzes the anomaly scores of
a pixel and its neighborhood to find pixels of higher embedding probabilities.
In the third module, GS focuses on pixels of higher embedding probabilities and
fuses their anomaly scores to make final image-level classification. Compared
with state-of-the-art deep-learning models, GS achieves comparable detection
performance against S-UNIWARD, WOW and HILL steganography schemes with
significantly lower computational complexity and a smaller model size, making
it attractive for mobile/edge applications. Furthermore, GS is mathematically
transparent because of its modular design.
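The three-module pipeline described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the Saab-transform features and learned anomaly predictor are replaced by a plain least-squares pixel predictor, and the patch size, the fraction of "high embedding probability" pixels, and the toy embedding are all hypothetical choices.

```python
import numpy as np

# Illustrative sketch of the three-module GS pipeline described in the
# abstract. NOT the authors' implementation: Saab-transform features and
# the learned anomaly predictor are replaced by a least-squares pixel
# predictor, and all sizes/thresholds below are hypothetical.

PATCH = 5            # hypothetical patch size (center pixel plus context)
TOP_FRACTION = 0.05  # hypothetical fraction of pixels kept for fusion

def extract_patches(img, patch=PATCH):
    """Module 1, step 1: decompose the image into overlapping patches."""
    h, w = img.shape
    r = patch // 2
    patches, centers = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            patches.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
            centers.append(img[i, j])
    return np.asarray(patches, dtype=float), np.asarray(centers, dtype=float)

def anomaly_scores(img):
    """Module 1: predict each center pixel from its neighborhood and use
    the absolute prediction residual as its anomaly score."""
    patches, centers = extract_patches(img)
    mid = patches.shape[1] // 2
    X = np.delete(patches, mid, axis=1)            # features: neighbors only
    X1 = np.hstack([X, np.ones((X.shape[0], 1))])  # add a bias column
    coef, *_ = np.linalg.lstsq(X1, centers, rcond=None)
    return np.abs(X1 @ coef - centers)

def image_score(img):
    """Modules 2 and 3: keep the pixels with the highest anomaly scores
    (a stand-in for 'higher embedding probability') and fuse them by
    averaging; thresholding this value gives the image-level decision."""
    scores = anomaly_scores(img)
    k = max(1, int(TOP_FRACTION * scores.size))
    return np.sort(scores)[-k:].mean()

# Toy demo: a smooth synthetic cover vs. a +/-1 embedded version of it.
rng = np.random.default_rng(0)
xs, ys = np.meshgrid(np.arange(32), np.arange(32))
cover = (xs + ys).astype(float)
stego = cover.copy()
flips = rng.random(cover.shape) < 0.1
stego[flips] += rng.choice([-1.0, 1.0], size=cover.shape)[flips]
print(image_score(cover), image_score(stego))
```

On this toy example, the stego image receives a markedly higher fused score than the smooth cover, mirroring the intended behavior of the decision-fusion module.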
Related papers
- GS2Pose: Two-stage 6D Object Pose Estimation Guided by Gaussian Splatting [4.465134753953128]
This paper proposes a new method for accurate and robust 6D pose estimation of novel objects, named GS2Pose.
By introducing 3D Gaussian splatting, GS2Pose can utilize the reconstruction results without requiring a high-quality CAD model.
The code for GS2Pose will soon be released on GitHub.
arXiv Detail & Related papers (2024-11-06T10:07:46Z)
- GS-Net: Generalizable Plug-and-Play 3D Gaussian Splatting Module [19.97023389064118]
We propose GS-Net, a plug-and-play 3DGS module that densifies Gaussian ellipsoids from sparse SfM point clouds.
Experiments demonstrate that applying GS-Net to 3DGS yields a PSNR improvement of 2.08 dB for conventional viewpoints and 1.86 dB for novel viewpoints.
arXiv Detail & Related papers (2024-09-17T16:03:19Z)
- ShapeSplat: A Large-scale Dataset of Gaussian Splats and Their Self-Supervised Pretraining [104.34751911174196]
We build a large-scale dataset of 3DGS using ShapeNet and ModelNet datasets.
Our dataset ShapeSplat consists of 65K objects from 87 unique categories.
We introduce Gaussian-MAE, which highlights the unique benefits of representation learning from Gaussian parameters.
arXiv Detail & Related papers (2024-08-20T14:49:14Z)
- Superpixel Graph Contrastive Clustering with Semantic-Invariant Augmentations for Hyperspectral Images [64.72242126879503]
Hyperspectral images (HSI) clustering is an important but challenging task.
We first use 3-D and 2-D hybrid convolutional neural networks to extract the high-order spatial and spectral features of HSI.
We then design a superpixel graph contrastive clustering model to learn discriminative superpixel representations.
arXiv Detail & Related papers (2024-03-04T07:40:55Z)
- GS-IR: 3D Gaussian Splatting for Inverse Rendering [71.14234327414086]
We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian Splatting (GS).
We extend GS, a top-performance representation for novel view synthesis, to estimate scene geometry, surface material, and environment illumination from multi-view images captured under unknown lighting conditions.
The flexible and expressive GS representation allows us to achieve fast and compact geometry reconstruction, photorealistic novel view synthesis, and effective physically-based rendering.
arXiv Detail & Related papers (2023-11-26T02:35:09Z)
- SSG2: A new modelling paradigm for semantic segmentation [0.0]
State-of-the-art models in semantic segmentation operate on single, static images, generating corresponding segmentation masks.
Inspired by work on semantic change detection, we introduce a methodology that leverages a sequence of observables generated for each static input image.
By adding this "temporal" dimension, we exploit strong signal correlations between successive observations in the sequence to reduce error rates.
We evaluate SSG2 across three diverse datasets: UrbanMonitor, featuring orthoimage tiles from Darwin, Australia with five spectral bands and 0.2m spatial resolution; and ISPRS Potsdam, which includes true orthophoto images with multiple spectral bands and a 5cm ground sampling distance.
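The idea that fusing a sequence of observations reduces error rates can be illustrated with a toy experiment. This is not the SSG2 model; it only demonstrates the underlying intuition, with errors made independent across the sequence by construction.

```python
import numpy as np

# Toy illustration of the intuition behind SSG2 (not the SSG2 model):
# fusing a sequence of noisy per-pixel predictions for the same image
# lowers the error rate when the per-observation errors are (here, by
# construction) independent.
rng = np.random.default_rng(1)
truth = rng.integers(0, 2, size=(64, 64))        # ground-truth binary mask

def noisy_prediction(p_err=0.2):
    """One 'observable': the true mask with p_err of its pixels flipped."""
    flip = rng.random(truth.shape) < p_err
    return np.where(flip, 1 - truth, truth)

single = noisy_prediction()
# Majority vote over 9 observations (odd count, so no ties).
fused = np.mean([noisy_prediction() for _ in range(9)], axis=0) > 0.5
print(np.mean(single != truth), np.mean(fused != truth))
```

With 20% independent per-pixel noise, majority voting over nine observations drops the pixel error rate from roughly 0.2 to about 0.02; real successive observations are correlated, so practical gains are smaller.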
arXiv Detail & Related papers (2023-10-12T19:08:03Z)
- Hierarchical Transformer for Survival Prediction Using Multimodality Whole Slide Images and Genomics [63.76637479503006]
Learning good representation of giga-pixel level whole slide pathology images (WSI) for downstream tasks is critical.
This paper proposes a hierarchical-based multimodal transformer framework that learns a hierarchical mapping between pathology images and corresponding genes.
Our architecture requires fewer GPU resources compared with benchmark methods while maintaining better WSI representation ability.
arXiv Detail & Related papers (2022-11-29T23:47:56Z)
- A-PixelHop: A Green, Robust and Explainable Fake-Image Detector [27.34087987867584]
A novel method for detecting CNN-generated images, called Attentive PixelHop (or A-PixelHop), is proposed in this work.
It has three advantages: 1) low computational complexity and a small model size, 2) high detection performance against a wide range of generative models, and 3) mathematical transparency.
arXiv Detail & Related papers (2021-11-07T06:31:26Z)
- Self-supervised Geometric Perception [96.89966337518854]
Self-supervised geometric perception is a framework to learn a feature descriptor for correspondence matching without any ground-truth geometric model labels.
We show that SGP achieves state-of-the-art performance that is on-par or superior to the supervised oracles trained using ground-truth labels.
arXiv Detail & Related papers (2021-03-04T15:34:43Z)
- Uncertainty Inspired RGB-D Saliency Detection [70.50583438784571]
We propose the first framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a generative architecture to achieve probabilistic RGB-D saliency detection.
Results on six challenging RGB-D benchmark datasets show our approach's superior performance in learning the distribution of saliency maps.
arXiv Detail & Related papers (2020-09-07T13:01:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.