MCTSteg: A Monte Carlo Tree Search-based Reinforcement Learning
Framework for Universal Non-additive Steganography
- URL: http://arxiv.org/abs/2103.13689v1
- Date: Thu, 25 Mar 2021 09:12:08 GMT
- Title: MCTSteg: A Monte Carlo Tree Search-based Reinforcement Learning
Framework for Universal Non-additive Steganography
- Authors: Xianbo Mo and Shunquan Tan and Bin Li and Jiwu Huang
- Abstract summary: We propose an automatic non-additive steganographic distortion learning framework called MCTSteg.
Its self-learning characteristic and domain-independent reward function make MCTSteg the first reported universal non-additive steganographic framework.
- Score: 40.622844703837046
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research has shown that non-additive image steganographic frameworks
effectively improve security performance by adjusting the distortion
distribution. However, to the best of our knowledge, all existing non-additive
proposals are based on handcrafted policies and can be applied only to a
specific image domain, which severely prevents non-additive steganography from
realizing its full potential. In this paper, we propose an automatic
non-additive steganographic distortion learning framework called MCTSteg to
remove these restrictions. Guided by the reinforcement learning paradigm, we
combine Monte Carlo Tree Search (MCTS) with a steganalyzer-based environmental
model to build MCTSteg. MCTS makes sequential decisions to adjust the
distortion distribution without human intervention, and the environmental
model provides feedback on each decision. Owing to its self-learning
characteristic and domain-independent reward function, MCTSteg is the first
reported universal non-additive steganographic framework that works in both
the spatial and JPEG domains. Extensive experimental results show that MCTSteg
effectively withstands detection by both handcrafted-feature-based and
deep-learning-based steganalyzers. In both the spatial and JPEG domains, the
security performance of MCTSteg consistently outperforms the state of the art
by a clear margin under different scenarios.
Related papers
- Inter- and intra-uncertainty based feature aggregation model for semi-supervised histopathology image segmentation [21.973620376753594]
Hierarchical prediction uncertainty within the student model (intra-uncertainty) and image prediction uncertainty (inter-uncertainty) have not been fully utilized by existing methods.
We propose a novel inter- and intra-uncertainty regularization method to measure and constrain both inter- and intra-inconsistencies in the teacher-student architecture.
We also propose a new two-stage network with pseudo-mask guided feature aggregation (PG-FANet) as the segmentation model.
arXiv Detail & Related papers (2024-03-19T14:32:21Z) - Progressive Feature Self-reinforcement for Weakly Supervised Semantic
Segmentation [55.69128107473125]
We propose a single-stage approach for Weakly Supervised Semantic Segmentation (WSSS) with image-level labels.
We adaptively partition the image content into deterministic regions (e.g., confident foreground and background) and uncertain regions (e.g., object boundaries and misclassified categories) for separate processing.
Building upon this, we introduce a complementary self-enhancement method that constrains the semantic consistency between these confident regions and an augmented image with the same class labels.
arXiv Detail & Related papers (2023-12-14T13:21:52Z) - Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - JPEG Steganalysis Based on Steganographic Feature Enhancement and Graph
Attention Learning [15.652077779677091]
We introduce a novel representation learning algorithm for JPEG steganalysis.
The graph attention learning module is designed to avoid the global feature loss caused by the local feature learning of the convolutional neural network.
The feature enhancement module is applied to prevent the stacking of convolutional layers from weakening the steganographic information.
arXiv Detail & Related papers (2023-02-05T01:42:19Z) - USER: Unified Semantic Enhancement with Momentum Contrast for Image-Text
Retrieval [115.28586222748478]
Image-Text Retrieval (ITR) aims at searching for the target instances that are semantically relevant to the given query from the other modality.
Existing approaches typically suffer from two major limitations.
arXiv Detail & Related papers (2023-01-17T12:42:58Z) - Self-supervised Correlation Mining Network for Person Image Generation [9.505343361614928]
Person image generation aims to perform non-rigid deformation on source images.
We propose a Self-supervised Correlation Mining Network (SCM-Net) to rearrange the source images in the feature space.
For improving the fidelity of cross-scale pose transformation, we propose a graph based Body Structure Retaining Loss.
arXiv Detail & Related papers (2021-11-26T03:57:46Z) - FREE: Feature Refinement for Generalized Zero-Shot Learning [86.41074134041394]
Generalized zero-shot learning (GZSL) has achieved significant progress, with many efforts dedicated to overcoming the problems of visual-semantic domain gap and seen-unseen bias.
Most existing methods directly use feature extraction models trained on ImageNet alone, ignoring the cross-dataset bias between ImageNet and GZSL benchmarks.
We propose a simple yet effective GZSL method, termed feature refinement for generalized zero-shot learning (FREE) to tackle the above problem.
arXiv Detail & Related papers (2021-07-29T08:11:01Z) - Generative Self-training for Cross-domain Unsupervised Tagged-to-Cine
MRI Synthesis [10.636015177721635]
We propose a novel generative self-training framework with continuous value prediction and regression objective for cross-domain image synthesis.
Specifically, we propose to filter the pseudo-label with an uncertainty mask, and quantify the predictive confidence of generated images with practical variational Bayes learning.
arXiv Detail & Related papers (2021-06-23T16:19:00Z) - Self-supervised Equivariant Attention Mechanism for Weakly Supervised
Semantic Segmentation [93.83369981759996]
We propose a self-supervised equivariant attention mechanism (SEAM) to discover additional supervision and narrow the gap.
Our method is based on the observation that equivariance is an implicit constraint in fully supervised semantic segmentation.
We propose consistency regularization on predicted CAMs from various transformed images to provide self-supervision for network learning.
arXiv Detail & Related papers (2020-04-09T14:57:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.