Universal Perturbation-based Secret Key-Controlled Data Hiding
- URL: http://arxiv.org/abs/2311.01696v1
- Date: Fri, 3 Nov 2023 03:57:01 GMT
- Title: Universal Perturbation-based Secret Key-Controlled Data Hiding
- Authors: Donghua Wang, Wen Yao, Tingsong Jiang and Xiaoqian Chen
- Abstract summary: We propose a novel universal perturbation-based secret key-controlled data-hiding method.
Specifically, we optimize a single universal perturbation, which serves as a data carrier that can hide multiple secret images.
Then, we devise a secret key-controlled decoder to extract different secret images from the single container image.
- Score: 7.705884952540923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) are demonstrated to be vulnerable to universal
perturbation, a single quasi-perceptible perturbation that can deceive the DNN
on most images. However, previous works have focused on using universal
perturbations to perform adversarial attacks, while the potential of universal
perturbations as data carriers in data hiding is less explored,
especially for the key-controlled data hiding method. In this paper, we propose
a novel universal perturbation-based secret key-controlled data-hiding method,
realizing data hiding with a single universal perturbation and data decoding
with the secret key-controlled decoder. Specifically, we optimize a single
universal perturbation, which serves as a data carrier that can hide multiple
secret images and be added to most cover images. Then, we devise a secret
key-controlled decoder to extract different secret images from the single
container image constructed by the universal perturbation by using different
secret keys. Moreover, a suppression loss function is proposed to prevent
leakage of the secret image. Furthermore, we adopt a robust module to boost the
decoder's capability against corruption. Finally, a co-joint optimization
strategy is proposed to find the optimal universal perturbation and decoder.
Extensive experiments are conducted on different datasets to demonstrate the
effectiveness of the proposed method. Additionally, physical tests performed
on real-world platforms (e.g., WeChat and Twitter) verify the usability of the
proposed method in practice.
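The core idea (a single universal perturbation added to every cover image, with key-controlled decoders extracting different secrets from the resulting containers) can be illustrated with a toy linear sketch. All dimensions, keys, and the closed-form linear decoders below are illustrative assumptions; the paper jointly trains a DNN decoder and the perturbation rather than solving a linear system.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, s_dim = 32, 5, 4                    # toy cover dim, #covers, secret dim

covers = rng.normal(size=(m, d))          # stand-ins for cover images
delta = 0.05 * rng.normal(size=d)         # the single universal perturbation
containers = covers + delta               # one container per cover

# Two secrets, each tied to its own key.
secrets = {"key_A": rng.normal(size=s_dim),
           "key_B": rng.normal(size=s_dim)}

# Key-controlled linear decoders: fit W_k so that W_k @ container = s_k
# for every container (closed form via the pseudoinverse here; the paper
# instead optimizes a DNN decoder jointly with delta).
X = containers.T                          # (d, m)
decoders = {k: np.tile(s, (m, 1)).T @ np.linalg.pinv(X)
            for k, s in secrets.items()}

# Different keys extract different secrets from the SAME container.
out_A = decoders["key_A"] @ containers[0]
out_B = decoders["key_B"] @ containers[0]
```

Note how one perturbation simultaneously carries both secrets: which secret comes out depends only on which key's decoder is applied.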
Related papers
- Unified Steganography via Implicit Neural Representation [27.804826414990327]
We present U-INR, a novel method for steganography via Implicit Neural Representation (INR). To achieve this, a private key is shared between the data sender and receivers; this key determines the position of the secret data within the INR network. Comprehensive experiments across multiple data types, including images, videos, audio, SDF, and NeRF, demonstrate the generalizability and effectiveness of U-INR.
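A minimal sketch of the key-determined-position idea, assuming a flat weight vector and a NumPy RNG seeded by the shared key (the function names and the direct value overwrite are illustrative simplifications, not U-INR's actual embedding):

```python
import numpy as np

def embed(weights, secret, key):
    """Overwrite secret values at positions chosen by a key-seeded RNG."""
    idx = np.random.default_rng(key).choice(weights.size, secret.size,
                                            replace=False)
    stego = weights.copy()
    stego[idx] = secret
    return stego

def extract(stego, n, key):
    """Re-derive the same positions from the shared key and read them back."""
    idx = np.random.default_rng(key).choice(stego.size, n, replace=False)
    return stego[idx]
```

Only a receiver holding the same key re-derives the embedding positions; a wrong key reads ordinary weight values instead.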
arXiv Detail & Related papers (2025-05-03T08:57:06Z)
- Enhancing Privacy in Semantic Communication over Wiretap Channels leveraging Differential Privacy [51.028047763426265]
Semantic communication (SemCom) improves transmission efficiency by focusing on task-relevant information.
However, transmitting semantic-rich data over insecure channels introduces privacy risks.
This paper proposes a novel SemCom framework that integrates differential privacy mechanisms to protect sensitive semantic features.
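The differential-privacy ingredient can be sketched with the standard Laplace mechanism; the sensitivity, epsilon, and feature values below are placeholders, and the paper applies calibrated noise to learned semantic features rather than a fixed vector:

```python
import numpy as np

def laplace_mechanism(features, sensitivity, epsilon, rng):
    """Release features with Laplace noise of scale sensitivity/epsilon,
    the standard epsilon-DP mechanism for numeric outputs."""
    scale = sensitivity / epsilon
    return features + rng.laplace(loc=0.0, scale=scale, size=features.shape)

# Hypothetical semantic feature vector and privacy budget.
features = np.array([0.2, -1.3, 0.7, 2.1])
private = laplace_mechanism(features, sensitivity=1.0, epsilon=0.5,
                            rng=np.random.default_rng(0))
```

A smaller epsilon means a larger noise scale and stronger privacy at the cost of semantic fidelity.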
arXiv Detail & Related papers (2025-04-23T08:42:44Z)
- DDIM-Driven Coverless Steganography Scheme with Real Key [0.8892527836401771]
Steganography embeds secret information into images by exploiting their redundancy.
In this work, we leverage the Denoising Diffusion Implicit Model (DDIM) to generate high-quality stego-images.
Our method offers low-image-correlation real-key protection by incorporating chaotic encryption.
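Chaotic encryption of the kind mentioned can be sketched with a logistic-map keystream XORed into the payload bytes; the map parameter, key value, and byte quantization below are illustrative, not the paper's exact scheme:

```python
import numpy as np

def logistic_keystream(key, n, r=3.99):
    """Derive a byte keystream from the chaotic logistic map x -> r*x*(1-x);
    the real-valued key in (0, 1) is the initial condition."""
    x = key
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_crypt(data, key):
    """XOR with the keystream; applying it twice restores the plaintext."""
    return data ^ logistic_keystream(key, data.size)
```

Because the map is chaotic, even a tiny change in the key produces a completely different keystream, which is what makes the key "real" rather than a fixed embedding rule.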
arXiv Detail & Related papers (2024-11-10T14:59:29Z)
- Cover-separable Fixed Neural Network Steganography via Deep Generative Models [37.08937194546323]
We propose a Cover-separable Fixed Neural Network Steganography, namely Cs-FNNS.
In Cs-FNNS, we propose a Steganographic Perturbation Search (SPS) algorithm to directly encode the secret data into an imperceptible perturbation.
We demonstrate the superior performance of the proposed method in terms of visual quality and undetectability.
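The cover-separable idea (encode the secret entirely into a perturbation that a fixed decoder reads after the cover is subtracted) reduces, for a toy linear "network", to a pseudoinverse; the paper's SPS algorithm instead searches iteratively against a fixed neural network:

```python
import numpy as np

rng = np.random.default_rng(1)
d, s_dim = 64, 8
W = rng.normal(size=(s_dim, d))     # fixed decoder "network" (here: linear)
secret = rng.normal(size=s_dim)

# Search a perturbation the fixed decoder maps to the secret; for a linear
# decoder the search collapses to a closed-form pseudoinverse solve.
delta = np.linalg.pinv(W) @ secret

cover = rng.normal(size=d)
stego = cover + delta
# Cover-separable decoding: the receiver subtracts the cover, so the
# decoder sees only the perturbation, independent of the cover content.
recovered = W @ (stego - cover)
```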
arXiv Detail & Related papers (2024-07-16T05:47:06Z)
- Information hiding cameras: optical concealment of object information into ordinary images [11.41487037469984]
We introduce an optical information hiding camera integrated with an electronic decoder, jointly optimized through deep learning.
This information hiding-decoding system employs a diffractive optical processor as its front-end, which transforms and hides input images in the form of ordinary-looking patterns that deceive/mislead human observers.
By processing these ordinary-looking output images, a jointly-trained electronic decoder neural network accurately reconstructs the original information hidden within the deceptive output pattern.
arXiv Detail & Related papers (2024-01-15T17:37:27Z)
- DP-DCAN: Differentially Private Deep Contrastive Autoencoder Network for Single-cell Clustering [29.96339380816541]
Deep learning models may leak sensitive information about users.
Differential Privacy (DP) is increasingly used to protect privacy.
In this paper, we take advantage of a unique property of the autoencoder: it outputs only the dimension-reduced vector from the middle of the network.
We design a Differentially Private Deep Contrastive Autoencoder Network (DP-DCAN) by partial network perturbation for single-cell clustering.
arXiv Detail & Related papers (2023-11-06T05:13:29Z)
- Secure Deep-JSCC Against Multiple Eavesdroppers [13.422085141752468]
We propose an end-to-end (E2E) learning-based approach for secure communication against multiple eavesdroppers.
We implement deep neural networks (DNNs) to realize a data-driven secure communication scheme.
Our experiments show that employing the proposed secure neural encoding can decrease the adversarial accuracy by 28%.
arXiv Detail & Related papers (2023-08-05T14:40:35Z)
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict faces software engineers: developing better AI systems versus keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted in six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- Hiding Images in Deep Probabilistic Models [58.23127414572098]
We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security.
arXiv Detail & Related papers (2022-10-05T13:33:25Z)
- Reducing Redundancy in the Bottleneck Representation of the Autoencoders [98.78384185493624]
Autoencoders are a type of unsupervised neural network that can be used to solve various tasks.
We propose a scheme to explicitly penalize feature redundancies in the bottleneck representation.
We tested our approach across different tasks: dimensionality reduction using three different datasets, image compression using the MNIST dataset, and image denoising using the Fashion-MNIST dataset.
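A redundancy penalty of this kind can be sketched as the sum of squared off-diagonal correlations between bottleneck features; the exact penalty used by the paper may differ, and this function only illustrates the idea:

```python
import numpy as np

def redundancy_penalty(z):
    """Sum of squared off-diagonal correlations between bottleneck features
    (rows of z are samples); adding this term to the reconstruction loss
    discourages the encoder from learning duplicated features."""
    zc = z - z.mean(axis=0)
    cov = (zc.T @ zc) / (len(z) - 1)
    std = np.sqrt(np.diag(cov))
    corr = cov / np.outer(std, std)
    off = corr - np.diag(np.diag(corr))
    return float((off ** 2).sum())
```

Independent features give a penalty near zero, while a feature that merely copies another contributes its full squared correlation of one.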
arXiv Detail & Related papers (2022-02-09T18:48:02Z)
- Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective [78.05383266222285]
A human imperceptible perturbation can be generated to fool a deep neural network (DNN) for most images.
A similar phenomenon has been observed in the deep steganography task, where a decoder network can retrieve a secret image back from a slightly perturbed cover image.
We propose two new variants of universal perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is less visible to the human eye.
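The high-pass idea behind HP-UAP can be sketched with an FFT mask that removes low spatial frequencies from a perturbation; the circular cutoff below is an illustrative choice, not the paper's exact filter:

```python
import numpy as np

def high_pass(perturbation, cutoff):
    """Zero out low spatial frequencies of a 2-D perturbation via the FFT,
    keeping only the high-frequency content the human eye is less
    sensitive to."""
    f = np.fft.fftshift(np.fft.fft2(perturbation))
    h, w = perturbation.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.hypot(yy - h / 2, xx - w / 2)   # distance from the DC bin
    f[dist < cutoff] = 0.0
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))
```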
arXiv Detail & Related papers (2021-02-12T12:26:39Z)
- Robust Data Hiding Using Inverse Gradient Attention [82.73143630466629]
In the data hiding task, each pixel of a cover image should be treated differently, since pixels differ in how much modification they can tolerate.
We propose a novel deep data hiding scheme with Inverse Gradient Attention (IGA), combining the ideas of adversarial learning and attention mechanisms.
Empirically, extensive experiments show that the proposed model outperforms the state-of-the-art methods on two prevalent datasets.
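The inverse-gradient weighting can be sketched as follows, assuming a per-pixel gradient map of the decoding loss is already available; the normalization and epsilon are illustrative choices, not the paper's exact formulation:

```python
import numpy as np

def inverse_gradient_attention(grad):
    """Attention map from decoder-loss gradients: pixels whose gradient
    magnitude is small tolerate more embedding, so they receive higher
    attention weight (normalized to [0, 1])."""
    att = 1.0 / (np.abs(grad) + 1e-8)
    return att / att.max()
```

The resulting map is then used to scale the embedding strength per pixel, concentrating the payload where the decoder is least sensitive.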
arXiv Detail & Related papers (2020-11-21T19:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.