Deep Cross-Modal Steganography Using Neural Representations
- URL: http://arxiv.org/abs/2307.08671v2
- Date: Tue, 18 Jul 2023 08:12:14 GMT
- Title: Deep Cross-Modal Steganography Using Neural Representations
- Authors: Gyojin Han, Dong-Jae Lee, Jiwan Hur, Jaehyun Choi, Junmo Kim
- Abstract summary: We propose a cross-modal steganography framework using Implicit Neural Representations (INRs) to hide secret data in cover images.
The proposed framework employs INRs to represent the secret data, which can handle data of various modalities and resolutions.
- Score: 24.16485513152904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Steganography is the process of embedding secret data into another message or
data, in such a way that it is not easily noticeable. With the advancement of
deep learning, Deep Neural Networks (DNNs) have recently been utilized in
steganography. However, existing deep steganography techniques are limited in
scope, as they focus on specific data types and are not effective for
cross-modal steganography. Therefore, we propose a deep cross-modal
steganography framework using Implicit Neural Representations (INRs) to hide
secret data of various formats in cover images. The proposed framework employs
INRs to represent the secret data, which can handle data of various modalities
and resolutions. Experiments on various secret datasets of diverse types
demonstrate that the proposed approach is expandable and capable of
accommodating different modalities.
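The core idea of representing the secret data as an INR can be illustrated with a toy sketch. The paper's framework uses trained MLP-based INRs hidden inside cover images; the code below is not that method, only a minimal stand-in showing the key property INRs provide: the secret is stored as the weights of a continuous function of coordinates and can be sampled at any resolution. Random Fourier features with a linear readout are used here purely as an assumed, easy-to-fit substitute for an MLP; all names and parameters are illustrative.

```python
import numpy as np

# Toy "implicit representation" of a 1-D secret signal: fit a continuous
# function of coordinates whose weights fully encode the signal.
rng = np.random.default_rng(0)

n = 32
x = np.linspace(0.0, 1.0, n, endpoint=False)
secret = np.sin(2 * np.pi * x) + 0.5 * np.sin(6 * np.pi * x)  # secret samples

# Coordinate encoding: random Fourier features (stand-in for an MLP INR).
B = rng.normal(scale=4.0, size=(1, 64))  # random frequency matrix (assumed scale)

def features(coords):
    proj = 2 * np.pi * coords[:, None] * B          # (N, 64)
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=-1)  # (N, 128)

Phi = features(x)
w, *_ = np.linalg.lstsq(Phi, secret, rcond=None)    # fit the representation weights

# The secret now lives entirely in `w`: reconstruct on the original grid...
recon = features(x) @ w
print("max reconstruction error:", np.abs(recon - secret).max())

# ...or sample at 4x resolution, since the representation is continuous.
x_hi = np.linspace(0.0, 1.0, 4 * n, endpoint=False)
recon_hi = features(x_hi) @ w
print("high-res sample count:", recon_hi.shape[0])
```

Because the representation is a function rather than a fixed-size grid, the same mechanism applies to audio, video, or 3D data by changing the coordinate dimension, which is what makes the INR-based framing modality-agnostic.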
Related papers
- Cover-separable Fixed Neural Network Steganography via Deep Generative Models [37.08937194546323]
We propose a Cover-separable Fixed Neural Network Steganography, namely Cs-FNNS.
In Cs-FNNS, we propose a Steganographic Perturbation Search (SPS) algorithm to directly encode the secret data into an imperceptible perturbation.
We demonstrate the superior performance of the proposed method in terms of visual quality and undetectability.
arXiv Detail & Related papers (2024-07-16T05:47:06Z) - Flexible Cross-Modal Steganography via Implicit Representations [41.777197453697056]
Our framework is designed to hide multiple pieces of data effectively without altering the original INR, ensuring high-quality stego data.
Our framework can perform cross-modal steganography for various modalities including image, audio, video, and 3D shapes.
arXiv Detail & Related papers (2023-12-09T07:51:01Z) - Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z) - Steganography of Steganographic Networks [23.85364443400414]
Steganography is a technique for covert communication between two parties.
We propose a novel scheme for steganography of steganographic networks in this paper.
arXiv Detail & Related papers (2023-02-28T12:27:34Z) - Hiding Images in Deep Probabilistic Models [58.23127414572098]
We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security.
arXiv Detail & Related papers (2022-10-05T13:33:25Z) - Convolutional Learning on Multigraphs [153.20329791008095]
We develop convolutional information processing on multigraphs and introduce convolutional multigraph neural networks (MGNNs).
To capture the complex dynamics of information diffusion within and across each of the multigraph's classes of edges, we formalize a convolutional signal processing model.
We develop a multigraph learning architecture, including a sampling procedure to reduce computational complexity.
The introduced architecture is applied towards optimal wireless resource allocation and a hate speech localization task, offering improved performance over traditional graph neural networks.
arXiv Detail & Related papers (2022-09-23T00:33:04Z) - M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 Deepfake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z) - A Multiscale Graph Convolutional Network for Change Detection in Homogeneous and Heterogeneous Remote Sensing Images [12.823633963080281]
Change detection (CD) in remote sensing images has been an ever-expanding area of research.
In this paper, a novel CD method based on the graph convolutional network (GCN) and multiscale object-based technique is proposed for both homogeneous and heterogeneous images.
arXiv Detail & Related papers (2021-02-16T09:26:31Z) - Multi-Image Steganography Using Deep Neural Networks [9.722040907570072]
Steganography is the science of hiding a secret message within an ordinary public message.
We aim to utilize deep neural networks for the encoding and decoding of multiple secret images inside a single cover image of the same resolution.
arXiv Detail & Related papers (2021-01-02T01:51:38Z) - Robust Data Hiding Using Inverse Gradient Attention [82.73143630466629]
In the data hiding task, each pixel of the cover image should be treated differently, since pixels differ in how much modification they can tolerate.
We propose a novel deep data hiding scheme with Inverse Gradient Attention (IGA), combining the ideas of adversarial learning and attention mechanisms.
Empirically, extensive experiments show that the proposed model outperforms the state-of-the-art methods on two prevalent datasets.
arXiv Detail & Related papers (2020-11-21T19:08:23Z) - Adaptive Context-Aware Multi-Modal Network for Depth Completion [107.15344488719322]
We propose adopting graph propagation to capture the observed spatial contexts.
We then apply the attention mechanism on the propagation, which encourages the network to model the contextual information adaptively.
Finally, we introduce the symmetric gated fusion strategy to exploit the extracted multi-modal features effectively.
Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves the state-of-the-art performance on two benchmarks.
arXiv Detail & Related papers (2020-08-25T06:00:06Z)
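Several of the papers above frame image steganography as hiding bits in pixel values with minimal visual impact. The classic non-learned baseline that deep methods are measured against is least-significant-bit (LSB) embedding, sketched here as a toy illustration; this is not any listed paper's method, and all names are illustrative.

```python
import numpy as np

# LSB embedding: overwrite the lowest bit of each cover pixel with a secret
# bit, so no pixel changes by more than 1 intensity level.

def embed_lsb(cover, bits):
    stego = cover.copy()
    flat = stego.ravel()                              # view into the copy
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # replace LSBs
    return stego

def extract_lsb(stego, n_bits):
    return stego.ravel()[:n_bits] & 1                 # read LSBs back out

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy cover image
secret_bits = rng.integers(0, 2, size=16, dtype=np.uint8)   # toy secret payload

stego = embed_lsb(cover, secret_bits)
recovered = extract_lsb(stego, secret_bits.size)

print("bits recovered exactly:", bool(np.array_equal(recovered, secret_bits)))
print("max pixel change:", int(np.abs(stego.astype(int) - cover.astype(int)).max()))
```

LSB embedding is trivially detectable by statistical steganalysis, which is precisely the gap the learned approaches above (perturbation search, attention-weighted embedding, INR-based hiding) aim to close.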
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.