Hiding Functions within Functions: Steganography by Implicit Neural Representations
- URL: http://arxiv.org/abs/2312.04743v1
- Date: Thu, 7 Dec 2023 22:55:48 GMT
- Title: Hiding Functions within Functions: Steganography by Implicit Neural Representations
- Authors: Jia Liu, Peng Luo, Yan Ke
- Abstract summary: We propose StegaINR, which uses implicit neural representation (INR) to implement steganography.
StegaINR embeds a secret function into a stego function, which serves as both the message extractor and the stego media.
To our knowledge, this is the first work to introduce INR into steganography.
- Score: 9.630341407412729
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep steganography utilizes the powerful capabilities of deep neural networks to embed and extract messages, but its reliance on an additional message extractor limits its practical use due to the added suspicion it can raise from steganalyzers. To address this problem, we propose StegaINR, which utilizes Implicit Neural Representation (INR) to implement steganography. StegaINR embeds a secret function into a stego function, which serves as both the message extractor and the stego media for secure transmission on a public channel. Recipients need only use a shared key to recover the secret function from the stego function, allowing them to obtain the secret message. Our approach makes use of continuous functions, enabling it to handle various types of messages. To our knowledge, this is the first work to introduce INR into steganography. We performed evaluations on image and climate data to test our method in different deployment contexts.
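As a rough illustration of the shared-key recovery idea described in the abstract (this is a toy sketch, not StegaINR's actual procedure, which trains the stego function so that it doubles as extractor and stego media), a secret function's flattened weights could be hidden at key-selected positions inside a larger stego weight vector:

```python
import numpy as np

def embed(cover_w, secret_w, key):
    """Place the secret function's flattened weights at positions
    selected by a PRNG seeded with the shared key (hypothetical toy)."""
    rng = np.random.default_rng(key)
    idx = rng.choice(cover_w.size, size=secret_w.size, replace=False)
    stego_w = cover_w.copy()
    stego_w[idx] = secret_w
    return stego_w

def extract(stego_w, n_secret, key):
    """Recipient regenerates the same index set from the shared key
    and reads the secret weights back out."""
    rng = np.random.default_rng(key)
    idx = rng.choice(stego_w.size, size=n_secret, replace=False)
    return stego_w[idx]
```

Without the key, the stego weights look like any other network; with it, the recipient recovers the secret weights exactly and can rebuild the secret function.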
Related papers
- Cover-separable Fixed Neural Network Steganography via Deep Generative Models [37.08937194546323]
We propose a Cover-separable Fixed Neural Network Steganography, namely Cs-FNNS.
In Cs-FNNS, we propose a Steganographic Perturbation Search (SPS) algorithm to directly encode the secret data into an imperceptible perturbation.
We demonstrate the superior performance of the proposed method in terms of visual quality and undetectability.
arXiv Detail & Related papers (2024-07-16T05:47:06Z)
- DiffStega: Towards Universal Training-Free Coverless Image Steganography with Diffusion Models [38.17146643777956]
Coverless image steganography (CIS) enhances imperceptibility by not using any cover image.
Recent works have utilized text prompts as keys in CIS through diffusion models.
We propose DiffStega, an innovative training-free diffusion-based CIS strategy for universal application.
arXiv Detail & Related papers (2024-07-15T06:15:49Z)
- FoC: Figure out the Cryptographic Functions in Stripped Binaries with LLMs [54.27040631527217]
We propose a novel framework called FoC to Figure out the Cryptographic functions in stripped binaries.
FoC-BinLLM outperforms ChatGPT by 14.61% on the ROUGE-L score.
FoC-Sim outperforms the previous best methods with a 52% higher Recall@1.
arXiv Detail & Related papers (2024-03-27T09:45:33Z)
- Steganography for Neural Radiance Fields by Backdooring [6.29495604869364]
We propose a novel model steganography scheme with implicit neural representation.
The NeRF model generates a secret viewpoint image, which serves as a backdoor.
We train a message extractor using overfitting to establish a one-to-one mapping between the secret message and the secret viewpoint image.
arXiv Detail & Related papers (2023-09-19T10:27:38Z)
- Exploring Incompatible Knowledge Transfer in Few-shot Image Generation [107.81232567861117]
Few-shot image generation learns to generate diverse and high-fidelity images from a target domain using a few reference samples.
Existing FSIG methods select, preserve, and transfer prior knowledge from a source generator to learn the target generator.
We propose knowledge truncation, which is a complementary operation to knowledge preservation and is implemented by a lightweight pruning-based method.
arXiv Detail & Related papers (2023-04-15T14:57:15Z)
- Perfectly Secure Steganography Using Minimum Entropy Coupling [60.154855689780796]
We show that a steganography procedure is perfectly secure under Cachin's (1998) information-theoretic model of steganography if and only if it is induced by a coupling.
We also show that, among perfectly secure procedures, a procedure maximizes information throughput if and only if it is induced by a minimum entropy coupling.
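The coupling object at the heart of this result can be sketched with a greedy heuristic (not the paper's construction; computing an exact minimum-entropy coupling is NP-hard): repeatedly match the largest remaining masses of the two marginals.

```python
import numpy as np

def greedy_min_entropy_coupling(p, q, eps=1e-12):
    """Greedy heuristic for a low-entropy coupling of marginals p and q:
    each step pairs the largest remaining masses, zeroing at least one
    of them. Returns a joint table M with row sums p and column sums q."""
    p = np.asarray(p, dtype=float).copy()
    q = np.asarray(q, dtype=float).copy()
    M = np.zeros((p.size, q.size))
    while p.sum() > eps:
        i, j = p.argmax(), q.argmax()
        m = min(p[i], q[j])          # transferable mass for this pair
        M[i, j] += m
        p[i] -= m
        q[j] -= m
    return M
```

Any valid coupling has these marginals; the minimum-entropy one additionally concentrates the joint mass as tightly as possible, which is what maximizes steganographic throughput in the paper's setting.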
arXiv Detail & Related papers (2022-10-24T17:40:07Z)
- Hiding Images in Deep Probabilistic Models [58.23127414572098]
We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of our SinGAN approach in terms of extraction accuracy and model security.
arXiv Detail & Related papers (2022-10-05T13:33:25Z)
- Deniable Steganography [30.729865153060985]
Steganography conceals a secret message within cover media, generating stego media that can be transmitted over public channels without drawing suspicion.
As its countermeasure, steganalysis mainly aims to detect whether the secret message is hidden in a given media.
We propose a receiver-deniable steganographic scheme that uses deep neural networks (DNNs) to deal with receiver-side coercive attacks.
arXiv Detail & Related papers (2022-05-25T09:00:30Z)
- Intrinsic Probing through Dimension Selection [69.52439198455438]
Most modern NLP systems make use of pre-trained contextual representations that attain astonishingly high performance on a variety of tasks.
Such high performance should not be possible unless some form of linguistic structure inheres in these representations, and a wealth of research has sprung up on probing for it.
In this paper, we draw a distinction between intrinsic probing, which examines how linguistic information is structured within a representation, and the extrinsic probing popular in prior work, which only argues for the presence of such information by showing that it can be successfully extracted.
arXiv Detail & Related papers (2020-10-06T15:21:08Z)
- Near-imperceptible Neural Linguistic Steganography via Self-Adjusting Arithmetic Coding [88.31226340759892]
We present a new linguistic steganography method which encodes secret messages using self-adjusting arithmetic coding based on a neural language model.
Human evaluations show that 51% of generated cover texts can indeed fool eavesdroppers.
arXiv Detail & Related papers (2020-10-01T20:40:23Z)
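The encoding idea behind the last entry can be sketched with a much-simplified fixed "bins" scheme (a common baseline, not the paper's self-adjusting arithmetic coding): at each generation step, the 2**k most probable tokens are indexed by k secret bits, so a receiver with the same model recovers the bits from the chosen tokens. The token distributions below are random placeholders standing in for a neural language model's next-token probabilities.

```python
import numpy as np

def encode_bits(bits, dists, k=2):
    """Hide k secret bits per step by selecting among the 2**k most
    probable tokens of each next-token distribution (toy bins scheme)."""
    tokens, pos = [], 0
    for p in dists:
        top = np.argsort(p)[::-1][:2**k]            # candidate tokens
        idx = int("".join(map(str, bits[pos:pos + k])), 2)
        tokens.append(int(top[idx]))
        pos += k
    return tokens

def decode_bits(tokens, dists, k=2):
    """Receiver recomputes the same candidate sets and reads the bits
    back from each token's rank among the candidates."""
    bits = []
    for t, p in zip(tokens, dists):
        top = list(np.argsort(p)[::-1][:2**k])
        bits.extend(int(b) for b in format(top.index(t), f"0{k}b"))
    return bits
```

The paper's arithmetic-coding approach improves on such fixed-bin schemes by adapting the per-step capacity to the model's actual probabilities, which is what keeps the stego text statistically close to ordinary model output.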
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.