Graph-Stega: Semantic Controllable Steganographic Text Generation Guided
by Knowledge Graph
- URL: http://arxiv.org/abs/2006.08339v1
- Date: Tue, 2 Jun 2020 06:53:21 GMT
- Title: Graph-Stega: Semantic Controllable Steganographic Text Generation Guided
by Knowledge Graph
- Authors: Zhongliang Yang, Baitao Gong, Yamin Li, Jinshuai Yang, Zhiwen Hu,
Yongfeng Huang
- Abstract summary: This paper proposes a new text generative steganography method that is quite different from existing models.
We use a Knowledge Graph (KG) to guide the generation of steganographic sentences.
The experimental results show that the proposed model can guarantee both the quality of the generated text and its semantic expression.
- Score: 29.189037080306353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most of the existing text generative steganographic methods are based on
coding the conditional probability distribution of each word during the
generation process, and then selecting specific words according to the secret
information, so as to achieve information hiding. Such methods have
limitations that may bring potential security risks. Firstly, as the
embedding rate increases, these models will choose words with lower
conditional probability, which reduces the quality of the generated
steganographic texts; secondly, they cannot control the semantic expression
of the final generated steganographic text. This paper proposes a new text
generative steganography method that is quite different from existing
models. We use a Knowledge Graph (KG) to guide the generation of steganographic
sentences. On the one hand, we hide the secret information by coding the path
in the knowledge graph, but not the conditional probability of each generated
word; on the other hand, we can control the semantic expression of the
generated steganographic text to a certain extent. The experimental results
show that the proposed model can guarantee both the quality of the generated
text and its semantic expression, supplementing and improving current
text generation steganography.
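The path-coding idea can be illustrated with a minimal sketch. The toy graph, entity names, and bit-allocation rule (floor(log2 n) bits per hop, with edges in a canonical order shared with the receiver) are illustrative assumptions, not the paper's exact algorithm.

```python
def embed_bits_in_path(graph, start, bits):
    """Walk `graph` from `start`, consuming secret bits to choose each
    outgoing edge; the resulting path carries the hidden message."""
    path, node, i = [start], start, 0
    while i < len(bits) and graph.get(node):
        edges = sorted(graph[node])              # canonical order shared with the receiver
        capacity = len(edges).bit_length() - 1   # floor(log2(#edges)) bits per hop
        if capacity == 0:                        # a single edge carries no information
            node = edges[0]
            path.append(node)
            continue
        chunk = bits[i:i + capacity].ljust(capacity, "0")
        node = edges[int(chunk, 2)]              # secret bits pick the edge
        path.append(node)
        i += capacity
    return path

# Toy knowledge graph: entity -> reachable entities/attributes.
kg = {
    "film": ["actor", "director", "genre", "year"],
    "actor": ["award", "birthplace"],
}
# "00" selects "actor" among 4 edges, then "1" selects "birthplace" among 2.
print(embed_bits_in_path(kg, "film", "001"))   # ['film', 'actor', 'birthplace']
```

A receiver who shares the graph and the ordering convention recovers the bits by re-deriving each chosen edge's index along the path.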
Related papers
- Zero-shot Generative Linguistic Steganography [31.19052670719132]
We propose a novel zero-shot approach based on in-context learning for linguistic steganography to achieve better perceptual and statistical imperceptibility.
Our experimental results indicate that our method produces $1.926\times$ more innocent and intelligible stegotext than any other method.
arXiv Detail & Related papers (2024-03-16T08:31:25Z)
- Generating Faithful Text From a Knowledge Graph with Noisy Reference Text [26.6775578332187]
We develop a KG-to-text generation model that can generate faithful natural-language text from a given graph.
Our framework incorporates two core ideas: Firstly, we utilize contrastive learning to enhance the model's ability to differentiate between faithful and hallucinated information in the text.
Secondly, we empower the decoder to control the level of hallucination in the generated text by employing a controllable text generation technique.
arXiv Detail & Related papers (2023-08-12T07:12:45Z)
- GlyphDiffusion: Text Generation as Image Generation [100.98428068214736]
We propose GlyphDiffusion, a novel diffusion approach for text generation via text-guided image generation.
Our key idea is to render the target text as a glyph image containing visual language content.
Our model also makes significant improvements compared to recent diffusion models.
arXiv Detail & Related papers (2023-04-25T02:14:44Z)
- SpaText: Spatio-Textual Representation for Controllable Image Generation [61.89548017729586]
SpaText is a new method for text-to-image generation using open-vocabulary scene control.
In addition to a global text prompt that describes the entire scene, the user provides a segmentation map.
We show its effectiveness on two state-of-the-art diffusion models: pixel-based and latent-conditional-based.
arXiv Detail & Related papers (2022-11-25T18:59:10Z)
- Bootstrapping Text Anonymization Models with Distant Supervision [2.121963121603413]
We propose a novel method to bootstrap text anonymization models based on distant supervision.
Instead of requiring manually labeled training data, the approach relies on a knowledge graph expressing the background information assumed to be publicly available.
arXiv Detail & Related papers (2022-05-13T21:10:14Z)
- Hierarchical Heterogeneous Graph Representation Learning for Short Text Classification [60.233529926965836]
We propose a new method called SHINE, which is based on graph neural network (GNN) for short text classification.
First, we model the short text dataset as a hierarchical heterogeneous graph consisting of word-level component graphs.
Then, we dynamically learn a short document graph that facilitates effective label propagation among similar short texts.
arXiv Detail & Related papers (2021-10-30T05:33:05Z)
- A Plug-and-Play Method for Controlled Text Generation [38.283313068622085]
We present a plug-and-play decoding method for controlled language generation that is so simple and intuitive, it can be described in a single sentence.
Despite the simplicity of this approach, we see it works incredibly well in practice.
arXiv Detail & Related papers (2021-09-20T17:27:03Z)
- Learning to Generate Scene Graph from Natural Language Supervision [52.18175340725455]
We propose one of the first methods that learn from image-sentence pairs to extract a graphical representation of localized objects and their relationships within an image, known as scene graph.
We leverage an off-the-shelf object detector to identify and localize object instances, match labels of detected regions to concepts parsed from captions, and thus create "pseudo" labels for learning scene graph.
arXiv Detail & Related papers (2021-09-06T03:38:52Z)
- Provably Secure Generative Linguistic Steganography [29.919406917681282]
We present a novel provably secure generative linguistic steganographic method ADG.
ADG embeds secret information by Adaptive Dynamic Grouping of tokens according to their probability given by an off-the-shelf language model.
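The grouping step can be sketched as follows; the greedy balancing heuristic and the fixed number of bits per step are illustrative simplifications, since the actual ADG algorithm adapts its grouping dynamically:

```python
import random

def adg_step(token_probs, bits_per_step, secret_bits, rng):
    """One illustrative ADG-style step: partition the vocabulary into
    2**bits_per_step groups of near-equal probability mass, let the next
    secret bits select a group, then sample a token inside that group."""
    n_groups = 2 ** bits_per_step
    groups = [[] for _ in range(n_groups)]
    masses = [0.0] * n_groups
    # Greedy balancing: largest-probability tokens go to the lightest group.
    for tok, p in sorted(token_probs.items(), key=lambda kv: -kv[1]):
        g = masses.index(min(masses))
        groups[g].append((tok, p))
        masses[g] += p
    g = int(secret_bits[:bits_per_step], 2)      # secret bits choose the group
    toks, ps = zip(*groups[g])
    return rng.choices(toks, weights=ps)[0], secret_bits[bits_per_step:]

probs = {"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1}
# Group 0 holds {"a", "d"}, group 1 holds {"b", "c"}; bit "0" selects group 0.
token, remaining = adg_step(probs, 1, "0", random.Random(0))
```

Because each group has roughly the same total probability, the extractor can recover the group index, and thus the bits, from the emitted token alone.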
arXiv Detail & Related papers (2021-06-03T17:27:10Z)
- Near-imperceptible Neural Linguistic Steganography via Self-Adjusting Arithmetic Coding [88.31226340759892]
We present a new linguistic steganography method which encodes secret messages using self-adjusting arithmetic coding based on a neural language model.
Human evaluations show that 51% of generated cover texts can indeed fool eavesdroppers.
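A stripped-down version of interval-based embedding, ignoring the paper's self-adjusting precision and using exact floats (which only works for short messages), might look like:

```python
def embed_with_intervals(secret_bits, step_probs):
    """Treat the secret bits as a binary fraction x in [0, 1); at each step,
    split [0, 1) proportionally to the token probabilities, emit the token
    whose sub-interval contains x, then rescale x into that sub-interval."""
    x = int(secret_bits, 2) / 2 ** len(secret_bits)
    tokens = []
    for probs in step_probs:                 # probs: list of (token, prob) pairs
        lo = 0.0
        for tok, p in probs:
            if x < lo + p:
                tokens.append(tok)
                x = (x - lo) / p             # zoom into the chosen interval
                break
            lo += p
    return tokens

steps = [[("a", 0.4), ("b", 0.6)], [("c", 0.5), ("d", 0.5)]]
print(embed_with_intervals("1", steps))      # ['b', 'c']
```

Decoding reverses the process: the receiver replays the language model, finds which sub-interval each observed token occupies, and narrows down the binary fraction until the secret bits are recovered.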
arXiv Detail & Related papers (2020-10-01T20:40:23Z)
- Improving Disentangled Text Representation Learning with Information-Theoretic Guidance [99.68851329919858]
The discrete nature of natural language makes disentangling textual representations more challenging.
Inspired by information theory, we propose a novel method that effectively manifests disentangled representations of text.
Experiments on both conditional text generation and text-style transfer demonstrate the high quality of our disentangled representation.
arXiv Detail & Related papers (2020-06-01T03:36:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences.