FontGuard: A Robust Font Watermarking Approach Leveraging Deep Font Knowledge
- URL: http://arxiv.org/abs/2504.03128v1
- Date: Fri, 04 Apr 2025 02:39:33 GMT
- Title: FontGuard: A Robust Font Watermarking Approach Leveraging Deep Font Knowledge
- Authors: Kahim Wong, Jicheng Zhou, Kemou Li, Yain-Whar Si, Xiaowei Wu, Jiantao Zhou
- Abstract summary: We introduce FontGuard, a novel font watermarking model that harnesses the capabilities of font models and language-guided contrastive learning. FontGuard modifies fonts by altering hidden style features, resulting in better font quality upon watermark embedding. In the decoder, we employ image-text contrastive learning to reconstruct the embedded bits, achieving desirable robustness against various real-world transmission distortions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of AI-generated content raises significant forensic and security concerns, such as source tracing and copyright protection, highlighting the need for effective watermarking technologies. Font-based text watermarking has emerged as an effective solution for embedding information that can ensure the copyright, traceability, and compliance of generated text content. Existing font watermarking methods usually neglect essential font knowledge, which leads to watermarked fonts of low quality and limited embedding capacity. These methods are also vulnerable to real-world distortions, low-resolution fonts, and inaccurate character segmentation. In this paper, we introduce FontGuard, a novel font watermarking model that harnesses the capabilities of font models and language-guided contrastive learning. Unlike previous methods that focus solely on pixel-level alterations, FontGuard modifies fonts by altering hidden style features, resulting in better font quality upon watermark embedding. We also leverage the font manifold to increase the embedding capacity of our proposed method by generating substantial font variants closely resembling the original font. Furthermore, in the decoder, we employ image-text contrastive learning to reconstruct the embedded bits, which achieves desirable robustness against various real-world transmission distortions. FontGuard outperforms state-of-the-art methods by +5.4%, +7.4%, and +5.8% in decoding accuracy under synthetic, cross-media, and online social network distortions, respectively, while improving visual quality by 52.7% in terms of LPIPS. Moreover, FontGuard uniquely allows the generation of watermarked fonts for unseen fonts without re-training the network. The code and dataset are available at https://github.com/KAHIMWONG/FontGuard.
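To make the embedding idea concrete, below is a minimal PyTorch sketch of writing bits by shifting a glyph's hidden style code along directions of a font manifold instead of editing pixels. Everything here (TinyStyleEncoder, the orthonormal directions, the reference-based decoder) is an illustrative assumption rather than FontGuard's actual architecture; in particular, the real decoder is trained with image-text contrastive learning and recovers bits from rendered, distorted glyphs without any reference latent.

```python
import torch

class TinyStyleEncoder(torch.nn.Module):
    """Toy stand-in for a font-model style encoder (32x32 glyph -> latent)."""
    def __init__(self, dim=16):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Flatten(),
                                       torch.nn.Linear(32 * 32, dim))

    def forward(self, glyph):
        return self.net(glyph)

def embed_bits(style, bits, directions, strength=0.1):
    """Write each bit by shifting the hidden style code along one direction."""
    for d, b in zip(directions, bits):
        style = style + strength * (1.0 if b else -1.0) * d
    return style

def decode_bits(marked, reference, directions):
    """Toy decoder: read each bit from the sign of the latent shift.

    Only illustrates the embedding geometry; it is not how FontGuard decodes.
    """
    delta = (marked - reference).squeeze()
    return [int(torch.dot(delta, d) > 0) for d in directions]

enc = TinyStyleEncoder()
glyph = torch.rand(1, 1, 32, 32)                      # a fake glyph image
directions = torch.linalg.qr(torch.randn(16, 4)).Q.T  # 4 orthonormal directions
style = enc(glyph)
marked = embed_bits(style, [1, 0, 1, 1], directions)
print(decode_bits(marked, style, directions))         # -> [1, 0, 1, 1]
```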
Related papers
- Improving the Generation Quality of Watermarked Large Language Models via Word Importance Scoring
Token-level watermarking inserts watermarks into generated text by altering the token probability distributions.
This alteration of the logits during generation can degrade text quality.
We propose Watermarking with Importance Scoring (WIS) to improve the quality of text generated by a watermarked language model.
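As a rough, hedged illustration of the idea (not the paper's algorithm), the Python sketch below biases a pseudo-random "green list" of token logits, in the style of common token-level watermarks, but leaves a position untouched when a toy entropy proxy marks it as important; WIS derives its importance scores from the model rather than from entropy.

```python
import hashlib
import numpy as np

def green_mask(prev_token: int, vocab: int, frac: float = 0.5) -> np.ndarray:
    """Pseudo-randomly split the vocabulary, seeded by the previous token."""
    seed = int(hashlib.sha256(str(prev_token).encode()).hexdigest(), 16) % 2**32
    return np.random.default_rng(seed).random(vocab) < frac

def entropy(logits: np.ndarray) -> float:
    """Entropy of the softmax distribution, in nats."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def watermark_logits(logits, prev_token, delta=2.0, min_entropy=1.0):
    # Toy importance proxy: a peaked (low-entropy) distribution means the
    # next token is heavily constrained, so treat it as important and skip.
    if entropy(logits) < min_entropy:
        return logits  # important position: keep the original distribution
    out = logits.copy()
    out[green_mask(prev_token, logits.size)] += delta  # favor the green list
    return out

marked = watermark_logits(np.random.randn(1000), prev_token=42)
```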
arXiv Detail & Related papers (2023-11-16T08:36:00Z)
- FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models
We propose FT-Shield, a watermarking solution tailored for the fine-tuning of text-to-image diffusion models.
FT-Shield addresses copyright protection challenges by designing new watermark generation and detection strategies.
arXiv Detail & Related papers (2023-10-03T19:50:08Z)
- T2IW: Joint Text to Image & Watermark Generation
We introduce a novel task for the joint generation of text-to-image and watermark (T2IW).
This T2IW scheme ensures minimal damage to image quality when generating a compound image by forcing the semantic features and the watermark signal to be compatible at the pixel level.
We demonstrate remarkable achievements in image quality, watermark invisibility, and watermark robustness, supported by our proposed set of evaluation metrics.
arXiv Detail & Related papers (2023-09-07T16:12:06Z)
- VQ-Font: Few-Shot Font Generation with Structure-Aware Enhancement and Quantization
We propose a VQGAN-based framework (i.e., VQ-Font) to enhance glyph fidelity through token prior refinement and structure-aware enhancement.
Specifically, we pre-train a VQGAN to encapsulate a font token prior within a codebook. Subsequently, VQ-Font refines the synthesized glyphs with the codebook to eliminate the domain gap between synthesized and real-world strokes.
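The codebook lookup at the core of this refinement can be sketched in a few lines of PyTorch; the shapes and sizes below are illustrative assumptions, not VQ-Font's actual configuration.

```python
import torch

def quantize(features: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Replace each feature vector with its nearest codebook entry.

    features: (N, D) continuous glyph features
    codebook: (K, D) learned font-token prior
    """
    dists = torch.cdist(features, codebook)  # (N, K) pairwise distances
    idx = dists.argmin(dim=1)                # nearest token per feature
    return codebook[idx]

codebook = torch.randn(512, 64)   # K=512 tokens, D=64 dims (made up)
synth = torch.randn(16, 64)       # features of a synthesized glyph
refined = quantize(synth, codebook)  # snapped to the learned font prior
```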
arXiv Detail & Related papers (2023-08-27T06:32:20Z)
- Robust Multi-bit Natural Language Watermarking through Invariant Features
Original natural language content is susceptible to illegal piracy and potential misuse.
To effectively combat piracy and protect copyrights, a multi-bit watermarking framework should be able to embed adequate bits of information.
In this work, we explore ways to advance both payload and robustness by following a well-known proposition from image watermarking.
arXiv Detail & Related papers (2023-05-03T05:37:30Z)
- CF-Font: Content Fusion for Few-shot Font Generation
We propose a content fusion module (CFM) to project the content feature into a linear space defined by the content features of basis fonts.
Our method also allows optimizing the style representation vector of reference images.
We have evaluated our method on a dataset of 300 fonts with 6.5k characters each.
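A minimal sketch of that projection step is shown below; the closed-form least-squares solve is an illustrative stand-in for however CF-Font actually computes its fusion weights.

```python
import torch

def content_fusion(content: torch.Tensor, basis: torch.Tensor) -> torch.Tensor:
    """Project a content feature onto the span of basis-font features.

    content: (D,) target character's content feature
    basis:   (B, D) the same character's features from B basis fonts
    """
    # Solve min_w ||basis.T @ w - content||, then re-project into the span.
    w = torch.linalg.lstsq(basis.T, content.unsqueeze(1)).solution  # (B, 1)
    return (basis.T @ w).squeeze(1)  # fused feature inside the basis span

basis = torch.randn(6, 128)   # content features of 6 basis fonts (made up)
target = torch.randn(128)
fused = content_fusion(target, basis)
```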
arXiv Detail & Related papers (2023-03-24T14:18:40Z)
- Diff-Font: Diffusion Model for Robust One-Shot Font Generation
We propose a novel one-shot font generation method based on a diffusion model, named Diff-Font.
The proposed model aims to generate the entire font library given only one sample as the reference.
The well-trained Diff-Font is not only robust to font gaps and variations but also achieves promising performance on difficult character generation.
arXiv Detail & Related papers (2022-12-12T13:51:50Z)
- Font Representation Learning via Paired-glyph Matching
We propose a novel font representation learning scheme to embed font styles into the latent space.
To learn representations that discriminate one font from others, we propose a paired-glyph matching-based font representation learning model.
We show that our font representation learning scheme achieves better generalization performance than existing font representation learning techniques.
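One standard way to realize paired-glyph matching is an InfoNCE-style contrastive loss over a batch of glyph pairs, as in the sketch below; the embedding dimension, batch size, and temperature are assumptions, and the paper's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def paired_glyph_loss(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1):
    """z_a, z_b: (N, D) embeddings of two glyph sets; row i shares a font.

    Embeddings of two glyphs from the same font are pulled together, while
    glyphs from the other fonts in the batch serve as negatives.
    """
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.T / tau          # (N, N) cosine similarity matrix
    target = torch.arange(z_a.size(0))  # matching font lies on the diagonal
    return F.cross_entropy(logits, target)

loss = paired_glyph_loss(torch.randn(8, 32), torch.randn(8, 32))
```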
arXiv Detail & Related papers (2022-11-20T12:27:27Z)
- FONTNET: On-Device Font Understanding and Prediction Pipeline
We propose two engines: a Font Detection Engine and a Font Prediction Engine.
First, we develop a novel CNN architecture for identifying the font style of text in images.
Second, we design a novel algorithm for predicting similar fonts for a given query font.
Third, we optimize and deploy the entire engine on-device, which ensures privacy and improves latency in real-time applications such as instant messaging.
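The prediction step can be approximated as nearest-neighbor search over font embeddings, sketched below with random vectors in place of CNN features; the function and shapes are illustrative, not FONTNET's actual pipeline.

```python
import numpy as np

def similar_fonts(query: np.ndarray, library: np.ndarray, k: int = 3):
    """Rank known fonts by cosine similarity to a query font embedding.

    query:   (D,) embedding of the query font
    library: (N, D) embeddings of N known fonts
    """
    q = query / np.linalg.norm(query)
    lib = library / np.linalg.norm(library, axis=1, keepdims=True)
    sims = lib @ q
    return np.argsort(-sims)[:k]  # indices of the k most similar fonts

library = np.random.rand(100, 64)  # pretend CNN features for 100 fonts
print(similar_fonts(np.random.rand(64), library))
```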
arXiv Detail & Related papers (2021-03-30T08:11:24Z)
- Few-shot Compositional Font Generation with Dual Memory
We propose a novel font generation framework named Dual Memory-augmented Font Generation Network (DM-Font).
We employ memory components and global-context awareness in the generator to take advantage of the compositionality.
In experiments on Korean-handwriting fonts and Thai-printing fonts, we observe that our method generates samples of significantly better quality with faithful stylization.
arXiv Detail & Related papers (2020-05-21T08:13:40Z)
- Attribute2Font: Creating Fonts You Want From Attributes
Attribute2Font is trained to perform font style transfer between any two fonts conditioned on their attribute values.
A novel unit named the Attribute Attention Module is designed to make the generated glyph images better embody the prominent font attributes.
arXiv Detail & Related papers (2020-05-16T04:06:53Z)