README: Robust Error-Aware Digital Signature Framework via Deep Watermarking Model
- URL: http://arxiv.org/abs/2507.04495v1
- Date: Sun, 06 Jul 2025 18:15:53 GMT
- Title: README: Robust Error-Aware Digital Signature Framework via Deep Watermarking Model
- Authors: Hyunwook Choi, Sangyun Won, Daeyeon Hwang, Junhyeok Choi
- Abstract summary: We propose a novel framework that enables robust, verifiable, and error-tolerant digital signatures within images. The proposed framework unlocks a new class of high-assurance applications for deep watermarking.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning-based watermarking has emerged as a promising solution for robust image authentication and protection. However, existing models are limited by low embedding capacity and vulnerability to bit-level errors, making them unsuitable for cryptographic applications such as digital signatures, which require over 2048 bits of error-free data. In this paper, we propose README (Robust Error-Aware Digital Signature via Deep WaterMarking ModEl), a novel framework that enables robust, verifiable, and error-tolerant digital signatures within images. Our method combines a simple yet effective cropping-based capacity scaling mechanism with ERPA (ERror PAinting Module), a lightweight error correction module designed to localize and correct bit errors using Distinct Circular Subsum Sequences (DCSS). Without requiring any fine-tuning of existing pretrained watermarking models, README significantly boosts the zero-bit-error image rate (Z.B.I.R) from 1.2% to 86.3% when embedding 2048-bit digital signatures into a single image, even under real-world distortions. Moreover, our use of perceptual hash-based signature verification ensures public verifiability and robustness against tampering. The proposed framework unlocks a new class of high-assurance applications for deep watermarking, bridging the gap between signal-level watermarking and cryptographic security.
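The abstract's sign-embed-verify pipeline can be illustrated with a minimal, self-contained sketch. All names here are assumptions for illustration: a plain SHA-256 over pixel bytes stands in for the paper's distortion-robust perceptual hash, an HMAC stands in for the 2048-bit public-key signature, and the cropping-based capacity scaling is reduced to splitting the payload bits across fixed-capacity image regions. The actual README framework, ERPA error correction, and DCSS are not reproduced here.

```python
import hashlib
import hmac

# Illustration only: README uses public-key signatures and a perceptual hash;
# we substitute an HMAC and SHA-256 to keep the sketch stdlib-only.
SECRET_KEY = b"demo-key"

def perceptual_hash(pixels: bytes) -> bytes:
    # Stand-in for a distortion-robust perceptual hash.
    return hashlib.sha256(pixels).digest()

def sign(pixels: bytes) -> bytes:
    # Sign the (perceptual) hash of the image, not the raw pixels.
    return hmac.new(SECRET_KEY, perceptual_hash(pixels), hashlib.sha256).digest()

def to_bits(data: bytes) -> list[int]:
    return [(byte >> i) & 1 for byte in data for i in range(8)]

def embed(bits: list[int], n_crops: int) -> list[list[int]]:
    # Cropping-based capacity scaling: split the payload across n_crops
    # regions, each carrying one fixed-capacity watermark chunk.
    chunk = len(bits) // n_crops
    return [bits[i * chunk:(i + 1) * chunk] for i in range(n_crops)]

def verify(pixels: bytes, recovered_bits: list[int]) -> bool:
    # Recompute the expected signature bits and compare bit-for-bit
    # (error correction would run before this step in the real framework).
    return recovered_bits == to_bits(sign(pixels))

image = b"\x00" * 64                 # toy stand-in for image pixels
payload = to_bits(sign(image))       # 256 signature bits in this sketch
chunks = embed(payload, n_crops=4)   # distribute bits over 4 crops
recovered = [b for c in chunks for b in c]
print(verify(image, recovered))      # → True
```

The key design point the sketch mirrors is that verification binds the signature to a hash of the image content, so benign re-encodings change the perceptual hash little (in the real scheme) while tampering breaks verification.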
Related papers
- Hot-Swap MarkBoard: An Efficient Black-box Watermarking Approach for Large-scale Model Distribution [14.60627694687767]
We propose Hot-Swap MarkBoard, an efficient watermarking method. It encodes user-specific $n$-bit binary signatures by independently embedding multiple watermarks. The method supports black-box verification and is compatible with various model architectures.
arXiv Detail & Related papers (2025-07-28T09:14:21Z) - TAG-WM: Tamper-Aware Generative Image Watermarking via Diffusion Inversion Sensitivity [68.95168727940973]
This paper proposes a Tamper-Aware Generative image WaterMarking method named TAG-WM.
arXiv Detail & Related papers (2025-06-30T03:14:07Z) - The NeRF Signature: Codebook-Aided Watermarking for Neural Radiance Fields [77.76790894639036]
We propose NeRF Signature, a novel watermarking method for NeRF. We employ a Codebook-aided Signature Embedding (CSE) that does not alter the model structure. We also introduce a joint pose-patch encryption watermarking strategy to hide signatures into patches.
arXiv Detail & Related papers (2025-02-26T13:27:49Z) - RoboSignature: Robust Signature and Watermarking on Network Attacks [0.5461938536945723]
We present a novel adversarial fine-tuning attack that disrupts the model's ability to embed the intended watermark. Our findings emphasize the importance of anticipating and defending against potential vulnerabilities in generative systems.
arXiv Detail & Related papers (2024-12-22T04:36:27Z) - CLUE-MARK: Watermarking Diffusion Models using CLWE [13.010337595004708]
We introduce CLUE-Mark, the first provably undetectable watermarking scheme for diffusion models. CLUE-Mark requires no changes to the model being watermarked, is computationally efficient, and is guaranteed to have no impact on model output quality. Uniquely, CLUE-Mark cannot be detected nor removed by recent steganographic attacks.
arXiv Detail & Related papers (2024-11-18T10:03:01Z) - AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA [67.68750063537482]
Diffusion models have achieved remarkable success in generating high-quality images.
Recent works aim to let SD models output watermarked content for post-hoc forensics.
We propose AquaLoRA as the first implementation under this scenario.
arXiv Detail & Related papers (2024-05-18T01:25:47Z) - Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs [23.639074918667625]
We propose a novel multi-bit box-free watermarking method for GANs with improved robustness against white-box attacks.
The watermark is embedded by adding an extra watermarking loss term during GAN training.
We show that the presence of the watermark has a negligible impact on the quality of the generated images.
arXiv Detail & Related papers (2023-10-25T18:38:10Z) - FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models [64.89896692649589]
We propose FT-Shield, a watermarking solution tailored for the fine-tuning of text-to-image diffusion models.
FT-Shield addresses copyright protection challenges by designing new watermark generation and detection strategies.
arXiv Detail & Related papers (2023-10-03T19:50:08Z) - Don't Forget to Sign the Gradients! [60.98885980669777]
We present GradSigns, a novel watermarking framework for deep neural networks (DNNs).
arXiv Detail & Related papers (2021-03-05T14:24:32Z) - Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.