Fragile Watermarking for Image Certification Using Deep Steganographic Embedding
- URL: http://arxiv.org/abs/2504.13759v1
- Date: Fri, 18 Apr 2025 15:51:56 GMT
- Title: Fragile Watermarking for Image Certification Using Deep Steganographic Embedding
- Authors: Davide Ghiani, Jefferson David Rodriguez Chivata, Stefano Lilliu, Simone Maurizio La Cava, Marco Micheletto, Giulia OrrĂ¹, Federico Lama, Gian Luca Marcialis
- Abstract summary: Facial images must comply with strict standards defined by the International Civil Aviation Organization (ICAO). These images may undergo unintentional degradations or malicious manipulations that can deceive facial recognition systems. We explore fragile watermarking, based on deep steganographic embedding, as a proactive mechanism to certify the authenticity of ICAO-compliant facial images.
- Score: 2.9310813529940645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern identity verification systems increasingly rely on facial images embedded in biometric documents such as electronic passports. To ensure global interoperability and security, these images must comply with strict standards defined by the International Civil Aviation Organization (ICAO), which specify acquisition, quality, and format requirements. However, once issued, these images may undergo unintentional degradations (e.g., compression, resizing) or malicious manipulations (e.g., morphing) that can deceive facial recognition systems. In this study, we explore fragile watermarking, based on deep steganographic embedding, as a proactive mechanism to certify the authenticity of ICAO-compliant facial images. By embedding a hidden image within the official photo at the time of issuance, we establish an integrity marker that becomes sensitive to any post-issuance modification. We assess how a range of image manipulations affects the recovered hidden image and show that degradation artifacts can serve as robust forensic cues. Furthermore, we propose a classification framework that analyzes the revealed content to detect and categorize the type of manipulation applied. Our experiments demonstrate high detection accuracy, including cross-method scenarios with multiple deep steganography-based models. These findings support the viability of fragile watermarking via steganographic embedding as a valuable tool for biometric document integrity verification.
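The workflow described in the abstract (embed a hidden reference image at issuance, attempt to recover it at verification, and treat recovery quality as an integrity signal) can be illustrated with a minimal sketch. This is not the authors' implementation: the `HideNet` and `RevealNet` modules below are hypothetical stand-ins for a trained deep steganography model, and the PSNR threshold is an assumed value, not a figure from the paper.

```python
# Minimal sketch of fragile watermarking via deep steganographic embedding.
# HideNet, RevealNet, and the PSNR threshold are illustrative assumptions.
import torch
import torch.nn as nn


class HideNet(nn.Module):
    """Stand-in encoder: (cover portrait, secret image) -> stego portrait."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, cover, secret):
        return self.net(torch.cat([cover, secret], dim=1))


class RevealNet(nn.Module):
    """Stand-in decoder: stego portrait -> recovered hidden image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, stego):
        return self.net(stego)


def certify(stego_img, secret_ref, reveal_net, psnr_threshold=30.0):
    """Recover the hidden image and flag tampering when recovery quality is low."""
    recovered = reveal_net(stego_img)
    mse = torch.mean((recovered - secret_ref) ** 2).clamp_min(1e-10)
    psnr = 10.0 * torch.log10(1.0 / mse)  # images assumed to lie in [0, 1]
    return psnr.item(), psnr.item() >= psnr_threshold


if __name__ == "__main__":
    hide, reveal = HideNet().eval(), RevealNet().eval()
    cover = torch.rand(1, 3, 256, 256)   # placeholder ICAO-style portrait
    secret = torch.rand(1, 3, 256, 256)  # placeholder hidden reference image
    with torch.no_grad():
        stego = hide(cover, secret)               # issuance: embed the secret
        print(certify(stego, secret, reveal))     # verification: intact image
        tampered = torch.clamp(stego + 0.1 * torch.randn_like(stego), 0, 1)
        print(certify(tampered, secret, reveal))  # verification: manipulated image
```

In the same spirit, the manipulation-type classifier mentioned in the abstract would operate on the recovered hidden image (the output of `reveal_net`), since different manipulations are reported to leave characteristic degradation artifacts in the revealed content.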
Related papers
- Bridging Knowledge Gap Between Image Inpainting and Large-Area Visible Watermark Removal [57.84348166457113]
We introduce a novel feature adapting framework that leverages the representation capacity of a pre-trained image inpainting model. Our approach bridges the knowledge gap between image inpainting and watermark removal by fusing information about the residual background content beneath watermarks into the inpainting backbone model. To relieve the dependence on high-quality watermark masks, we introduce a new training paradigm that uses coarse watermark masks to guide the inference process.
arXiv Detail & Related papers (2025-04-07T02:37:14Z) - LampMark: Proactive Deepfake Detection via Training-Free Landmark Perceptual Watermarks [7.965986856780787]
This paper introduces a novel training-free landmark perceptual watermark, LampMark for short.
We first analyze the structure-sensitive characteristics of Deepfake manipulations and devise a secure and confidential transformation pipeline.
We present an end-to-end watermarking framework that imperceptibly embeds and extracts watermarks for the images to be protected.
arXiv Detail & Related papers (2024-11-26T08:24:56Z) - Social Media Authentication and Combating Deepfakes using Semi-fragile Invisible Image Watermarking [6.246098300155482]
We propose a semi-fragile image watermarking technique that embeds an invisible secret message into real images for media authentication.
Our proposed framework is designed to be fragile to facial manipulations or tampering while being robust to benign image-processing operations and watermark removal attacks.
arXiv Detail & Related papers (2024-10-02T18:05:03Z) - Semantic Contextualization of Face Forgery: A New Definition, Dataset, and Detection Method [77.65459419417533]
We put face forgery in a semantic context and define computational methods that alter semantic face attributes beyond human discrimination thresholds as sources of face forgery. We propose a semantics-oriented face forgery detection method that captures label relations and prioritizes the primary task (i.e., real or fake face detection). We show that the proposed dataset exposes the weaknesses of current detectors when used as the test set and consistently improves their generalizability when used as the training set.
arXiv Detail & Related papers (2024-05-14T10:24:19Z) - DeepFidelity: Perceptual Forgery Fidelity Assessment for Deepfake Detection [67.3143177137102]
Deepfake detection refers to detecting artificially generated or edited faces in images or videos.
We propose a novel Deepfake detection framework named DeepFidelity to adaptively distinguish real and fake faces.
arXiv Detail & Related papers (2023-12-07T07:19:45Z) - Robust Identity Perceptual Watermark Against Deepfake Face Swapping [8.276177968730549]
Deepfake face swapping has caused critical privacy issues with the rapid development of deep generative models.
We propose the first robust identity perceptual watermarking framework that concurrently performs detection and source tracing against Deepfake face swapping.
arXiv Detail & Related papers (2023-11-02T16:04:32Z) - Robustness of AI-Image Detectors: Fundamental Limits and Practical Attacks [47.04650443491879]
We analyze the robustness of various AI-image detectors including watermarking and deepfake detectors.
We show that watermarking methods are vulnerable to spoofing attacks where the attacker aims to have real images identified as watermarked ones.
arXiv Detail & Related papers (2023-09-29T18:30:29Z) - T2IW: Joint Text to Image & Watermark Generation [74.20148555503127]
We introduce a novel task for the joint generation of text to image and watermark (T2IW).
This T2IW scheme ensures minimal damage to image quality when generating a compound image by forcing the semantic feature and the watermark signal to be compatible in pixels.
We demonstrate remarkable achievements in image quality, watermark invisibility, and watermark robustness, supported by our proposed set of evaluation metrics.
arXiv Detail & Related papers (2023-09-07T16:12:06Z) - Building an Invisible Shield for Your Portrait against Deepfakes [34.65356811439098]
We propose a novel framework, Integrity Encryptor, which aims to protect portraits through a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
arXiv Detail & Related papers (2023-05-22T10:01:28Z) - FaceSigns: Semi-Fragile Neural Watermarks for Media Authentication and Countering Deepfakes [25.277040616599336]
Deepfakes and manipulated media are becoming a prominent threat due to the recent advances in realistic image and video synthesis techniques.
We introduce a deep learning based semi-fragile watermarking technique that allows media authentication by verifying an invisible secret message embedded in the image pixels.
arXiv Detail & Related papers (2022-04-05T03:29:30Z) - A Comparative Study of Fingerprint Image-Quality Estimation Methods [54.84936551037727]
Poor-quality images result in spurious and missing features, thus degrading the performance of the overall system.
In this work, we review existing approaches for fingerprint image-quality estimation.
We have also tested a selection of fingerprint image-quality estimation algorithms.
arXiv Detail & Related papers (2021-11-14T19:53:12Z)