SepMark: Deep Separable Watermarking for Unified Source Tracing and
Deepfake Detection
- URL: http://arxiv.org/abs/2305.06321v2
- Date: Tue, 1 Aug 2023 12:57:14 GMT
- Title: SepMark: Deep Separable Watermarking for Unified Source Tracing and
Deepfake Detection
- Authors: Xiaoshuai Wu, Xin Liao, Bo Ou
- Abstract summary: Malicious Deepfakes have led to a sharp conflict over distinguishing between genuine and forged faces.
We propose SepMark, which provides a unified framework for source tracing and Deepfake detection.
- Score: 15.54035395750232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Malicious Deepfakes have led to a sharp conflict over distinguishing between
genuine and forged faces. Although many countermeasures have been developed to
detect Deepfakes ex post, passive forensics takes no preventive measures to
protect the pristine face before foreseeable manipulations. To
complete this forensics ecosystem, we thus put forward the proactive solution
dubbed SepMark, which provides a unified framework for source tracing and
Deepfake detection. SepMark originates from encoder-decoder-based deep
watermarking but with two separable decoders. As the first deep separable
watermarking scheme, SepMark brings a new paradigm to the established study
of deep watermarking, where a single encoder embeds one watermark elegantly,
while two decoders can extract the watermark separately at different levels of
robustness. The robust decoder termed Tracer that resists various distortions
may have an overly high level of robustness, allowing the watermark to survive
both before and after Deepfake. The semi-robust one termed Detector is
selectively sensitive to malicious distortions, making the watermark disappear
after Deepfake. Only SepMark, comprising Tracer and Detector, can reliably
trace the trusted source of the marked face and detect whether it has been
altered since being marked; neither of the two alone can achieve this.
Extensive experiments demonstrate the effectiveness of the proposed SepMark on
typical Deepfakes, including face swapping, expression reenactment, and
attribute editing.
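The decision logic implied by the two decoders can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the bit strings, `bit_accuracy` helper, and `threshold` value are all illustrative assumptions.

```python
# Toy sketch of how SepMark's two decoder outputs could combine into a
# unified source-tracing / Deepfake-detection verdict.

def bit_accuracy(a, b):
    """Fraction of matching bits between two equal-length bit strings."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def sepmark_verdict(embedded, tracer_out, detector_out, threshold=0.9):
    # Tracer is robust: its watermark should survive benign edits AND Deepfakes.
    traced = bit_accuracy(embedded, tracer_out) >= threshold
    # Detector is semi-robust: its watermark should vanish after a Deepfake.
    intact = bit_accuracy(embedded, detector_out) >= threshold
    if not traced:
        return "unmarked"              # no trusted source recovered
    if intact:
        return "marked, authentic"     # source traced, no malicious edit
    return "marked, Deepfaked"         # source traced, tampering detected

mark = "10110010"
print(sepmark_verdict(mark, mark, mark))        # marked, authentic
print(sepmark_verdict(mark, mark, "01001101"))  # marked, Deepfaked
```

Neither decoder alone suffices: the Tracer alone cannot tell authentic from forged, and the Detector alone cannot distinguish a Deepfaked marked face from an unmarked one.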
Related papers
- LampMark: Proactive Deepfake Detection via Training-Free Landmark Perceptual Watermarks [7.965986856780787]
This paper introduces a novel training-free landmark perceptual watermark, LampMark for short.
We first analyze the structure-sensitive characteristics of Deepfake manipulations and devise a secure and confidential transformation pipeline.
We present an end-to-end watermarking framework that imperceptibly embeds and extracts watermarks concerning the images to be protected.
arXiv Detail & Related papers (2024-11-26T08:24:56Z)
- An undetectable watermark for generative image models [65.31658824274894]
We present the first undetectable watermarking scheme for generative image models.
In particular, an undetectable watermark does not degrade image quality under any efficiently computable metric.
Our scheme works by selecting the initial latents of a diffusion model using a pseudorandom error-correcting code.
arXiv Detail & Related papers (2024-10-09T18:33:06Z)
- Are Watermarks Bugs for Deepfake Detectors? Rethinking Proactive Forensics [14.596038695008403]
We argue that current watermarking models, originally devised for genuine images, may harm the deployed Deepfake detectors when directly applied to forged images.
We propose AdvMark, on behalf of proactive forensics, to exploit the adversarial vulnerability of passive detectors for good.
arXiv Detail & Related papers (2024-04-27T11:20:49Z)
- Latent Watermark: Inject and Detect Watermarks in Latent Diffusion Space [7.082806239644562]
Existing methods face the dilemma of image quality and watermark robustness.
Watermarks with superior image quality usually have inferior robustness against attacks such as blurring and JPEG compression.
We propose Latent Watermark, which injects and detects watermarks in the latent diffusion space.
arXiv Detail & Related papers (2024-03-30T03:19:50Z)
- Robust Identity Perceptual Watermark Against Deepfake Face Swapping [8.276177968730549]
Deepfake face swapping has caused critical privacy issues with the rapid development of deep generative models.
We propose the first robust identity perceptual watermarking framework that concurrently performs detection and source tracing against Deepfake face swapping.
arXiv Detail & Related papers (2023-11-02T16:04:32Z)
- Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has recently become popular; it lets the model owner watermark the model.
We propose a mini-max formulation to find these watermark-removed models and recover their watermark behavior.
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z)
- An Unforgeable Publicly Verifiable Watermark for Large Language Models [84.2805275589553]
Current watermark detection algorithms require the secret key used in the watermark generation process, making them susceptible to security breaches and counterfeiting during public detection.
We propose an unforgeable publicly verifiable watermark algorithm named UPV that uses two different neural networks for watermark generation and detection, instead of using the same key at both stages.
arXiv Detail & Related papers (2023-07-30T13:43:27Z)
- On the Reliability of Watermarks for Large Language Models [95.87476978352659]
We study the robustness of watermarked text after it is re-written by humans, paraphrased by a non-watermarked LLM, or mixed into a longer hand-written document.
We find that watermarks remain detectable even after human and machine paraphrasing.
We also consider a range of new detection schemes that are sensitive to short spans of watermarked text embedded inside a large document.
arXiv Detail & Related papers (2023-06-07T17:58:48Z)
- Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust [55.91987293510401]
Watermarking the outputs of generative models is a crucial technique for tracing copyright and preventing potential harm from AI-generated content.
We introduce a novel technique called Tree-Ring Watermarking that robustly fingerprints diffusion model outputs.
Our watermark is semantically hidden in the image space and is far more robust than watermarking alternatives that are currently deployed.
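The core Tree-Ring idea, hiding a key pattern in the Fourier spectrum of the diffusion model's initial noise, can be sketched in NumPy. Everything below (ring radii, key values, and keeping the latent complex for an exact round trip) is a simplifying assumption; the paper's actual detection first inverts the diffusion sampler to recover the initial noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
latent = rng.standard_normal((n, n))  # stand-in for the initial diffusion noise

# Build a ring-shaped mask around the center of the shifted spectrum
# (the radii 8 and 12 are hypothetical).
yy, xx = np.mgrid[:n, :n]
r = np.hypot(yy - n / 2, xx - n / 2)
mask = (r > 8) & (r < 12)

# Embed: overwrite the masked Fourier coefficients with a fixed key.
key = rng.standard_normal(mask.sum())
spec = np.fft.fftshift(np.fft.fft2(latent))
spec[mask] = key
marked = np.fft.ifft2(np.fft.ifftshift(spec))  # kept complex for simplicity;
# a real-valued latent would need a Hermitian-symmetric key.

# Detect: transform back and measure distance to the key inside the ring.
spec2 = np.fft.fftshift(np.fft.fft2(marked))
dist = np.abs(spec2[mask] - key).mean()
print(dist < 1e-6)  # True: watermark recovered from the Fourier ring
```

Because the pattern lives in low-to-mid frequencies of the initial noise rather than in pixel values, it tends to survive image-space transformations that defeat conventional pixel-domain watermarks.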
arXiv Detail & Related papers (2023-05-31T17:00:31Z)
- Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal [69.92767260794628]
Previous watermark removal methods require the watermark location from users or train a multi-task network to recover the background indiscriminately.
We propose a novel two-stage framework with a stacked attention-guided ResUNets to simulate the process of detection, removal and refinement.
We extensively evaluate our algorithm over four different datasets under various settings and the experiments show that our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-13T09:05:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.