Removing Interference and Recovering Content Imaginatively for Visible Watermark Removal
- URL: http://arxiv.org/abs/2312.14383v1
- Date: Fri, 22 Dec 2023 02:19:23 GMT
- Title: Removing Interference and Recovering Content Imaginatively for Visible Watermark Removal
- Authors: Yicheng Leng, Chaowei Fang, Gen Li, Yixiang Fang, Guanbin Li
- Abstract summary: This study introduces the Removing Interference and Recovering Content Imaginatively (RIRCI) framework.
RIRCI embodies a two-stage approach: the initial phase centers on discerning and segregating the watermark component, while the subsequent phase focuses on background content restoration.
To achieve meticulous background restoration, our proposed model employs a dual-path network capable of fully exploring the intrinsic background information beneath semi-transparent watermarks.
- Score: 63.576748565274706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visible watermarks, while instrumental in protecting image copyrights,
frequently distort the underlying content, complicating tasks like scene
interpretation and image editing. Visible watermark removal aims to eliminate
the interference of watermarks and restore the background content. However,
existing methods often implement watermark component removal and background
restoration tasks within a singular branch, leading to residual watermarks in
the predictions and ignoring cases where watermarks heavily obscure the
background. To address these limitations, this study introduces the Removing
Interference and Recovering Content Imaginatively (RIRCI) framework. RIRCI
embodies a two-stage approach: the initial phase centers on discerning and
segregating the watermark component, while the subsequent phase focuses on
background content restoration. To achieve meticulous background restoration,
our proposed model employs a dual-path network capable of fully exploring the
intrinsic background information beneath semi-transparent watermarks and
peripheral contextual information from unaffected regions. Moreover, a Global
and Local Context Interaction module is built upon multi-layer perceptrons and
bidirectional feature transformation for comprehensive representation modeling
in the background restoration phase. The efficacy of our approach is
empirically validated across two large-scale datasets, and our findings reveal
a marked enhancement over existing watermark removal techniques.
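
The two-stage, dual-path design described in the abstract can be pictured with a minimal PyTorch sketch. The module names (WatermarkSeparator, DualPathRestorer), channel widths, and the simple concatenation-based fusion below are illustrative assumptions, not the RIRCI implementation; the paper's dual-path network and Global and Local Context Interaction module are considerably richer.

```python
# Minimal PyTorch sketch of a two-stage removal pipeline in the spirit of RIRCI.
# Stage 1 predicts a soft watermark mask and the watermark component; stage 2
# restores the background along two paths: one looking "through" the
# semi-transparent watermark, one borrowing context from unaffected regions.
# Module names, channel sizes, and the fusion step are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True))


class WatermarkSeparator(nn.Module):
    """Stage 1: estimate the watermark mask and the watermark component."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(conv_block(3, ch), conv_block(ch, ch))
        self.mask_head = nn.Conv2d(ch, 1, 1)   # soft watermark mask
        self.wm_head = nn.Conv2d(ch, 3, 1)     # estimated watermark layer

    def forward(self, x):
        f = self.body(x)
        return torch.sigmoid(self.mask_head(f)), self.wm_head(f)


class DualPathRestorer(nn.Module):
    """Stage 2: dual-path background restoration.

    Path A sees the input minus the estimated watermark (intrinsic background
    beneath the watermark); path B sees the masked-out image (contextual
    information from unaffected regions)."""
    def __init__(self, ch=32):
        super().__init__()
        self.path_intrinsic = nn.Sequential(conv_block(4, ch), conv_block(ch, ch))
        self.path_context = nn.Sequential(conv_block(4, ch), conv_block(ch, ch))
        self.fuse = nn.Conv2d(2 * ch, 3, 1)

    def forward(self, x, mask, wm):
        intrinsic_in = torch.cat([x - mask * wm, mask], dim=1)
        context_in = torch.cat([x * (1 - mask), mask], dim=1)
        f = torch.cat([self.path_intrinsic(intrinsic_in),
                       self.path_context(context_in)], dim=1)
        return self.fuse(f)


if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)                   # watermarked image
    mask, wm = WatermarkSeparator()(x)             # stage 1
    background = DualPathRestorer()(x, mask, wm)   # stage 2
    print(background.shape)                        # torch.Size([1, 3, 64, 64])
```

Keeping separation and restoration in distinct stages mirrors the abstract's point that a single-branch design tends to leave watermark residue and struggles when the watermark heavily obscures the background.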
Related papers
- Image Watermarks are Removable Using Controllable Regeneration from Clean Noise [26.09012436917272]
A critical attribute of watermark techniques is their robustness against various manipulations.
We introduce a watermark removal approach capable of effectively nullifying state-of-the-art watermarking techniques.
arXiv Detail & Related papers (2024-10-07T20:04:29Z)
- Perceptive self-supervised learning network for noisy image watermark removal [59.440951785128995]
We propose a perceptive self-supervised learning network for noisy image watermark removal (PSLNet).
The proposed method is highly effective compared with popular convolutional neural networks (CNNs) for noisy image watermark removal.
arXiv Detail & Related papers (2024-03-04T16:59:43Z)
- Decoupling Degradation and Content Processing for Adverse Weather Image Restoration [79.59228846484415]
Adverse weather image restoration strives to recover clear images from those affected by various weather types, such as rain, haze, and snow.
Previous techniques can handle multiple weather types within a single network, but they neglect the crucial distinction between degradation removal and content reconstruction, limiting the quality of the restored images.
This work introduces a novel adverse weather image restoration method, called DDCNet, which decouples the degradation removal and content reconstruction processes at the feature level based on their channel statistics.
arXiv Detail & Related papers (2023-12-08T12:26:38Z)
- Robust Image Watermarking based on Cross-Attention and Invariant Domain Learning [1.6589012298747952]
This paper explores a robust image watermarking methodology by harnessing cross-attention and invariant domain learning.
We design a watermark embedding technique utilizing a multi-head cross-attention mechanism, enabling information exchange between the cover image and the watermark.
We also advocate for learning an invariant domain representation that encapsulates both semantic and noise-invariant information concerning the watermark.
arXiv Detail & Related papers (2023-10-09T04:19:27Z)
- WMFormer++: Nested Transformer for Visible Watermark Removal via Implicit Joint Learning [68.00975867932331]
Existing watermark removal methods mainly rely on UNet with task-specific decoder branches.
We introduce an implicit joint learning paradigm to holistically integrate information from both branches.
The results demonstrate our approach's remarkable superiority, surpassing existing state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2023-08-20T07:56:34Z)
- Visible Watermark Removal via Self-calibrated Localization and Background Refinement [21.632823897244037]
Superimposing visible watermarks on images provides a powerful means of addressing copyright issues.
Modern watermark removal methods perform watermark localization and background restoration simultaneously.
We propose a two-stage multi-task network to address the above issues.
arXiv Detail & Related papers (2021-08-08T06:43:55Z)
- WDNet: Watermark-Decomposition Network for Visible Watermark Removal [61.14614115654322]
The uncertainty in the size, shape, color, and transparency of watermarks poses a major barrier for image-to-image translation techniques.
We incorporate traditional watermarked-image decomposition into a two-stage generator, called the Watermark-Decomposition Network (WDNet).
The decomposition formulation enables WDNet to separate watermarks from the images rather than simply removing them (see the decomposition sketch after this list).
arXiv Detail & Related papers (2020-12-14T15:07:35Z)
- Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal [69.92767260794628]
Previous watermark removal methods either require the watermark location from users or train a multi-task network to recover the background indiscriminately.
We propose a novel two-stage framework with stacked attention-guided ResUNets to simulate the process of detection, removal, and refinement.
We extensively evaluate our algorithm over four different datasets under various settings, and the experiments show that our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-13T09:05:37Z)
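
The decomposition formulation mentioned in the WDNet entry above is commonly written as a per-pixel alpha blend between a watermark layer and the clean background. The NumPy sketch below, referenced from that entry, illustrates the blend and the corresponding background recovery; treating this simple blend as WDNet's exact formulation, and the names J, W, I, and alpha, are assumptions made for illustration.

```python
# Illustrative NumPy sketch of the standard alpha-blend model behind watermark
# decomposition: J = alpha * W + (1 - alpha) * I, where J is the watermarked
# image, W the watermark layer, I the clean background, and alpha the per-pixel
# opacity. Assumed here as the generic model for semi-transparent watermarks,
# not necessarily WDNet's exact formulation.
import numpy as np

rng = np.random.default_rng(0)
I = rng.random((64, 64, 3))          # clean background (unknown at test time)
W = rng.random((64, 64, 3))          # watermark layer
alpha = np.zeros((64, 64, 1))
alpha[16:48, 16:48] = 0.4            # semi-transparent watermark region

# Composition: how the watermarked image is formed.
J = alpha * W + (1.0 - alpha) * I

# Decomposition: with alpha and W estimated, the background inside the
# watermarked region can be recovered by inverting the blend (where alpha < 1).
mask = alpha > 0
I_rec = np.where(mask, (J - alpha * W) / np.clip(1.0 - alpha, 1e-6, None), J)

print(float(np.abs(I_rec - I).max()))  # ~0: exact recovery given true alpha and W
```

With ground-truth alpha and W the inversion is exact; in practice both must be estimated, which is why the removal methods listed above pair decomposition with a learned restoration or refinement stage.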