SSH-Net: A Self-Supervised and Hybrid Network for Noisy Image Watermark Removal
- URL: http://arxiv.org/abs/2505.05088v1
- Date: Thu, 08 May 2025 09:36:49 GMT
- Title: SSH-Net: A Self-Supervised and Hybrid Network for Noisy Image Watermark Removal
- Authors: Wenyang Liu, Jianjun Gao, Kim-Hui Yap
- Abstract summary: SSH-Net is a Self-Supervised and Hybrid Network designed for noisy image watermark removal. The upper network, focused on the simpler task of noise removal, employs a lightweight CNN-based architecture. The lower network, designed to handle the more complex task of simultaneously removing watermarks and noise, incorporates Transformer blocks to model long-range dependencies.
- Score: 5.777950695154725
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visible watermark removal is challenging due to its inherent complexities and the noise carried within images. Existing methods primarily rely on supervised learning approaches that require paired datasets of watermarked and watermark-free images, which are often impractical to obtain in real-world scenarios. To address this challenge, we propose SSH-Net, a Self-Supervised and Hybrid Network specifically designed for noisy image watermark removal. SSH-Net synthesizes reference watermark-free images using the watermark distribution in a self-supervised manner and adopts a dual-network design to address the task. The upper network, focused on the simpler task of noise removal, employs a lightweight CNN-based architecture, while the lower network, designed to handle the more complex task of simultaneously removing watermarks and noise, incorporates Transformer blocks to model long-range dependencies and capture intricate image features. To enhance the model's effectiveness, a shared CNN-based feature encoder is introduced before the dual networks to extract common features that both networks can leverage. Our code will be available at https://github.com/wenyang001/SSH-Net.
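To make the dual-network layout described in the abstract concrete, the following is a minimal PyTorch sketch of a shared CNN encoder feeding a lightweight CNN denoising branch and a Transformer-based watermark-and-noise-removal branch. All layer widths, depths, attention settings, and module names here are illustrative assumptions, not SSH-Net's published architecture; refer to the linked repository for the authors' implementation.

```python
# Minimal sketch of the dual-network layout described in the abstract.
# Layer widths, depths, and Transformer settings are assumptions for
# illustration, not SSH-Net's published configuration.
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    """Shared CNN feature encoder placed before the two branches."""
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.body(x)


class CNNDenoiseBranch(nn.Module):
    """Lightweight CNN branch for the simpler noise-removal task."""
    def __init__(self, feat=64, out_ch=3, depth=4):
        super().__init__()
        layers = []
        for _ in range(depth):
            layers += [nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(inplace=True)]
        layers.append(nn.Conv2d(feat, out_ch, 3, padding=1))
        self.body = nn.Sequential(*layers)

    def forward(self, f):
        return self.body(f)


class TransformerRemovalBranch(nn.Module):
    """Transformer-based branch for joint watermark and noise removal."""
    def __init__(self, feat=64, out_ch=3, heads=4, layers=2):
        super().__init__()
        block = nn.TransformerEncoderLayer(d_model=feat, nhead=heads,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(block, num_layers=layers)
        self.head = nn.Conv2d(feat, out_ch, 3, padding=1)

    def forward(self, f):
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)      # (B, H*W, C) token sequence
        tokens = self.transformer(tokens)          # model long-range dependencies
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.head(f)


class DualBranchRemover(nn.Module):
    """Shared encoder feeding both branches, as described in the abstract."""
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder()
        self.denoise = CNNDenoiseBranch()
        self.remove = TransformerRemovalBranch()

    def forward(self, x):
        f = self.encoder(x)
        return self.denoise(f), self.remove(f)


if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)           # noisy, watermarked input
    denoised, restored = DualBranchRemover()(x)
    print(denoised.shape, restored.shape)    # both (1, 3, 64, 64)
```

Under the self-supervised setting sketched in the abstract, the two outputs would presumably be supervised with references synthesized from the watermark distribution rather than with paired watermark-free images.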
Related papers
- Bridging Knowledge Gap Between Image Inpainting and Large-Area Visible Watermark Removal [57.84348166457113]
We introduce a novel feature adapting framework that leverages the representation capacity of a pre-trained image inpainting model. Our approach bridges the knowledge gap between image inpainting and watermark removal by fusing information about the residual background content beneath watermarks into the inpainting backbone model. To relieve the dependence on high-quality watermark masks, we introduce a new training paradigm that uses coarse watermark masks to guide the inference process.
arXiv Detail & Related papers (2025-04-07T02:37:14Z) - Prior-guided Hierarchical Harmonization Network for Efficient Image Dehazing [50.92820394852817]
We propose a Prior-guided Hierarchical Harmonization Network (PGH$^2$Net) for image dehazing. PGH$^2$Net is built upon a UNet-like architecture with an efficient encoder and decoder, consisting of two module types.
arXiv Detail & Related papers (2025-03-03T03:36:30Z) - A self-supervised CNN for image watermark removal [102.94929746450902]
We propose SWCNN, a self-supervised convolutional neural network (CNN) for image watermark removal.
SWCNN constructs reference watermarked images in a self-supervised way according to the watermark distribution, rather than relying on given paired training samples.
Taking texture information into account, a mixed loss is used to improve the visual quality of image watermark removal.
arXiv Detail & Related papers (2024-03-09T05:59:48Z) - Perceptive self-supervised learning network for noisy image watermark removal [59.440951785128995]
We propose a perceptive self-supervised learning network for noisy image watermark removal (PSLNet).
The proposed method is highly effective in comparison with popular convolutional neural networks (CNNs) for noisy image watermark removal.
arXiv Detail & Related papers (2024-03-04T16:59:43Z) - A Compact Neural Network-based Algorithm for Robust Image Watermarking [30.727227627295548]
We propose a novel digital image watermarking solution with a compact neural network, named the Invertible Watermarking Network (IWN).
Our IWN architecture is based on a single Invertible Neural Network (INN).
In order to enhance the robustness of our watermarking solution, we specifically introduce a simple but effective bit message normalization module.
arXiv Detail & Related papers (2021-12-27T03:20:45Z) - Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z) - Robust Watermarking using Diffusion of Logo into Autoencoder Feature Maps [10.072876983072113]
In this paper, we propose to use an end-to-end network for watermarking.
We use a convolutional neural network (CNN) to control the embedding strength based on the image content.
Different image processing attacks are simulated as network layers to improve the robustness of the model.
arXiv Detail & Related papers (2021-05-24T05:18:33Z) - Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal [69.92767260794628]
Previous watermark removal methods either require the watermark location from users or train a multi-task network to recover the background indiscriminately.
We propose a novel two-stage framework with stacked attention-guided ResUNets to simulate the process of detection, removal, and refinement.
We extensively evaluate our algorithm on four different datasets under various settings, and the experiments show that our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-13T09:05:37Z) - Designing and Training of A Dual CNN for Image Denoising [117.54244339673316]
We propose a Dual denoising Network (DudeNet) to recover a clean image.
DudeNet consists of four modules: a feature extraction block, an enhancement block, a compression block, and a reconstruction block (a minimal sketch of this four-stage layout follows after this entry).
arXiv Detail & Related papers (2020-07-08T08:16:24Z)
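To illustrate the four-module structure listed in the DudeNet entry above, here is a minimal PyTorch sketch of a feature extraction, enhancement, compression, and reconstruction pipeline. Channel counts, layer depths, and the residual (noise-prediction) output are assumptions for illustration, not the paper's actual block designs.

```python
# Minimal sketch of a four-stage denoiser in the spirit of the DudeNet entry:
# feature extraction -> enhancement -> compression -> reconstruction.
# All sizes and the residual-learning output are assumptions, not the paper's design.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, layers=2):
    """A small stack of 3x3 conv + ReLU layers."""
    mods = []
    for i in range(layers):
        mods += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                 nn.ReLU(inplace=True)]
    return nn.Sequential(*mods)


class FourStageDenoiser(nn.Module):
    def __init__(self, in_ch=3, feat=64):
        super().__init__()
        self.extract = conv_block(in_ch, feat)          # feature extraction block
        self.enhance = conv_block(feat, feat)           # enhancement block
        self.compress = nn.Conv2d(feat, feat // 2, 1)   # compression block (1x1 conv)
        self.reconstruct = nn.Conv2d(feat // 2, in_ch, 3, padding=1)

    def forward(self, x):
        f = self.extract(x)
        f = self.enhance(f)
        f = self.compress(f)
        residual = self.reconstruct(f)
        return x - residual   # predict and subtract the noise (assumed residual learning)


if __name__ == "__main__":
    noisy = torch.randn(1, 3, 64, 64)
    print(FourStageDenoiser()(noisy).shape)   # (1, 3, 64, 64)
```

The 1x1 compression convolution here simply reduces the channel count before reconstruction; the actual DudeNet blocks differ in their internal design.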