Artwork Protection Against Neural Style Transfer Using Locally Adaptive Adversarial Color Attack
- URL: http://arxiv.org/abs/2401.09673v3
- Date: Fri, 5 Jul 2024 14:51:55 GMT
- Title: Artwork Protection Against Neural Style Transfer Using Locally Adaptive Adversarial Color Attack
- Authors: Zhongliang Guo, Junhao Dong, Yifei Qian, Kaixuan Wang, Weiye Li, Ziheng Guo, Yuheng Wang, Yanli Li, Ognjen Arandjelović, Lei Fang
- Abstract summary: Neural style transfer (NST) generates new images by combining the style of one image with the content of another.
We propose Locally Adaptive Adversarial Color Attack (LAACA), empowering artists to protect their artwork from unauthorized style transfer.
- Score: 9.072011414658512
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural style transfer (NST) generates new images by combining the style of one image with the content of another. However, unauthorized NST can exploit artwork, raising concerns about artists' rights and motivating the development of proactive protection methods. We propose the Locally Adaptive Adversarial Color Attack (LAACA), which empowers artists to protect their artwork from unauthorized style transfer by processing it before public release. Drawing on the intricacies of human visual perception and the role of different frequency components, our method strategically introduces frequency-adaptive perturbations into the image. These perturbations significantly degrade the quality of NST outputs while keeping the visual change to the original image acceptable, so potential infringers are discouraged from using the protected artworks because the resulting style transfers are of poor quality. Additionally, existing metrics often overlook the importance of color fidelity when evaluating color-mattered tasks, such as the quality of NST-generated images, which is crucial in the context of artistic works. To assess such color-mattered tasks comprehensively, we propose the Adversarial Color Distance Metric (ACDM), designed to quantify the color difference between images before and after manipulation. Experimental results confirm that attacking NST with LAACA yields visually inferior style transfer, and that ACDM can efficiently measure color-mattered tasks. By providing artists with a tool to safeguard their intellectual property, our work alleviates the socio-technical challenges posed by the misuse of NST in the art community.
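The abstract names two technical ideas: frequency-adaptive perturbations that hide the protective signal where human vision is least sensitive, and a color-distance metric (ACDM) that measures color change between an image pair. The sketch below is a minimal illustration of those two ideas only, not the paper's actual LAACA optimization or ACDM formula; the function names, the FFT-based high-pass masking, the `eps`/`cutoff` values, and the mean CIELAB distance are all illustrative assumptions.

```python
import numpy as np
from skimage import color  # used only for the Lab conversion in the metric sketch


def high_frequency_perturbation(image, eps=8.0 / 255.0, cutoff=0.25, seed=0):
    """Toy frequency-adaptive perturbation (not the paper's LAACA attack).

    Confines a random perturbation to high spatial frequencies of each channel,
    illustrating the idea of placing changes where human vision is less sensitive.
    `eps` and `cutoff` are illustrative values, not taken from the paper.
    """
    rng = np.random.default_rng(seed)
    h, w, c = image.shape

    # Radial mask that keeps only frequencies beyond `cutoff` of the normalized radius.
    yy, xx = np.mgrid[:h, :w]
    cy, cx = h / 2.0, w / 2.0
    radius = np.sqrt(((yy - cy) / (h / 2.0)) ** 2 + ((xx - cx) / (w / 2.0)) ** 2)
    hf_mask = (radius > cutoff).astype(float)

    noise = rng.uniform(-1.0, 1.0, size=(h, w, c))
    perturbed = image.copy()
    for ch in range(c):
        # Filter the noise so only its high-frequency content survives.
        spec = np.fft.fftshift(np.fft.fft2(noise[..., ch]))
        hf_noise = np.real(np.fft.ifft2(np.fft.ifftshift(spec * hf_mask)))
        hf_noise = eps * hf_noise / (np.abs(hf_noise).max() + 1e-8)
        perturbed[..., ch] = np.clip(image[..., ch] + hf_noise, 0.0, 1.0)
    return perturbed


def mean_lab_color_distance(img_a, img_b):
    """Simplified color-difference score in CIELAB (a stand-in, not ACDM).

    Averages the per-pixel Euclidean distance between the Lab encodings of two
    RGB images in [0, 1]; larger values indicate a bigger perceived color shift.
    """
    lab_a = color.rgb2lab(img_a)
    lab_b = color.rgb2lab(img_b)
    return float(np.mean(np.linalg.norm(lab_a - lab_b, axis=-1)))


if __name__ == "__main__":
    artwork = np.random.rand(256, 256, 3)  # placeholder for a real artwork
    protected = high_frequency_perturbation(artwork)
    print("mean Lab distance:", mean_lab_color_distance(artwork, protected))
```

In the paper the perturbation is an adversarial signal optimized against NST models rather than filtered random noise, and ACDM has its own definition for color-mattered evaluation; this sketch only mirrors the frequency-domain and color-space intuition described in the abstract.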
Related papers
- SITA: Structurally Imperceptible and Transferable Adversarial Attacks for Stylized Image Generation [34.228338508482494]
Current methods aimed at safeguarding artworks often employ adversarial attacks.
We propose Structurally Imperceptible and Transferable Adversarial (SITA) attacks.
SITA significantly outperforms existing methods in terms of transferability, computational efficiency, and noise imperceptibility.
arXiv Detail & Related papers (2025-03-25T15:55:25Z)
- Free-Lunch Color-Texture Disentanglement for Stylized Image Generation [58.406368812760256]
This paper introduces the first tuning-free approach to achieve free-lunch color-texture disentanglement in stylized T2I generation.
We develop techniques for separating and extracting Color-Texture Embeddings (CTE) from individual color and texture reference images.
To ensure that the color palette of the generated image aligns closely with the color reference, we apply a whitening and coloring transformation.
arXiv Detail & Related papers (2025-03-18T14:10:43Z)
- AdvPaint: Protecting Images from Inpainting Manipulation via Adversarial Attention Disruption [25.06674328160838]
Malicious adversaries exploit diffusion models for inpainting tasks, for example replacing a specific region of an image with a celebrity's likeness.
We propose ADVPAINT, a novel framework that generates adversarial perturbations that effectively disrupt the adversary's inpainting tasks.
Our experimental results demonstrate that ADVPAINT's perturbations are highly effective in disrupting the adversary's inpainting tasks, outperforming existing methods.
arXiv Detail & Related papers (2025-03-13T06:05:40Z)
- Diffusing Colors: Image Colorization with Text Guided Diffusion [11.727899027933466]
We present a novel image colorization framework that utilizes image diffusion techniques with granular text prompts.
Our method provides a balance between automation and control, outperforming existing techniques in terms of visual quality and semantic coherence.
Our approach holds potential particularly for color enhancement and historical image colorization.
arXiv Detail & Related papers (2023-12-07T08:59:20Z)
- IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic the style of an artist or maliciously edit the original images for fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z)
- Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
arXiv Detail & Related papers (2023-05-18T02:57:43Z)
- Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks [68.48271396073156]
We propose a novel Natural Color Fool (NCF) to boost transferability of adversarial examples without damaging image quality.
Results show that our NCF can outperform state-of-the-art approaches by 15.0% to 32.9% for fooling normally trained models and 10.0% to 25.3% for evading defense methods.
arXiv Detail & Related papers (2022-10-05T06:24:16Z)
- CCPL: Contrastive Coherence Preserving Loss for Versatile Style Transfer [58.020470877242865]
We devise a universally versatile style transfer method capable of performing artistic, photo-realistic, and video style transfer jointly.
We make a mild and reasonable assumption that global inconsistency is dominated by local inconsistencies and devise a generic Contrastive Coherence Preserving Loss (CCPL) applied to local patches.
CCPL can preserve the coherence of the content source during style transfer without degrading stylization.
arXiv Detail & Related papers (2022-07-11T12:09:41Z)
- Interactive Style Transfer: All is Your Palette [74.06681967115594]
We propose a drawing-like interactive style transfer (IST) method, by which users can interactively create a harmonious-style image.
Our IST method can serve as a brush, dip style from anywhere, and then paint to any region of the target content image.
arXiv Detail & Related papers (2022-03-25T06:38:46Z)
- Deep Saliency Prior for Reducing Visual Distraction [12.28561668097479]
We produce a range of powerful editing effects for reducing distraction in images.
The resulting effects are consistent with cognitive research on the human visual system.
We present results on a variety of natural images and conduct a perceptual study to evaluate and validate the changes in viewers' eye-gaze between the original images and our edited results.
arXiv Detail & Related papers (2021-09-05T03:19:21Z)
- ReGO: Reference-Guided Outpainting for Scenery Image [82.21559299694555]
Generative adversarial learning has advanced image outpainting by producing semantically consistent content for the given image.
This work investigates a principled way to synthesize texture-rich results by borrowing pixels from neighboring reference images.
To prevent the style of the generated part from being affected by the reference images, a style ranking loss is proposed to augment the ReGO to synthesize style-consistent results.
arXiv Detail & Related papers (2021-06-20T02:34:55Z) - Essential Features: Reducing the Attack Surface of Adversarial
Perturbations with Robust Content-Aware Image Preprocessing [5.831840281853604]
Adversaries can fool machine learning models into making incorrect predictions by adding perturbations to an image.
One approach to defending against such perturbations is to apply image preprocessing functions to remove the effects of the perturbation.
We propose a novel image preprocessing technique called Essential Features that transforms the image into a robust feature space.
arXiv Detail & Related papers (2020-12-03T04:40:51Z) - Adversarial Image Color Transformations in Explicit Color Filter Space [5.682107851677069]
Adversarial Color Filter (AdvCF) is a novel color transformation attack that is optimized with gradient information in the parameter space of a simple color filter.
We show that AdvCF is superior to the state-of-the-art human-interpretable color transformation attack in both image acceptability and efficiency.
arXiv Detail & Related papers (2020-11-12T23:51:37Z) - Towards Feature Space Adversarial Attack [18.874224858723494]
We propose a new adversarial attack on deep neural networks for image classification.
Our attack focuses on perturbing abstract features, more specifically, features that denote styles.
We show that our attack can generate adversarial samples that are more natural-looking than the state-of-the-art attacks.
arXiv Detail & Related papers (2020-04-26T13:56:31Z)