Wukong Framework for Not Safe For Work Detection in Text-to-Image systems
- URL: http://arxiv.org/abs/2508.00591v1
- Date: Fri, 01 Aug 2025 12:45:30 GMT
- Title: Wukong Framework for Not Safe For Work Detection in Text-to-Image systems
- Authors: Mingrui Liu, Sixiao Zhang, Cheng Long
- Abstract summary: Wukong is a transformer-based NSFW detection framework. It exploits intermediate outputs from early denoising steps and reuses U-Net's pre-trained cross-attention parameters. Results show that Wukong significantly outperforms text-based safeguards and achieves accuracy comparable to image filters.
- Score: 25.516648802281626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text-to-Image (T2I) generation is a popular AI-generated content (AIGC) technology enabling diverse and creative image synthesis. However, some outputs may contain Not Safe For Work (NSFW) content (e.g., violence), violating community guidelines. Detecting NSFW content efficiently and accurately, known as external safeguarding, is essential. Existing external safeguards fall into two types: text filters, which analyze user prompts but overlook T2I model-specific variations and are prone to adversarial attacks; and image filters, which analyze final generated images but are computationally costly and introduce latency. Diffusion models, the foundation of modern T2I systems like Stable Diffusion, generate images through iterative denoising using a U-Net architecture with ResNet and Transformer blocks. We observe that: (1) early denoising steps define the semantic layout of the image, and (2) cross-attention layers in U-Net are crucial for aligning text and image regions. Based on these insights, we propose Wukong, a transformer-based NSFW detection framework that leverages intermediate outputs from early denoising steps and reuses U-Net's pre-trained cross-attention parameters. Wukong operates within the diffusion process, enabling early detection without waiting for full image generation. We also introduce a new dataset containing prompts, seeds, and image-specific NSFW labels, and evaluate Wukong on this and two public benchmarks. Results show that Wukong significantly outperforms text-based safeguards and achieves accuracy comparable to image filters, while offering much greater efficiency.
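The mechanism the abstract describes can be pictured with a short sketch. The snippet below is a hypothetical illustration, not the paper's released implementation: using Hugging Face diffusers, it runs only the first few denoising steps of Stable Diffusion, hooks one cross-attention layer inside the U-Net to capture the text-aligned features that Wukong builds on, and leaves the classifier head itself out. The model ID, hooked layer, and step count are assumptions, and a recent diffusers release is assumed.

```python
# Hypothetical sketch of early-step, cross-attention-based NSFW detection
# (not the authors' code): run a few denoising steps, capture cross-attention
# activations, and stop before the image is ever fully generated.
import torch
from diffusers import StableDiffusionPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5"  # model choice is illustrative
).to(device)

features = []

def grab(module, args, output):
    # Cross-attention outputs carry text-aligned image features.
    features.append(output.detach().cpu())

# Hook one cross-attention (attn2) layer in the U-Net's mid block;
# the specific layer is an illustrative choice.
attn2 = pipe.unet.mid_block.attentions[0].transformer_blocks[0].attn2
handle = attn2.register_forward_hook(grab)

prompt = "a scenic mountain landscape"
with torch.no_grad():
    # Encode the prompt once (no classifier-free guidance in this sketch).
    text_emb, _ = pipe.encode_prompt(
        prompt, device, num_images_per_prompt=1,
        do_classifier_free_guidance=False,
    )
    pipe.scheduler.set_timesteps(50, device=device)
    latents = torch.randn(
        (1, pipe.unet.config.in_channels, 64, 64), device=device
    ) * pipe.scheduler.init_noise_sigma
    for t in pipe.scheduler.timesteps[:5]:  # early steps only, not all 50
        model_in = pipe.scheduler.scale_model_input(latents, t)
        noise_pred = pipe.unet(model_in, t, encoder_hidden_states=text_emb).sample
        latents = pipe.scheduler.step(noise_pred, t, latents).prev_sample
handle.remove()

# `features` now holds early-step cross-attention activations; a lightweight
# classifier trained on such features could flag NSFW content here and abort
# generation long before the final image exists.
print(len(features), features[0].shape)
```

Skipping the remaining denoising steps once a decision is made is where the efficiency advantage over image filters would come from, per the abstract's claim.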
Related papers
- T2UE: Generating Unlearnable Examples from Text Descriptions [60.111026156038264]
Unlearnable Examples (UEs) have emerged as a promising countermeasure against unauthorized model training. We introduce Text-to-Unlearnable Example (T2UE), a novel framework that enables users to generate UEs using only text descriptions.
arXiv Detail & Related papers (2025-08-05T05:10:14Z)
- Qwen-Image Technical Report [86.46471547116158]
We present Qwen-Image, an image generation foundation model that achieves significant advances in complex text rendering and precise image editing. We design a comprehensive data pipeline that includes large-scale data collection, filtering, annotation, synthesis, and balancing. Qwen-Image performs exceptionally well in alphabetic languages such as English, and also achieves remarkable progress on more challenging logographic languages like Chinese.
arXiv Detail & Related papers (2025-08-04T11:49:20Z)
- Unpaired Image-to-Image Translation for Segmentation and Signal Unmixing [12.51033401137503]
Ui2i is a novel model for unpaired image-to-image translation. It is trained on content-wise unpaired datasets to enable style transfer across domains.
arXiv Detail & Related papers (2025-05-27T05:36:50Z)
- NOFT: Test-Time Noise Finetune via Information Bottleneck for Highly Correlated Asset Creation [70.96827354717459]
Diffusion models have provided a strong tool for implementing text-to-image (T2I) and image-to-image (I2I) generation. We propose NOFT, a noise finetuning module employed by Stable Diffusion to generate highly correlated and diverse images.
arXiv Detail & Related papers (2025-05-18T05:09:47Z)
- NFIG: Autoregressive Image Generation with Next-Frequency Prediction [50.69346038028673]
We present Next-Frequency Image Generation (NFIG), a novel framework that decomposes the image generation process into multiple frequency-guided stages. Our approach first generates low-frequency components to establish global structure with fewer tokens, then progressively adds higher-frequency details, following the natural spectral hierarchy of images.
arXiv Detail & Related papers (2025-03-10T08:59:10Z)
- AEIOU: A Unified Defense Framework against NSFW Prompts in Text-to-Image Models [39.11841245506388]
Malicious users often exploit text-to-image (T2I) models to generate Not-Safe-for-Work (NSFW) images. We introduce AEIOU, a framework that is Adaptable, Efficient, Interpretable, Optimizable, and Unified against NSFW prompts in T2I models.
arXiv Detail & Related papers (2024-12-24T03:17:45Z)
- Frequency-Controlled Diffusion Model for Versatile Text-Guided Image-to-Image Translation [17.30877810859863]
Large-scale text-to-image (T2I) diffusion models have emerged as a powerful tool for image-to-image translation (I2I). This paper proposes the frequency-controlled diffusion model (FCDiffusion), an end-to-end diffusion-based framework.
arXiv Detail & Related papers (2024-07-03T11:05:19Z)
- CycleNet: Rethinking Cycle Consistency in Text-Guided Diffusion for Image Manipulation [57.836686457542385]
Diffusion models (DMs) have enabled breakthroughs in image synthesis tasks but lack an intuitive interface for consistent image-to-image (I2I) translation.
This paper introduces CycleNet, a novel but simple method that incorporates cycle consistency into DMs to regularize image manipulation.
arXiv Detail & Related papers (2023-10-19T21:32:21Z)
- BATINet: Background-Aware Text to Image Synthesis and Manipulation Network [12.924990882126105]
We analyzed a novel Background-Aware Text2Image (BAT2I) task in which the generated content matches the input background.
We proposed a Background-Aware Text to Image synthesis and manipulation Network (BATINet), which contains two key components.
We demonstrated through qualitative and quantitative evaluations on the CUB dataset that the proposed model outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2023-08-11T03:22:33Z)
- DF-GAN: A Simple and Effective Baseline for Text-to-Image Synthesis [80.54273334640285]
We propose a novel one-stage text-to-image backbone that directly synthesizes high-resolution images without entanglements between different generators.
We also propose a novel Target-Aware Discriminator composed of Matching-Aware Gradient Penalty and One-Way Output.
Compared with current state-of-the-art methods, our proposed DF-GAN is simpler yet more efficient at synthesizing realistic and text-matching images.
arXiv Detail & Related papers (2020-08-13T12:51:17Z)
- Scene Text Synthesis for Efficient and Effective Deep Network Training [62.631176120557136]
We develop an innovative image synthesis technique that composes annotated training images by embedding foreground objects of interest into background images.
The proposed technique consists of two key components that in principle boost the usefulness of the synthesized images in deep network training.
Experiments over a number of public datasets demonstrate the effectiveness of our proposed image synthesis technique.
arXiv Detail & Related papers (2019-01-26T10:15:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.