Implicit Bias Injection Attacks against Text-to-Image Diffusion Models
- URL: http://arxiv.org/abs/2504.01819v1
- Date: Wed, 02 Apr 2025 15:24:12 GMT
- Title: Implicit Bias Injection Attacks against Text-to-Image Diffusion Models
- Authors: Huayang Huang, Xiangye Jin, Jiaxu Miao, Yu Wu
- Abstract summary: Biased T2I models can generate content with specific tendencies, potentially influencing people's perceptions. This paper introduces a novel form of implicit bias that lacks explicit visual features but can manifest in diverse ways. We propose an implicit bias injection attack framework (IBI-Attacks) against T2I diffusion models.
- Score: 17.131167390657243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The proliferation of text-to-image diffusion models (T2I DMs) has led to an increased presence of AI-generated images in daily life. However, biased T2I models can generate content with specific tendencies, potentially influencing people's perceptions. Intentional exploitation of these biases risks conveying misleading information to the public. Current research on bias primarily addresses explicit biases with recognizable visual patterns, such as skin color and gender. This paper introduces a novel form of implicit bias that lacks explicit visual features but can manifest in diverse ways across various semantic contexts. This subtle and versatile nature makes this bias challenging to detect, easy to propagate, and adaptable to a wide range of scenarios. We further propose an implicit bias injection attack framework (IBI-Attacks) against T2I diffusion models by precomputing a general bias direction in the prompt embedding space and adaptively adjusting it based on different inputs. Our attack module can be seamlessly integrated into pre-trained diffusion models in a plug-and-play manner without direct manipulation of user input or model retraining. Extensive experiments validate the effectiveness of our scheme in introducing bias through subtle and diverse modifications while preserving the original semantics. The strong concealment and transferability of our attack across various scenarios further underscore the significance of our approach. Code is available at https://github.com/Hannah1102/IBI-attacks.
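As a rough illustration of the plug-and-play injection the abstract describes, the sketch below (not the authors' released code; `inject_bias`, `bias_direction`, and the norm-proportional scaling are hypothetical choices) adds a precomputed bias direction to the prompt embeddings before they condition the denoising network:

```python
import torch

# Hypothetical sketch of the plug-and-play idea: a bias direction
# precomputed in the prompt-embedding space is added to the text
# embeddings that condition the diffusion model. All names here are
# illustrative, not the paper's API.

def inject_bias(prompt_embeds: torch.Tensor,
                bias_direction: torch.Tensor,
                alpha: float = 0.1) -> torch.Tensor:
    """Shift prompt embeddings along a precomputed bias direction.

    prompt_embeds:  (batch, seq_len, dim) text-encoder output.
    bias_direction: (dim,) vector precomputed offline, e.g. as the mean
                    difference between embeddings of biased and neutral
                    prompt pairs.
    alpha:          perturbation strength (input-adaptive in the paper;
                    a constant here for simplicity).
    """
    d = bias_direction / bias_direction.norm()
    # Scale the shift by each token embedding's norm so the perturbation
    # stays proportionally small and the original semantics survive.
    scale = alpha * prompt_embeds.norm(dim=-1, keepdim=True)
    return prompt_embeds + scale * d

# Possible usage with a diffusers pipeline, wrapping the text encoder so
# neither the user prompt nor the model weights are touched:
#
# embeds = pipe.text_encoder(tokens)[0]
# embeds = inject_bias(embeds, bias_direction, alpha=0.1)
# image = pipe(prompt_embeds=embeds).images[0]
```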
Related papers
- Generalized Interpolating Discrete Diffusion [65.74168524007484]
Masked diffusion is a popular choice due to its simplicity and effectiveness.
We derive the theoretical backbone of a family of general interpolating discrete diffusion processes.
Exploiting GIDD's flexibility, we explore a hybrid approach combining masking and uniform noise.
arXiv Detail & Related papers (2025-03-06T14:30:55Z)
- InvDiff: Invariant Guidance for Bias Mitigation in Diffusion Models [28.51460282167433]
Diffusion models are highly data-driven and prone to inheriting imbalances and biases present in real-world data.
We propose a framework, InvDiff, which aims to learn invariant semantic information for diffusion guidance.
InvDiff effectively reduces biases while maintaining the quality of image generation.
arXiv Detail & Related papers (2024-12-11T15:47:11Z)
- CosFairNet: A Parameter-Space based Approach for Bias Free Learning [1.9116784879310025]
Deep neural networks trained on biased data often inadvertently learn unintended inference rules.
We introduce a novel approach to address bias directly in the model's parameter space, preventing its propagation across layers.
We show enhanced classification accuracy and debiasing effectiveness across various synthetic and real-world datasets.
arXiv Detail & Related papers (2024-10-19T13:06:40Z)
- OpenBias: Open-set Bias Detection in Text-to-Image Generative Models [108.2219657433884]
We tackle the challenge of open-set bias detection in text-to-image generative models, presenting OpenBias.
OpenBias identifies and quantifies the severity of biases agnostically, without access to any precompiled set.
We study the behavior of Stable Diffusion 1.5, 2, and XL, emphasizing new biases never investigated before.
arXiv Detail & Related papers (2024-04-11T17:59:56Z)
- Manipulating and Mitigating Generative Model Biases without Retraining [49.60774626839712]
We propose a dynamic and computationally efficient manipulation of T2I model biases by exploiting their rich language embedding spaces without model retraining.
We show that foundational vector algebra allows convenient control over language model embeddings to shift T2I model outputs.
As a by-product, this control serves as a form of precise prompt engineering to generate images which are generally implausible using regular text prompts.
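A minimal sketch of this style of embedding arithmetic (illustrative only; the function names and the difference-of-means estimate are assumptions, not the paper's implementation):

```python
import torch

# Illustrative sketch of bias control via vector algebra on prompt
# embeddings (not the paper's code). An attribute direction is estimated
# from paired prompts and then added to a new prompt's embedding to shift
# the T2I output, or subtracted to mitigate the attribute.

def attribute_direction(embeds_with: torch.Tensor,
                        embeds_without: torch.Tensor) -> torch.Tensor:
    """Difference-of-means direction from (n, dim) embedding batches,
    e.g. embed('a photo of a doctor, female') vs. embed('a photo of a doctor')."""
    d = (embeds_with - embeds_without).mean(dim=0)
    return d / d.norm()

def shift_embedding(prompt_embed: torch.Tensor,
                    direction: torch.Tensor,
                    strength: float) -> torch.Tensor:
    # strength > 0 amplifies the attribute; strength < 0 suppresses it.
    return prompt_embed + strength * direction
```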
arXiv Detail & Related papers (2024-04-03T07:33:30Z)
- Quantifying Bias in Text-to-Image Generative Models [49.60774626839712]
Bias in text-to-image (T2I) models can propagate unfair social representations and may be used to aggressively market ideas or push controversial agendas.
Existing T2I model bias evaluation methods only focus on social biases.
We propose an evaluation methodology to quantify general biases in T2I generative models, without any preconceived notions.
arXiv Detail & Related papers (2023-12-20T14:26:54Z)
- Semantic Adversarial Attacks via Diffusion Models [30.169827029761702]
Semantic adversarial attacks focus on changing semantic attributes of clean examples, such as color, context, and features.
We propose a framework to quickly generate a semantic adversarial attack by leveraging recent diffusion models.
Our approaches achieve approximately 100% attack success rate in multiple settings, with a best FID of 36.61.
arXiv Detail & Related papers (2023-09-14T02:57:48Z)
- Diffusion Models for Imperceptible and Transferable Adversarial Attack [23.991194050494396]
We propose a novel imperceptible and transferable attack by leveraging both the generative and discriminative power of diffusion models.
Our proposed method, DiffAttack, is the first that introduces diffusion models into the adversarial attack field.
arXiv Detail & Related papers (2023-05-14T16:02:36Z)
- Your Diffusion Model is Secretly a Zero-Shot Classifier [90.40799216880342]
We show that density estimates from large-scale text-to-image diffusion models can be leveraged to perform zero-shot classification.
Our generative approach to classification attains strong results on a variety of benchmarks.
Our results are a step toward using generative over discriminative models for downstream tasks.
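The underlying recipe, sketched under stated assumptions (`unet`, `encode_prompt`, and `scheduler` are stand-ins for a real diffusion pipeline): score each class prompt by the conditioned denoiser's noise-prediction error and pick the minimum.

```python
import torch

# Hypothetical sketch of diffusion-based zero-shot classification: score
# each class by how well the text-conditioned denoiser predicts the noise
# added to the (latent) image, then pick the lowest-error class.

@torch.no_grad()
def classify(latent, class_prompts, unet, encode_prompt, scheduler, n_trials=32):
    errors = []
    for prompt in class_prompts:
        cond = encode_prompt(prompt)          # text conditioning for this class
        err = 0.0
        for _ in range(n_trials):             # Monte Carlo over (timestep, noise)
            t = torch.randint(0, scheduler.config.num_train_timesteps, (1,))
            noise = torch.randn_like(latent)
            noisy = scheduler.add_noise(latent, noise, t)
            pred = unet(noisy, t, encoder_hidden_states=cond).sample
            err += torch.mean((pred - noise) ** 2).item()
        errors.append(err / n_trials)
    return int(torch.tensor(errors).argmin())  # index of best class prompt
```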
arXiv Detail & Related papers (2023-03-28T17:59:56Z)
- Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation [31.944147533327058]
We propose a novel method, Contrastive Debiasing via Generative Bias-transformation (CDvG), which works without explicit bias labels or bias-free samples.
Our method demonstrates superior performance compared to prior approaches, especially when bias-free samples are scarce or absent.
arXiv Detail & Related papers (2021-12-02T07:16:06Z)
- Improving Robustness by Augmenting Training Sentences with Predicate-Argument Structures [62.562760228942054]
Existing approaches to improve robustness against dataset biases mostly focus on changing the training objective.
We propose to augment the input sentences in the training data with their corresponding predicate-argument structures.
We show that without targeting a specific bias, our sentence augmentation improves the robustness of transformer models against multiple biases.
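A toy illustration of the augmentation idea (the serialization format and the source of the semantic-role frames are assumptions, not the paper's exact recipe):

```python
# Illustrative sketch of predicate-argument augmentation: append a
# serialized semantic-role parse to each training sentence so the model
# sees who-did-what-to-whom explicitly. The tag format is hypothetical.

def augment(sentence: str, srl_frames: list[dict]) -> str:
    """srl_frames, e.g. from an off-the-shelf SRL tagger:
    [{"predicate": "sold", "ARG0": "the company", "ARG1": "its assets"}]"""
    parts = []
    for frame in srl_frames:
        parts.append(" ".join(f"[{role}: {span}]" for role, span in frame.items()))
    return sentence + " || " + " ; ".join(parts)

print(augment(
    "The company sold its assets last year.",
    [{"predicate": "sold", "ARG0": "the company", "ARG1": "its assets"}],
))
# -> The company sold its assets last year. || [predicate: sold] [ARG0: the company] [ARG1: its assets]
```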
arXiv Detail & Related papers (2020-10-23T16:22:05Z)
- Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z)