Black-box Adversaries from Latent Space: Unnoticeable Attacks on Human Pose and Shape Estimation
- URL: http://arxiv.org/abs/2505.12009v1
- Date: Sat, 17 May 2025 14:02:02 GMT
- Title: Black-box Adversaries from Latent Space: Unnoticeable Attacks on Human Pose and Shape Estimation
- Authors: Zhiying Li, Guanggang Geng, Yeying Jin, Zhizhi Guo, Bruce Gu, Jidong Huo, Zhaoxin Fan, Wenjun Wu,
- Abstract summary: We propose a novel Unnoticeable Black-Box Attack (UBA) against EHPS models. UBA exploits latent-space representations of natural images to generate an optimal adversarial noise pattern. UBA increases the pose estimation errors of EHPS models by 17.27%-58.21% on average, revealing critical vulnerabilities.
- Score: 9.436103046529764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Expressive human pose and shape (EHPS) estimation is vital for digital human generation, particularly in live-streaming applications. However, most existing EHPS models focus primarily on minimizing estimation errors, with limited attention to potential security vulnerabilities. Current adversarial attacks on EHPS models often require white-box access (e.g., model details or gradients) or generate visually conspicuous perturbations, limiting their practicality and ability to expose real-world security threats. To address these limitations, we propose a novel Unnoticeable Black-Box Attack (UBA) against EHPS models. UBA leverages the latent-space representations of natural images to generate an optimal adversarial noise pattern and iteratively refine its attack potency along an optimized direction in digital space. Crucially, this process relies solely on querying the model's output, requiring no internal knowledge of the EHPS architecture, while guiding the noise optimization toward greater stealth and effectiveness. Extensive experiments and visual analyses demonstrate the superiority of UBA. Notably, UBA increases the pose estimation errors of EHPS models by 17.27%-58.21% on average, revealing critical vulnerabilities. These findings underscore the urgent need to address and mitigate security risks associated with digital human generation systems.
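The abstract's core idea, an attack refined solely by querying the model's output, can be illustrated with a generic query-only random-search loop. This is a minimal sketch, not the paper's UBA method: the latent-space noise initialization is omitted, the `ehps_model` below is a hypothetical placeholder standing in for a real pose estimator, and the acceptance rule (keep a candidate perturbation only if it increases pose error) is the simplest possible black-box search.

```python
import numpy as np

rng = np.random.default_rng(0)

def ehps_model(image):
    """Hypothetical stand-in for an EHPS model: maps an image to two
    'pose' coordinates. A real model would be a deep network queried
    as an opaque black box."""
    w = np.linspace(0.0, 1.0, image.size).reshape(image.shape)
    return np.array([float((image * w).sum()), float((image * (1 - w)).sum())])

def pose_error(pred, ref):
    # Distance between the model's output on the perturbed and clean inputs.
    return float(np.linalg.norm(pred - ref))

def black_box_attack(image, eps=0.05, iters=50, sigma=0.01):
    """Query-only random search: propose a small random change to the
    perturbation, query the model, and keep the change only if it
    increases pose-estimation error. No gradients or internals used."""
    ref = ehps_model(image)          # clean prediction (one query)
    delta = np.zeros_like(image)     # current perturbation, bounded by eps
    best_err = 0.0
    for _ in range(iters):
        cand = np.clip(delta + sigma * rng.standard_normal(image.shape),
                       -eps, eps)    # keep the perturbation unnoticeable
        adv = np.clip(image + cand, 0.0, 1.0)
        err = pose_error(ehps_model(adv), ref)  # one query per candidate
        if err > best_err:
            best_err, delta = err, cand
    return delta, best_err

img = rng.random((8, 8))
delta, err = black_box_attack(img)
```

The `eps` clip is what keeps the perturbation small in pixel space; UBA's contribution, per the abstract, is to make the search both more query-efficient and less noticeable by starting from and steering through latent-space representations of natural images.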
Related papers
- An h-space Based Adversarial Attack for Protection Against Few-shot Personalization [5.357486699062561]
We propose a novel anti-customization approach, called HAAD, that leverages adversarial attacks to craft perturbations based on the h-space. We introduce a more efficient variant, HAAD-KV, that constructs perturbations solely based on the KV parameters of the h-space. Despite their simplicity, our methods outperform state-of-the-art adversarial attacks, highlighting their effectiveness.
arXiv Detail & Related papers (2025-07-23T14:43:22Z) - Unveiling Hidden Vulnerabilities in Digital Human Generation via Adversarial Attacks [14.356235723912564]
We propose a novel framework designed to generate adversarial examples capable of effectively compromising any digital human generation model. Our approach introduces a Dual Heterogeneous Noise Generator (DHNG), which leverages Variational Autoencoders (VAE) and ControlNet to produce diverse, targeted noise tailored to the original image features. Extensive experiments demonstrate TBA's superiority, achieving a remarkable 41.0% increase in estimation error, with an average improvement of approximately 17.0%.
arXiv Detail & Related papers (2025-04-24T11:42:10Z) - Backdoor Defense in Diffusion Models via Spatial Attention Unlearning [0.0]
Text-to-image diffusion models are increasingly vulnerable to backdoor attacks. We propose Spatial Attention Unlearning (SAU), a novel technique for mitigating backdoor attacks in diffusion models.
arXiv Detail & Related papers (2025-04-21T04:00:19Z) - SAP-DIFF: Semantic Adversarial Patch Generation for Black-Box Face Recognition Models via Diffusion Models [4.970240615354004]
Impersonation attacks are a significant threat because adversarial perturbations allow attackers to disguise themselves as legitimate users. We propose a novel method to generate adversarial patches via semantic perturbations in the latent space rather than direct pixel manipulation. Our method achieves an average attack success rate improvement of 45.66% and a reduction in the number of queries by about 40%.
arXiv Detail & Related papers (2025-02-27T02:57:29Z) - Perturb, Attend, Detect and Localize (PADL): Robust Proactive Image Defense [5.150608040339816]
We introduce PADL, a new solution able to generate image-specific perturbations using a symmetric scheme of encoding and decoding based on cross-attention.
Our method generalizes to a range of unseen models with diverse architectural designs, such as StarGANv2, BlendGAN, DiffAE, StableDiffusion and StableDiffusionXL.
arXiv Detail & Related papers (2024-09-26T15:16:32Z) - Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z) - STBA: Towards Evaluating the Robustness of DNNs for Query-Limited Black-box Scenario [50.37501379058119]
We propose the Spatial Transform Black-box Attack (STBA) to craft formidable adversarial examples in the query-limited scenario.
We show that STBA could effectively improve the imperceptibility of the adversarial examples and remarkably boost the attack success rate under query-limited settings.
arXiv Detail & Related papers (2024-03-30T13:28:53Z) - Data Forensics in Diffusion Models: A Systematic Analysis of Membership Privacy [62.16582309504159]
We develop a systematic analysis of membership inference attacks on diffusion models and propose novel attack methods tailored to each attack scenario.
Our approach exploits easily obtainable quantities and is highly effective, achieving near-perfect attack performance (>0.9 AUCROC) in realistic scenarios.
arXiv Detail & Related papers (2023-02-15T17:37:49Z) - Ada3Diff: Defending against 3D Adversarial Point Clouds via Adaptive Diffusion [70.60038549155485]
Deep 3D point cloud models are sensitive to adversarial attacks, which poses threats to safety-critical applications such as autonomous driving.
This paper introduces a novel distortion-aware defense framework that can rebuild the pristine data distribution with a tailored intensity estimator and a diffusion model.
arXiv Detail & Related papers (2022-11-29T14:32:43Z) - On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models, such as Inception Score (IS) and Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
arXiv Detail & Related papers (2022-01-31T06:43:09Z) - EmergencyNet: Efficient Aerial Image Classification for Drone-Based Emergency Monitoring Using Atrous Convolutional Feature Fusion [8.634988828030245]
This article focuses on efficient aerial image classification from on board a UAV for emergency response/monitoring applications.
A dedicated Aerial Image Database for Emergency Response applications is introduced and a comparative analysis of existing approaches is performed.
A lightweight convolutional neural network architecture is proposed, referred to as EmergencyNet, based on atrous convolutions to process multiresolution features.
arXiv Detail & Related papers (2021-04-28T20:24:10Z) - Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting [61.99564267735242]
Crowd counting has drawn much attention due to its importance in safety-critical surveillance systems.
Recent studies have demonstrated that deep neural network (DNN) methods are vulnerable to adversarial attacks.
We propose a robust attack strategy called Adversarial Patch Attack with Momentum to evaluate the robustness of crowd counting models.
arXiv Detail & Related papers (2021-04-22T05:10:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.