Physically Realizable Natural-Looking Clothing Textures Evade Person Detectors via 3D Modeling
- URL: http://arxiv.org/abs/2307.01778v2
- Date: Fri, 08 Nov 2024 01:17:17 GMT
- Title: Physically Realizable Natural-Looking Clothing Textures Evade Person Detectors via 3D Modeling
- Authors: Zhanhao Hu, Wenda Chu, Xiaopei Zhu, Hui Zhang, Bo Zhang, Xiaolin Hu
- Abstract summary: We craft adversarial textures for clothes based on 3D modeling.
We propose adversarial camouflage textures (AdvCaT) that resemble camouflage patterns, a typical texture of daily clothes.
We printed the developed 3D texture pieces on fabric materials and tailored them into T-shirts and trousers.
- Score: 19.575338491567813
- Abstract: Recent works have proposed to craft adversarial clothes for evading person detectors, but they are either only effective at limited viewing angles or very conspicuous to humans. We aim to craft adversarial textures for clothes based on 3D modeling, an idea that has been used to craft rigid adversarial objects such as a 3D-printed turtle. Unlike rigid objects, humans and clothes are non-rigid, which makes physical realization difficult. In order to craft natural-looking adversarial clothes that can evade person detectors at multiple viewing angles, we propose adversarial camouflage textures (AdvCaT) that resemble camouflage patterns, a typical texture of daily clothes. We leverage the Voronoi diagram and the Gumbel-softmax trick to parameterize the camouflage textures and optimize the parameters via 3D modeling. Moreover, we propose an efficient augmentation pipeline on 3D meshes combining topologically plausible projection (TopoProj) and Thin Plate Spline (TPS) to narrow the gap between digital and real-world objects. We printed the developed 3D texture pieces on fabric materials and tailored them into T-shirts and trousers. Experiments show high attack success rates of these clothes against multiple detectors.
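The two parameterization ingredients the abstract names (a Voronoi diagram over learnable sites and the Gumbel-softmax trick for differentiable color selection) can be illustrated with a minimal sketch. This is a hypothetical NumPy illustration, not the authors' implementation: all names, shapes, palette sizes, and the random initialization are assumptions, and the paper's pipeline additionally keeps the per-pixel site assignment differentiable rather than using a hard argmin as done here.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W = 64, 64   # texture resolution (assumed)
K = 12          # number of Voronoi sites (assumed)
C = 4           # size of the camouflage color palette (assumed)
TAU = 0.5       # Gumbel-softmax temperature (assumed)

# Learnable parameters, randomly initialized for illustration:
points = rng.uniform(0, 1, size=(K, 2))    # Voronoi sites in [0, 1]^2
logits = rng.normal(size=(K, C))           # per-site color logits
palette = rng.uniform(0, 1, size=(C, 3))   # RGB palette colors

def gumbel_softmax(logits, tau, rng):
    """Differentiable relaxation of sampling a one-hot color choice."""
    u = rng.uniform(1e-9, 1.0, size=logits.shape)
    g = -np.log(-np.log(u))                       # Gumbel(0, 1) noise
    y = (logits + g) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True)) # stable softmax
    return y / y.sum(axis=-1, keepdims=True)

# Soft color choice per Voronoi site (rows sum to 1), then a convex
# combination of palette colors per site.
soft_onehot = gumbel_softmax(logits, TAU, rng)    # (K, C)
site_colors = soft_onehot @ palette               # (K, 3)

# Rasterize: each pixel takes the color of its nearest Voronoi site.
ys, xs = np.meshgrid(np.linspace(0, 1, H), np.linspace(0, 1, W), indexing="ij")
pix = np.stack([xs, ys], axis=-1).reshape(-1, 2)            # (H*W, 2)
d2 = ((pix[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # (H*W, K)
nearest = d2.argmin(axis=1)
texture = site_colors[nearest].reshape(H, W, 3)
```

During optimization, `points` and `logits` would be updated by gradients from the detector's loss; lowering `TAU` over training pushes the soft color choices toward discrete, printable camouflage patches.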
Related papers
- Synthesizing Moving People with 3D Control [81.92710208308684]
We present a diffusion model-based framework for animating people from a single image for a given target 3D motion sequence.
For the first part, we learn an in-filling diffusion model to hallucinate unseen parts of a person given a single image.
Second, we develop a diffusion-based rendering pipeline, which is controlled by 3D human poses.
arXiv Detail & Related papers (2024-01-19T18:59:11Z)
- SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes [62.82552328188602]
We present SCULPT, a novel 3D generative model for clothed and textured 3D meshes of humans.
We devise a deep neural network that learns to represent the geometry and appearance distribution of clothed human bodies.
arXiv Detail & Related papers (2023-08-21T11:23:25Z)
- PERGAMO: Personalized 3D Garments from Monocular Video [6.8338761008826445]
PERGAMO is a data-driven approach to learn a deformable model for 3D garments from monocular images.
We first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos.
We show that our method is capable of producing garment animations that match real-world behaviour, and generalizes to unseen body motions extracted from a motion capture dataset.
arXiv Detail & Related papers (2022-10-26T21:15:54Z)
- Adversarial Texture for Fooling Person Detectors in the Physical World [38.39939625606267]
Adversarial Texture (AdvTexture) can cover clothes with arbitrary shapes so that people wearing such clothes can hide from person detectors at different viewing angles.
We propose a generative method, named Toroidal-Cropping-based Expandable Generative Attack (TC-EGA) to craft AdvTexture with repetitive structures.
Experiments showed that these clothes could fool person detectors in the physical world.
arXiv Detail & Related papers (2022-03-07T13:22:25Z)
- gDNA: Towards Generative Detailed Neural Avatars [94.9804106939663]
We show that our model is able to generate natural human avatars wearing diverse and detailed clothing.
Our method can be used on the task of fitting human models to raw scans, outperforming the previous state-of-the-art.
arXiv Detail & Related papers (2022-01-11T18:46:38Z)
- The Power of Points for Modeling Humans in Clothing [60.00557674969284]
Creating 3D human avatars with realistic clothing that moves naturally currently requires an artist.
We show that a 3D representation can capture varied topology at high resolution and can be learned from data.
We train a neural network with a novel local clothing geometric feature to represent the shape of different outfits.
arXiv Detail & Related papers (2021-09-02T17:58:45Z)
- Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors [72.7633556669675]
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
Unlike existing adversarial patches, our new 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views.
arXiv Detail & Related papers (2021-04-22T14:36:08Z)
- 3D Invisible Cloak [12.48087784777591]
We propose a novel physical stealth attack against person detectors in the real world.
The proposed method generates an adversarial patch, and prints it on real clothes to make a 3D invisible cloak.
arXiv Detail & Related papers (2020-11-27T12:43:04Z)
- Learning to Transfer Texture from Clothing Images to 3D Humans [50.838970996234465]
We present a method to automatically transfer textures of clothing images to 3D garments worn on top of SMPL, in real time.
We first compute training pairs of images with aligned 3D garments using a custom non-rigid 3D to 2D registration method, which is accurate but slow.
Our model opens the door to applications such as virtual try-on, and allows for the generation of 3D humans with varied textures, which is necessary for learning.
arXiv Detail & Related papers (2020-03-04T12:53:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.