epsilon-Mesh Attack: A Surface-based Adversarial Point Cloud Attack for
Facial Expression Recognition
- URL: http://arxiv.org/abs/2403.06661v1
- Date: Mon, 11 Mar 2024 12:29:55 GMT
- Title: epsilon-Mesh Attack: A Surface-based Adversarial Point Cloud Attack for
Facial Expression Recognition
- Authors: Batuhan Cengiz, Mert Gulsen, Yusuf H. Sahin, Gozde Unal
- Abstract summary: A common method is to use adversarial attacks, where the gradient direction is followed to change the input slightly.
In this paper, we suggest an adversarial attack called $\epsilon$-Mesh Attack, which operates on point cloud data by limiting perturbations to lie on the mesh surface.
Our method successfully confuses trained DGCNN and PointNet models $99.72\%$ and $97.06\%$ of the time, with indistinguishable facial deformations.
- Score: 0.8192907805418583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds and meshes are widely used 3D data structures for many computer
vision applications. While meshes represent the surfaces of an object, a point
cloud represents points sampled from the surface, which is also the output
format of modern sensors such as LiDAR and RGB-D cameras. Due to the wide
application area of point clouds and the recent advancements in deep neural
networks, studies focusing on robust classification of 3D point cloud data have emerged.
To evaluate the robustness of deep classifier networks, a common method is to
use adversarial attacks where the gradient direction is followed to change the
input slightly. Previous studies on adversarial attacks are generally
evaluated on point clouds of everyday objects. However, for 3D faces,
these adversarial attacks tend to affect the person's facial structure more
than the desired amount and cause malformation. Specifically for facial
expressions, even a small adversarial attack can have a significant effect on
the face structure. In this paper, we suggest an adversarial attack called
$\epsilon$-Mesh Attack, which operates on point cloud data via limiting
perturbations to be on the mesh surface. We also parameterize our attack by
$\epsilon$ to scale the perturbation mesh. Our surface-based attack has tighter
perturbation bounds compared to $L_2$ and $L_\infty$ norm bounded attacks that
operate on a unit ball. Even though our method has additional constraints, our
experiments on CoMA, Bosphorus and FaceWarehouse datasets show that
$\epsilon$-Mesh Attack (Perpendicular) successfully confuses trained DGCNN and
PointNet models $99.72\%$ and $97.06\%$ of the time, with indistinguishable
facial deformations. The code is available at
https://github.com/batuceng/e-mesh-attack.
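The authoritative implementation is the repository linked above; as a minimal illustration of the core idea described in the abstract (the "Perpendicular" variant constrains each point's perturbation to the plane of its mesh triangle), a hypothetical NumPy sketch might look like the following. The function name and signature are assumptions for illustration, not the paper's API:

```python
import numpy as np

def project_to_triangle_plane(grad, v0, v1, v2, eps=1.0):
    """Hypothetical sketch: project a gradient step onto the plane of the
    triangle (v0, v1, v2), so the perturbed point stays on the mesh surface.

    grad       : (3,) gradient of the loss w.r.t. one point
    v0, v1, v2 : (3,) vertices of the triangle containing that point
    eps        : scale of the perturbation, as in the paper's epsilon
    """
    # Unit normal of the triangle's plane.
    n = np.cross(v1 - v0, v2 - v0)
    n = n / np.linalg.norm(n)
    # Remove the component of the gradient along the normal; what
    # remains is the perpendicular projection onto the plane.
    tangential = grad - np.dot(grad, n) * n
    return eps * tangential
```

The returned perturbation is orthogonal to the triangle's normal by construction, so adding it to a point moves the point within the triangle's plane rather than off the surface (the actual attack additionally keeps the point inside the triangle itself).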
Related papers
- Hide in Thicket: Generating Imperceptible and Rational Adversarial
Perturbations on 3D Point Clouds [62.94859179323329]
Adversarial attack methods based on point manipulation for 3D point cloud classification have revealed the fragility of 3D models.
We propose a novel shape-based adversarial attack method, HiT-ADV, which conducts a two-stage search for attack regions based on saliency and imperceptibility perturbation scores.
We propose that by employing benign resampling and benign rigid transformations, we can further enhance physical adversarial strength with little sacrifice to imperceptibility.
arXiv Detail & Related papers (2024-03-08T12:08:06Z) - 3DHacker: Spectrum-based Decision Boundary Generation for Hard-label 3D
Point Cloud Attack [64.83391236611409]
We propose a novel 3D attack method to generate adversarial samples solely with the knowledge of class labels.
Even in the challenging hard-label setting, 3DHacker still competitively outperforms existing 3D attacks regarding the attack performance as well as adversary quality.
arXiv Detail & Related papers (2023-08-15T03:29:31Z) - SAGA: Spectral Adversarial Geometric Attack on 3D Meshes [13.84270434088512]
A triangular mesh is one of the most popular 3D data representations.
We propose a novel framework for a geometric adversarial attack on a 3D mesh autoencoder.
arXiv Detail & Related papers (2022-11-24T19:29:04Z) - Shape-invariant 3D Adversarial Point Clouds [111.72163188681807]
Adversarial strength and invisibility are two fundamental but conflicting characteristics of adversarial perturbations.
Previous adversarial attacks on 3D point cloud recognition have often been criticized for their noticeable point outliers.
We propose a novel Point-Cloud Sensitivity Map to boost both the efficiency and imperceptibility of point perturbations.
arXiv Detail & Related papers (2022-03-08T12:21:35Z) - MeshUDF: Fast and Differentiable Meshing of Unsigned Distance Field
Networks [68.82901764109685]
Recent work modelling 3D open surfaces trains deep neural networks to approximate Unsigned Distance Fields (UDFs).
We propose to directly mesh deep UDFs as open surfaces with an extension of marching cubes, by locally detecting surface crossings.
Our method is an order of magnitude faster than meshing a dense point cloud, and more accurate than inflating open surfaces.
arXiv Detail & Related papers (2021-11-29T14:24:02Z) - Generating Unrestricted 3D Adversarial Point Clouds [9.685291478330054]
Deep learning for 3D point clouds is still vulnerable to adversarial attacks.
We propose an Adversarial Graph-Convolutional Generative Adversarial Network (AdvGCGAN) to generate realistic adversarial 3D point clouds.
arXiv Detail & Related papers (2021-11-17T08:30:18Z) - 3D Adversarial Attacks Beyond Point Cloud [8.076067288723133]
Previous adversarial attacks on 3D point clouds mainly focus on adding perturbations to the original point cloud.
We present a novel adversarial attack, named Mesh Attack, to address this problem.
arXiv Detail & Related papers (2021-04-25T13:01:41Z) - Geometric Adversarial Attacks and Defenses on 3D Point Clouds [25.760935151452063]
In this work, we explore adversarial examples at a geometric level.
That is, a small change to a clean source point cloud leads, after passing through an autoencoder model, to a shape from a different target class.
On the defense side, we show that remnants of the attack's target shape are still present at the reconstructed output after applying the defense to the adversarial input.
arXiv Detail & Related papers (2020-12-10T13:30:06Z) - IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function
based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z) - ShapeAdv: Generating Shape-Aware Adversarial 3D Point Clouds [78.25501874120489]
We develop shape-aware adversarial 3D point cloud attacks by leveraging the learned latent space of a point cloud auto-encoder.
Different from prior works, the resulting adversarial 3D point clouds reflect the shape variations in the 3D point cloud space while still being close to the original one.
arXiv Detail & Related papers (2020-05-24T00:03:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.