SAGA: Spectral Adversarial Geometric Attack on 3D Meshes
- URL: http://arxiv.org/abs/2211.13775v2
- Date: Mon, 25 Sep 2023 08:36:08 GMT
- Title: SAGA: Spectral Adversarial Geometric Attack on 3D Meshes
- Authors: Tomer Stolik, Itai Lang, Shai Avidan
- Abstract summary: A triangular mesh is one of the most popular 3D data representations.
We propose a novel framework for a geometric adversarial attack on a 3D mesh autoencoder.
- Score: 13.84270434088512
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A triangular mesh is one of the most popular 3D data representations. As
such, the deployment of deep neural networks for mesh processing is widespread
and is attracting increasing attention. However, neural networks
are prone to adversarial attacks, where carefully crafted inputs impair the
model's functionality. The need to explore these vulnerabilities is a
fundamental factor in the future development of 3D-based applications.
Recently, mesh attacks have been studied at the semantic level, where classifiers
are misled into producing wrong predictions. Nevertheless, mesh surfaces possess
complex geometric attributes beyond their semantic meaning, and their analysis
often includes the need to encode and reconstruct the geometry of the shape.
We propose a novel framework for a geometric adversarial attack on a 3D mesh
autoencoder. In this setting, an adversarial input mesh deceives the
autoencoder by forcing it to reconstruct a different geometric shape at its
output. The malicious input is produced by perturbing a clean shape in the
spectral domain. Our method leverages the spectral decomposition of the mesh
along with additional mesh-related properties to obtain visually credible
results that consider the delicacy of surface distortions. Our code is publicly
available at https://github.com/StolikTomer/SAGA.
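A minimal illustration of the spectral-domain perturbation described above, assuming a uniform graph Laplacian and a random step in place of the optimized one; the helper names are hypothetical and this is not the authors' implementation (the official code is at https://github.com/StolikTomer/SAGA). The idea: project the vertex coordinates onto the low-frequency eigenvectors of the mesh Laplacian, nudge those coefficients, and map back to vertex space.
```python
# Hedged sketch of a spectral-domain mesh perturbation (not the SAGA code).
import numpy as np
import scipy.sparse as sp

def uniform_laplacian(num_vertices, faces):
    """Graph Laplacian L = D - A built from the triangle connectivity."""
    i = faces[:, [0, 1, 2, 1, 2, 0]].ravel()
    j = faces[:, [1, 2, 0, 0, 1, 2]].ravel()
    adj = sp.coo_matrix((np.ones(len(i)), (i, j)),
                        shape=(num_vertices, num_vertices)).toarray()
    adj = (adj > 0).astype(float)          # binarize duplicate edge entries
    return np.diag(adj.sum(axis=1)) - adj

def perturb_in_spectral_domain(vertices, faces, k=64, eps=0.01):
    """Perturb the k lowest-frequency spectral coefficients of the vertex coordinates."""
    L = uniform_laplacian(len(vertices), faces)
    _, eigvecs = np.linalg.eigh(L)         # ascending eigenvalues; dense, so small meshes only
    basis = eigvecs[:, :k]                 # low-frequency Laplacian eigenbasis, (V, k)
    delta = eps * np.random.randn(k, 3)    # stand-in for the optimized adversarial step
    return vertices + basis @ delta        # smooth, low-frequency change to the shape
```
In the actual attack the random step would be replaced by coefficients optimized so that the victim autoencoder reconstructs a chosen target geometry; restricting the change to the low-frequency basis keeps the surface distortion smooth, in line with the abstract's emphasis on visually credible results.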
Related papers
- Monocular 3D Object Reconstruction with GAN Inversion [122.96094885939146]
MeshInversion is a novel framework to improve the reconstruction of textured 3D meshes.
It exploits the generative prior of a 3D GAN pre-trained for 3D textured mesh synthesis.
Our framework obtains faithful 3D reconstructions with consistent geometry and texture across both observed and unobserved parts.
arXiv Detail & Related papers (2022-07-20T17:47:22Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- Mesh Convolution with Continuous Filters for 3D Surface Parsing [101.25796935464648]
We propose a series of modular operations for effective geometric feature learning from 3D triangle meshes.
Our mesh convolutions exploit spherical harmonics as orthonormal bases to create continuous convolutional filters.
We further contribute a novel hierarchical neural network for perceptual parsing of 3D surfaces, named PicassoNet++.
arXiv Detail & Related papers (2021-12-03T09:16:49Z)
- Generating Band-Limited Adversarial Surfaces Using Neural Networks [0.9208007322096533]
Adversarial examples are created by adding a carefully crafted noise to the input signal of a classifying neural network.
In this technical report we suggest a neural network that generates the attacks.
arXiv Detail & Related papers (2021-11-14T19:16:05Z)
- Deep Marching Tetrahedra: a Hybrid Representation for High-Resolution 3D Shape Synthesis [90.26556260531707]
DMTet is a conditional generative model that can synthesize high-resolution 3D shapes using simple user guides such as coarse voxels.
Unlike deep 3D generative models that directly generate explicit representations such as meshes, our model can synthesize shapes with arbitrary topology.
arXiv Detail & Related papers (2021-11-08T05:29:35Z)
- Topologically Consistent Multi-View Face Inference Using Volumetric Sampling [25.001398662643986]
ToFu is a geometry inference framework that can produce topologically consistent meshes across identities and expressions.
A novel progressive mesh generation network embeds the topological structure of the face in a feature volume.
These high-quality assets are readily usable by production studios for avatar creation, animation and physically-based skin rendering.
arXiv Detail & Related papers (2021-10-06T17:55:08Z)
- Geometric Adversarial Attacks and Defenses on 3D Point Clouds [25.760935151452063]
In this work, we explore adversarial examples at a geometric level.
That is, a small change to a clean source point cloud leads, after passing through an autoencoder model, to a shape from a different target class (a generic sketch of this objective appears after this list).
On the defense side, we show that remnants of the attack's target shape are still present at the reconstructed output after applying the defense to the adversarial input.
arXiv Detail & Related papers (2020-12-10T13:30:06Z)
- Learning Deformable Tetrahedral Meshes for 3D Reconstruction [78.0514377738632]
3D shape representations that accommodate learning-based 3D reconstruction are an open problem in machine learning and computer graphics.
Previous work on neural 3D reconstruction demonstrated benefits, but also limitations, of point cloud, voxel, surface mesh, and implicit function representations.
We introduce Deformable Tetrahedral Meshes (DefTet) as a particular parameterization that utilizes volumetric tetrahedral meshes for the reconstruction problem.
arXiv Detail & Related papers (2020-11-03T02:57:01Z)
- On the Effectiveness of Weight-Encoded Neural Implicit 3D Shapes [38.13954772608884]
A neural implicit outputs a number indicating whether the given query point in space is inside, outside, or on a surface.
Prior works have focused on _latent-encoded_ neural implicits, where a latent vector encoding of a specific shape is also fed as input.
A _weight-encoded_ neural implicit may forgo the latent vector and focus reconstruction accuracy on the details of a single shape.
arXiv Detail & Related papers (2020-09-17T23:10:19Z)
- Deep Geometric Texture Synthesis [83.9404865744028]
We propose a novel framework for synthesizing geometric textures.
It learns texture statistics from local neighborhoods of a single reference 3D model.
Our network displaces mesh vertices in any direction, enabling synthesis of geometric textures.
arXiv Detail & Related papers (2020-06-30T19:36:38Z)
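The attack setting shared by the abstract above and the entry "Geometric Adversarial Attacks and Defenses on 3D Point Clouds" can be summarized as an optimization over the input: perturb a clean source so that the victim autoencoder reconstructs a different target shape. Below is a generic sketch of that objective; the `autoencoder` callable, the Chamfer-style loss, the optimizer settings, and the direct vertex-space perturbation are illustrative assumptions rather than the method of either paper.
```python
# Generic sketch of a geometric attack objective on a shape autoencoder (PyTorch).
import torch

def chamfer_distance(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Symmetric Chamfer distance between point sets of shape (N, 3) and (M, 3)."""
    d = torch.cdist(a, b)                               # (N, M) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def geometric_attack(autoencoder, source, target, steps=200, lr=1e-2, reg=1.0):
    """Find a perturbed source whose reconstruction resembles the target shape."""
    delta = torch.zeros_like(source, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        reconstruction = autoencoder(source + delta)
        # Pull the reconstruction toward the target while keeping the input change small.
        loss = chamfer_distance(reconstruction, target) + reg * delta.pow(2).mean()
        loss.backward()
        optimizer.step()
    return (source + delta).detach()
```
In SAGA, the perturbation is parameterized in the spectral domain (as in the earlier sketch) rather than directly over vertex positions.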