Generating Band-Limited Adversarial Surfaces Using Neural Networks
- URL: http://arxiv.org/abs/2111.07424v1
- Date: Sun, 14 Nov 2021 19:16:05 GMT
- Title: Generating Band-Limited Adversarial Surfaces Using Neural Networks
- Authors: Roee Ben Shlomo, Yevgeniy Men, Ido Imanuel
- Abstract summary: Generating adversarial examples is the art of creating noise that is added to an input signal of a classifying neural network.
In this technical report we suggest a neural network that generates the attacks.
- Score: 0.9208007322096533
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating adversarial examples is the art of creating noise that is
added to an input signal of a classifying neural network so as to change the
network's classification, while keeping the noise as tenuous as possible. While
the subject is well-researched in the 2D regime, it lags behind in the 3D
regime, i.e. attacking a classifying network that operates on 3D point clouds
or meshes and, for example, classifies the pose of people's 3D scans. As of now,
the vast majority of papers that describe adversarial attacks in this regime
work by methods of optimization. In this technical report we suggest a neural
network that generates the attacks. This network utilizes PointNet's
architecture with some alterations. While the previous articles on which we
based our work have to optimize each shape separately, i.e. tailor an attack
from scratch for each individual input without any learning, we attempt to
create a unified model that can deduce the needed adversarial example with a
single forward run.
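The "band-limited" in the title refers to confining the perturbation to the low-frequency end of the mesh spectrum. As a rough illustration of what that means (a sketch under assumptions, not the authors' code: the combinatorial Laplacian and all helper names here are assumptions), vertex offsets expressed in the lowest k eigenvectors of the mesh's graph Laplacian are smooth by construction:

```python
# Minimal sketch of a band-limited vertex offset field, assuming a
# combinatorial graph Laplacian. Names are illustrative, not the paper's.
import numpy as np
from scipy.sparse import coo_matrix, diags
from scipy.sparse.linalg import eigsh

def graph_laplacian(n_vertices, edges):
    """Combinatorial Laplacian L = D - A of the mesh's vertex graph."""
    i, j = np.asarray(edges, dtype=int).T
    rows, cols = np.concatenate([i, j]), np.concatenate([j, i])
    adj = coo_matrix((np.ones(rows.size), (rows, cols)),
                     shape=(n_vertices, n_vertices)).tocsr()
    deg = np.asarray(adj.sum(axis=1)).ravel()
    return diags(deg) - adj

def band_limited_offsets(n_vertices, edges, coeffs):
    """Map spectral coefficients (k, 3) to smooth per-vertex offsets (n, 3).

    Only the k lowest-frequency eigenvectors are used, so the resulting
    perturbation cannot contain high-frequency (spiky) components.
    """
    k = coeffs.shape[0]
    L = graph_laplacian(n_vertices, edges)
    # The k smallest eigenpairs of L span the low-frequency band.
    _, phi = eigsh(L.asfptype(), k=k, which='SM')
    return phi @ coeffs
```

Based on the abstract, a PointNet-style generator would output something like `coeffs` in a single forward pass, replacing the per-shape optimization of earlier attacks; that exact interface is an assumption, not a confirmed detail of the paper.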
Related papers
- Learning Robust 3D Representation from CLIP via Dual Denoising [4.230780744307392]
We propose Dual Denoising, a novel framework for learning robust and well-generalized 3D representations from CLIP.
It combines a denoising-based proxy task with a novel feature denoising network for 3D pre-training.
Experiments show that our model can effectively improve the representation learning performance and adversarial robustness of the 3D learning network.
arXiv Detail & Related papers (2024-07-01T02:15:03Z)
- Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation [52.72682366640554]
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else.
It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier by either concealing their writing style, or by imitating the style of another author.
arXiv Detail & Related papers (2024-03-17T16:36:26Z)
- Occlusion Resilient 3D Human Pose Estimation [52.49366182230432]
Occlusions remain one of the key challenges in 3D body pose estimation from single-camera video sequences.
We demonstrate the effectiveness of this approach compared to state-of-the-art techniques that infer poses from single-camera sequences.
arXiv Detail & Related papers (2024-02-16T19:29:43Z)
- SAGA: Spectral Adversarial Geometric Attack on 3D Meshes [13.84270434088512]
A triangular mesh is one of the most popular 3D data representations.
We propose a novel framework for a geometric adversarial attack on a 3D mesh autoencoder.
arXiv Detail & Related papers (2022-11-24T19:29:04Z)
- NeuralReshaper: Single-image Human-body Retouching with Deep Neural Networks [50.40798258968408]
We present NeuralReshaper, a novel method for semantic reshaping of human bodies in single images using deep generative networks.
Our approach follows a fit-then-reshape pipeline, which first fits a parametric 3D human model to a source human image.
To deal with the lack of paired training data, we introduce a novel self-supervised strategy to train our network.
arXiv Detail & Related papers (2022-03-20T09:02:13Z)
- Random Walks for Adversarial Meshes [12.922946578413578]
This paper proposes a novel, unified, and general adversarial attack on mesh classification neural networks.
Our attack approach is black-box, i.e. it has access only to the network's predictions, but not to the network's full architecture or gradients.
arXiv Detail & Related papers (2022-02-15T14:31:17Z)
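For context on the black-box setting described in the Random Walks entry above, a query-only attacker observes class predictions but has no gradients, so it must search for a perturbation by trial. The generic random-search loop below illustrates that setting only; it is not the paper's random-walk algorithm, and `predict_proba` is a hypothetical query interface:

```python
# Hypothetical query-only attack loop: illustrates "access only to the
# network's predictions", not the paper's actual random-walk method.
import numpy as np

def black_box_attack(vertices, true_label, predict_proba,
                     step=1e-3, budget=1000, rng=None):
    """Random search: keep a candidate step only if it lowers the
    probability the model assigns to the true label."""
    rng = rng or np.random.default_rng(0)
    adv = vertices.copy()
    best = predict_proba(adv)[true_label]    # only predictions are used
    for _ in range(budget):
        candidate = adv + step * rng.standard_normal(adv.shape)
        p = predict_proba(candidate)[true_label]
        if p < best:                         # greedy: accept improvements
            adv, best = candidate, p
        if np.argmax(predict_proba(adv)) != true_label:
            break                            # misclassified: attack done
    return adv
```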
- Scene Synthesis via Uncertainty-Driven Attribute Synchronization [52.31834816911887]
This paper introduces a novel neural scene synthesis approach that can capture diverse feature patterns of 3D scenes.
Our method combines the strength of both neural network-based and conventional scene synthesis approaches.
arXiv Detail & Related papers (2021-08-30T19:45:07Z)
- Geometric Adversarial Attacks and Defenses on 3D Point Clouds [25.760935151452063]
In this work, we explore adversarial examples at a geometric level.
That is, a small change to a clean source point cloud leads, after passing through an autoencoder model, to a shape from a different target class.
On the defense side, we show that remnants of the attack's target shape are still present at the reconstructed output after applying the defense to the adversarial input.
arXiv Detail & Related papers (2020-12-10T13:30:06Z)
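The geometric attack described above steers an autoencoder's reconstruction toward a target class while staying close to the clean source cloud. One plausible way to write such an objective (a sketch only; the paper's exact losses, weights, and the `ae` interface are assumptions) uses a Chamfer distance between point sets:

```python
# Sketch of a geometric autoencoder-attack objective in PyTorch; the
# loss weights and distance choices are illustrative, not the paper's.
import torch

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a: (n, 3), b: (m, 3)."""
    d = torch.cdist(a, b)                      # (n, m) pairwise distances
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

def attack_loss(x_adv, x_source, target_shape, ae, lam=1.0):
    # Stay close to the clean source cloud ...
    stealth = chamfer(x_adv, x_source)
    # ... while the autoencoder reconstructs the target-class shape.
    hit_target = chamfer(ae(x_adv), target_shape)
    return stealth + lam * hit_target
```

Minimizing this over `x_adv` (e.g. with Adam) would yield the kind of adversarial input the entry describes: nearly the source shape, reconstructed as the target.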
- IF-Defense: 3D Adversarial Point Cloud Defense via Implicit Function based Restoration [68.88711148515682]
Deep neural networks are vulnerable to various 3D adversarial attacks.
We propose an IF-Defense framework to directly optimize the coordinates of input points with geometry-aware and distribution-aware constraints.
Our results show that IF-Defense achieves the state-of-the-art defense performance against existing 3D adversarial attacks on PointNet, PointNet++, DGCNN, PointConv and RS-CNN.
arXiv Detail & Related papers (2020-10-11T15:36:40Z)
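IF-Defense is described above as directly optimizing the coordinates of the input points under geometry-aware and distribution-aware constraints. A bare-bones version of such a restoration loop might look like the sketch below; the two penalty terms are stubs standing in for the paper's actual implicit-function-based losses:

```python
# Bare-bones input-restoration loop in the spirit of IF-Defense; the
# two loss callables are stand-ins for the paper's geometry-aware and
# distribution-aware constraints, which come from an implicit network.
import torch

def restore_points(points, geometry_loss, distribution_loss,
                   steps=200, lr=1e-2):
    """Optimize the point coordinates themselves, not model weights."""
    pts = points.clone().requires_grad_(True)
    opt = torch.optim.Adam([pts], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # geometry_loss: pull points back onto an estimated clean surface.
        # distribution_loss: keep points evenly spread over that surface.
        loss = geometry_loss(pts) + distribution_loss(pts)
        loss.backward()
        opt.step()
    return pts.detach()
```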
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained models and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
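The patch-wise idea above contrasts with pixel-wise updates: each iteration distributes the perturbation over local regions rather than individual pixels. One simple way to approximate this, smoothing the gradient over patches with average pooling before the sign step, is sketched below; it is an illustration, not the paper's exact kernel-based scheme:

```python
# Illustrative patch-wise iterative attack: gradients are smoothed over
# local patches before the sign step. Average pooling approximates the
# patch-wise idea; the paper's own projection kernel may differ.
import torch
import torch.nn.functional as F

def patchwise_attack(model, x, y, eps=8/255, alpha=2/255,
                     steps=10, patch=7):
    adv = x.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), y)
        grad, = torch.autograd.grad(loss, adv)
        # Spread each pixel's gradient over a patch-sized neighborhood.
        grad = F.avg_pool2d(grad, patch, stride=1, padding=patch // 2)
        adv = adv.detach() + alpha * grad.sign()
        # Project back into the epsilon-ball and the valid pixel range.
        adv = x + (adv - x).clamp(-eps, eps)
        adv = adv.clamp(0, 1)
    return adv
```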
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.