Adversarial Attacks on Video Object Segmentation with Hard Region
Discovery
- URL: http://arxiv.org/abs/2309.13857v1
- Date: Mon, 25 Sep 2023 03:52:15 GMT
- Title: Adversarial Attacks on Video Object Segmentation with Hard Region
Discovery
- Authors: Ping Li and Yu Zhang and Li Yuan and Jian Zhao and Xianghua Xu and
Xiaoqin Zhang
- Abstract summary: Video object segmentation has been applied to various computer vision tasks, such as video editing, autonomous driving, and human-robot interaction.
Deep neural networks are vulnerable to adversarial examples, which are inputs corrupted by almost human-imperceptible perturbations.
This raises security issues in highly demanding tasks, because small perturbations to the input video can create attack risks.
- Score: 31.882369005280793
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Video object segmentation has been applied to various computer vision tasks,
such as video editing, autonomous driving, and human-robot interaction.
However, methods based on deep neural networks are vulnerable to adversarial
examples, i.e., inputs corrupted by almost human-imperceptible perturbations,
with which the adversary (i.e., attacker) can fool the segmentation model into
making incorrect pixel-level predictions. This raises security issues in
highly demanding tasks, because small perturbations to the input video can
create attack risks. Though adversarial examples have been studied extensively
for classification, they are rarely studied in video object segmentation.
Existing related methods in computer vision either
require prior knowledge of categories or cannot be directly applied due to the
special design for certain tasks, failing to consider the pixel-wise region
attack. Hence, this work develops an object-agnostic adversary that has
adversarial impacts on VOS by first-frame attacking via hard region discovery.
In particular, the gradients from the segmentation model are exploited to
discover easily confused regions, where it is difficult to distinguish the
pixel-wise objects from the background in a frame. This provides a hardness map
that helps to generate perturbations with a stronger adversarial power for
attacking the first frame. Empirical studies on three benchmarks indicate that
our attacker significantly degrades the performance of several state-of-the-art
video object segmentation models.
Related papers
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the adversarial transferability of adversarial examples.
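To make the transfer setting concrete, here is a hedged sketch: an adversarial example is crafted with one-step FGSM on a white-box surrogate and then evaluated on a separate black-box target model. The surrogate and target models, data loader, and budget `epsilon` are placeholders, not anything prescribed by the survey.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, epsilon=4 / 255):
    """One-step FGSM on a white-box surrogate model."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def transfer_success_rate(surrogate, target, loader, epsilon=4 / 255):
    """Fraction of adversarial examples crafted on `surrogate`
    that also fool the black-box `target` model."""
    fooled, total = 0, 0
    for x, y in loader:
        with torch.enable_grad():
            x_adv = fgsm(surrogate, x, y, epsilon)
        fooled += (target(x_adv).argmax(dim=1) != y).sum().item()
        total += y.numel()
    return fooled / total
```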
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
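As a rough illustration of the uncertainty-based idea (not the paper's exact detector), the sketch below scores an input by the mean per-pixel entropy of the segmentation softmax and flags it when the score exceeds a threshold calibrated on clean images; the statistic and threshold value are assumptions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def mean_pixel_entropy(seg_model, image):
    """Average per-pixel entropy of the softmax output of a segmentation model."""
    probs = F.softmax(seg_model(image), dim=1)                     # (1, C, H, W)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)   # (1, H, W)
    return entropy.mean().item()

def looks_adversarial(seg_model, image, threshold=0.5):
    """Flag inputs whose mean uncertainty exceeds a threshold calibrated
    on clean validation images (the default value is a placeholder)."""
    return mean_pixel_entropy(seg_model, image) > threshold
```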
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- To Make Yourself Invisible with Adversarial Semantic Contours [47.755808439588094]
Adversarial Semantic Contour (ASC) is an estimate of a Bayesian formulation of sparse attack with a deceived prior of object contour.
We show that ASC can corrupt the prediction of 9 modern detectors with different architectures.
We conclude with cautions that the contour is a common weakness of object detectors with various architectures.
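A simplified sketch of a contour-restricted sparse attack in the spirit of ASC is given below: the perturbation is optimised only at pixels on a contour mask derived from the object mask. The morphological contour extraction, the generic `detector_loss` callable, and the PGD settings are assumptions rather than the paper's formulation.

```python
import torch
import torch.nn.functional as F

def contour_mask(object_mask):
    """Binary contour of a (1, 1, H, W) float object mask via a morphological gradient."""
    dilated = F.max_pool2d(object_mask, kernel_size=3, stride=1, padding=1)
    eroded = -F.max_pool2d(-object_mask, kernel_size=3, stride=1, padding=1)
    return (dilated - eroded).clamp(0, 1)

def contour_sparse_attack(detector_loss, image, object_mask,
                          epsilon=16 / 255, steps=20):
    """PGD restricted to contour pixels; `detector_loss(img)` is assumed to
    return a scalar loss whose increase degrades the detector."""
    contour = contour_mask(object_mask)            # only these pixels are perturbed
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = detector_loss((image + delta * contour).clamp(0, 1))
        loss.backward()
        with torch.no_grad():
            delta += (epsilon / steps) * delta.grad.sign()
            delta.clamp_(-epsilon, epsilon)
        delta.grad.zero_()
    return (image + delta.detach() * contour).clamp(0, 1)
```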
arXiv Detail & Related papers (2023-03-01T07:22:39Z)
- Efficient Robustness Assessment via Adversarial Spatial-Temporal Focus on Videos [0.0]
We design the novel Adversarial spatial-temporal Focus (AstFocus) attack on videos, which performs attacks on simultaneously focused key frames and key regions.
Through continuous querying, the reduced search space composed of key frames and key regions becomes increasingly precise.
Experiments on four mainstream video recognition models and three widely used action recognition datasets demonstrate that the proposed AstFocus attack outperforms the SOTA methods.
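The sketch below conveys only the focusing idea at a high level: a black-box random-search attack that queries the video model while confining perturbations to chosen key frames and a key region. The crude candidate selection stands in for the paper's agent-based key-frame/key-region search and is purely illustrative.

```python
import torch

@torch.no_grad()
def focused_query_attack(video_model, video, label, key_frames, region_mask,
                         epsilon=8 / 255, queries=200):
    """Random-search black-box attack confined to key frames and a key region.
    video: (T, 3, H, W); key_frames: list of frame indices;
    region_mask: (1, H, W) binary mask of the key region."""
    best = video.clone()
    best_score = video_model(best.unsqueeze(0))[0, label]   # true-class confidence
    for _ in range(queries):
        candidate = best.clone()
        noise = epsilon * torch.randn_like(candidate[key_frames]).sign()
        candidate[key_frames] = (candidate[key_frames]
                                 + noise * region_mask).clamp(0, 1)
        score = video_model(candidate.unsqueeze(0))[0, label]
        if score < best_score:                      # keep moves that lower
            best, best_score = candidate, score     # the true-class confidence
    return best
```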
arXiv Detail & Related papers (2023-01-03T00:28:57Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
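For context, a hedged sketch of the generic patch-attack setup evaluated here: a square patch at a fixed location is optimised to increase pixel misclassification of a segmentation model. The simple negative cross-entropy objective is a placeholder, not the paper's proposed loss.

```python
import torch
import torch.nn.functional as F

def train_adversarial_patch(seg_model, loader, patch_size=64,
                            steps=500, lr=0.05, top_left=(0, 0)):
    """Optimise a single patch that, pasted at a fixed location, pushes
    the segmentation model toward pixel misclassification."""
    patch = torch.rand(1, 3, patch_size, patch_size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    y0, x0 = top_left
    for step, (image, mask) in zip(range(steps), loader):
        pasted = image.clone()
        pasted[:, :, y0:y0 + patch_size, x0:x0 + patch_size] = patch.clamp(0, 1)
        logits = seg_model(pasted)
        # Untargeted: maximise the loss w.r.t. the ground-truth mask.
        loss = -F.cross_entropy(logits, mask)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```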
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Video Salient Object Detection via Contrastive Features and Attention Modules [106.33219760012048]
We propose a network with attention modules to learn contrastive features for video salient object detection.
A co-attention formulation is utilized to combine the low-level and high-level features.
We show that the proposed method requires less computation, and performs favorably against the state-of-the-art approaches.
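One plausible way to realise the co-attention fusion mentioned above is sketched below, where each of the low-level and high-level feature maps is re-weighted by a gate computed from the other; equal channel counts and the gating form are assumptions, and the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn

class CoAttentionFusion(nn.Module):
    """Fuse low-level and high-level features: each stream is re-weighted by
    an attention map computed from the other stream (a generic co-attention)."""
    def __init__(self, channels):
        super().__init__()
        self.low_gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.high_gate = nn.Sequential(nn.Conv2d(channels, 1, 1), nn.Sigmoid())
        self.fuse = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, low_feat, high_feat):
        # Upsample high-level features to the low-level spatial size.
        high_feat = nn.functional.interpolate(
            high_feat, size=low_feat.shape[-2:], mode="bilinear",
            align_corners=False)
        low_attended = low_feat * self.high_gate(high_feat)    # high guides low
        high_attended = high_feat * self.low_gate(low_feat)    # low guides high
        return self.fuse(torch.cat([low_attended, high_attended], dim=1))
```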
arXiv Detail & Related papers (2021-11-03T17:40:32Z)
- State-of-the-art segmentation network fooled to segment a heart symbol in chest X-Ray images [5.808118248166566]
Adversarial attacks consist of maliciously changing the input data to mislead the predictions of automated decision systems.
We studied the effectiveness of adversarial attacks in targeted modification of segmentations of anatomical structures in chest X-rays.
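A minimal sketch of the targeted setting studied here, assuming white-box access: PGD drives the segmentation prediction toward an attacker-chosen target mask (e.g., one containing a heart-shaped region). Step size, budget, and iteration count are illustrative.

```python
import torch
import torch.nn.functional as F

def targeted_segmentation_attack(seg_model, image, target_mask,
                                 epsilon=4 / 255, alpha=1 / 255, steps=40):
    """PGD that pushes predictions toward `target_mask` ((1, H, W) class labels)."""
    x_adv = image.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(seg_model(x_adv), target_mask)
        loss.backward()
        with torch.no_grad():
            x_adv = x_adv - alpha * x_adv.grad.sign()          # descend: targeted
            x_adv = image + (x_adv - image).clamp(-epsilon, epsilon)
            x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()
```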
arXiv Detail & Related papers (2021-03-31T22:20:59Z)
- Red Carpet to Fight Club: Partially-supervised Domain Transfer for Face Recognition in Violent Videos [12.534785814117065]
We introduce the WildestFaces dataset to study cross-domain recognition under a variety of adverse conditions.
We establish a rigorous evaluation protocol for this clean-to-violent recognition task, and present a detailed analysis of the proposed dataset and the methods.
arXiv Detail & Related papers (2020-09-16T09:45:33Z)
- Over-the-Air Adversarial Flickering Attacks against Video Recognition Networks [54.82488484053263]
Deep neural networks for video classification may be subjected to adversarial manipulation.
We present a manipulation scheme for fooling video classifiers by introducing a flickering temporal perturbation.
The attack was implemented on several target models and the transferability of the attack was demonstrated.
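A condensed sketch of the flickering idea follows: the perturbation is a single RGB offset per frame, broadcast over all pixels so it appears as a temporal flicker, optimised to reduce the true-class score. The regularisation weight and optimiser are assumptions, and the over-the-air aspects of the paper are omitted.

```python
import torch
import torch.nn.functional as F

def flickering_attack(video_model, video, label, steps=100, lr=1e-2, beta=0.1):
    """video: (1, T, 3, H, W); label: (1,) true-class index. The perturbation is
    one RGB value per frame, broadcast over all pixels, i.e. a temporal flicker."""
    T = video.shape[1]
    delta = torch.zeros(1, T, 3, 1, 1, requires_grad=True)   # per-frame RGB offset
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (video + delta).clamp(0, 1)
        logits = video_model(adv)
        # Untargeted: reduce the true-class score while keeping the flicker small.
        loss = -F.cross_entropy(logits, label) + beta * delta.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (video + delta.detach()).clamp(0, 1)
```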
arXiv Detail & Related papers (2020-02-12T17:58:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.