Benchmarking the Robustness of Spatial-Temporal Models Against
Corruptions
- URL: http://arxiv.org/abs/2110.06513v1
- Date: Wed, 13 Oct 2021 05:59:39 GMT
- Title: Benchmarking the Robustness of Spatial-Temporal Models Against
Corruptions
- Authors: Chenyu Yi, Siyuan Yang, Haoliang Li, Yap-peng Tan, Alex Kot
- Abstract summary: We establish a corruption robustness benchmark, Mini Kinetics-C and Mini SSV2-C, which considers temporal corruptions beyond spatial corruptions in images.
We make the first attempt to conduct an exhaustive study on the corruption robustness of established CNN-based and Transformer-based spatial-temporal models.
- Score: 32.821121530785504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The state-of-the-art deep neural networks are vulnerable to common
corruptions (e.g., input data degradations, distortions, and disturbances
caused by weather changes, system errors, and processing). While much progress
has been made in analyzing and improving the robustness of models in image
understanding, the robustness in video understanding is largely unexplored. In
this paper, we establish a corruption robustness benchmark, Mini Kinetics-C and
Mini SSV2-C, which considers temporal corruptions beyond spatial corruptions in
images. We make the first attempt to conduct an exhaustive study on the
corruption robustness of established CNN-based and Transformer-based
spatial-temporal models. The study provides some guidance on robust model
design and training: Transformer-based models perform better than CNN-based
models on corruption robustness; the generalization ability of spatial-temporal
models implies robustness against temporal corruptions; model corruption
robustness (especially robustness in the temporal domain) improves with
computational cost and model capacity, which may contradict the current trend
of improving the computational efficiency of models. Moreover, we find the
robustness intervention for image-related tasks (e.g., training models with
noise) may not work for spatial-temporal models.
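The distinction the benchmark draws between spatial and temporal corruptions can be illustrated with a minimal sketch. This is not the benchmark's actual implementation; the function names and parameters are hypothetical, and the clip is a toy `(T, H, W, C)` float array in `[0, 1]`:

```python
import numpy as np

def spatial_corrupt(video, sigma=0.1, seed=0):
    """Spatial corruption: independent Gaussian noise added to every frame."""
    rng = np.random.default_rng(seed)
    noisy = video + rng.normal(0.0, sigma, size=video.shape)
    return np.clip(noisy, 0.0, 1.0)

def temporal_corrupt(video, drop_every=3):
    """Temporal corruption: drop every k-th frame and repeat its predecessor,
    so each frame stays clean but motion continuity is broken."""
    out = video.copy()
    for t in range(drop_every, out.shape[0], drop_every):
        out[t] = out[t - 1]
    return out

clip = np.random.default_rng(1).random((16, 32, 32, 3))  # 16 frames, 32x32 RGB
spatial = spatial_corrupt(clip)
temporal = temporal_corrupt(clip)
```

A model that only relies on per-frame appearance would be hurt mainly by the first transform; one that relies on motion cues would be hurt mainly by the second, which is why the two corruption families probe different failure modes.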
Related papers
- Towards Evaluating the Robustness of Visual State Space Models [63.14954591606638]
Vision State Space Models (VSSMs) have demonstrated remarkable performance in visual perception tasks.
However, their robustness under natural and adversarial perturbations remains a critical concern.
We present a comprehensive evaluation of VSSMs' robustness under various perturbation scenarios.
arXiv Detail & Related papers (2024-06-13T17:59:44Z)
- Adversarial Fine-tuning of Compressed Neural Networks for Joint Improvement of Robustness and Efficiency [3.3490724063380215]
Adversarial training has been presented as a mitigation strategy that can result in more robust models.
We explore the effects of two different model compression methods -- structured weight pruning and quantization -- on adversarial robustness.
We show that adversarial fine-tuning of compressed models can achieve robustness performance comparable to adversarially trained models.
arXiv Detail & Related papers (2024-03-14T14:34:25Z)
- Learning Robust Precipitation Forecaster by Temporal Frame Interpolation [65.5045412005064]
We develop a robust precipitation forecasting model that demonstrates resilience against spatial-temporal discrepancies.
Our approach has led to significant improvements in forecasting precision, culminating in our model securing 1st place in the transfer learning leaderboard of the Weather4cast'23 competition.
arXiv Detail & Related papers (2023-11-30T08:22:08Z)
- Towards a robust and reliable deep learning approach for detection of
compact binary mergers in gravitational wave data [0.0]
We develop a deep learning model stage-wise and work towards improving its robustness and reliability.
We retrain the model in a novel framework involving a generative adversarial network (GAN).
Although absolute robustness is practically impossible to achieve, we demonstrate some fundamental improvements earned through such training.
arXiv Detail & Related papers (2023-06-20T18:00:05Z)
- A Survey on the Robustness of Computer Vision Models against Common
Corruptions [3.9858496473361402]
We present a comprehensive overview of methods that improve the robustness of computer vision models against common corruptions.
We release a unified benchmark framework to compare robustness performance on several datasets.
arXiv Detail & Related papers (2023-05-10T10:19:31Z)
- Robo3D: Towards Robust and Reliable 3D Perception against Corruptions [58.306694836881235]
We present Robo3D, the first comprehensive benchmark heading toward probing the robustness of 3D detectors and segmentors under out-of-distribution scenarios.
We consider eight corruption types stemming from severe weather conditions, external disturbances, and internal sensor failure.
We propose a density-insensitive training framework along with a simple flexible voxelization strategy to enhance the model resiliency.
arXiv Detail & Related papers (2023-03-30T17:59:17Z)
- Benchmarking Robustness in Neural Radiance Fields [22.631924719238963]
We analyze the robustness of NeRF-based novel view synthesis algorithms in the presence of different types of corruptions.
We find that NeRF-based models are significantly degraded in the presence of corruption, and are more sensitive to a different set of corruptions than image recognition models.
arXiv Detail & Related papers (2023-01-10T17:01:12Z)
- Robustness in Deep Learning for Computer Vision: Mind the gap? [13.576376492050185]
We identify, analyze, and summarize current definitions and progress towards non-adversarial robustness in deep learning for computer vision.
We find that this area of research has received disproportionately little attention relative to adversarial machine learning.
arXiv Detail & Related papers (2021-12-01T16:42:38Z)
- Improving robustness against common corruptions with frequency biased
models [112.65717928060195]
Unseen image corruptions can cause a surprisingly large drop in performance.
Image corruption types have different characteristics in the frequency spectrum and would benefit from a targeted type of data augmentation.
We propose a new regularization scheme that minimizes the total variation (TV) of convolution feature-maps to increase high-frequency robustness.
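The total-variation penalty this abstract refers to can be sketched in a few lines. This is an illustrative reading of the idea, not the paper's implementation; the function name and the `(N, C, H, W)` feature-map layout are assumptions:

```python
import numpy as np

def tv_penalty(fmap):
    """Total variation of a conv feature map of shape (N, C, H, W):
    the sum of absolute differences between spatially adjacent activations.
    Adding this to the loss discourages high-frequency feature responses."""
    dh = np.abs(fmap[..., 1:, :] - fmap[..., :-1, :]).sum()  # vertical diffs
    dw = np.abs(fmap[..., :, 1:] - fmap[..., :, :-1]).sum()  # horizontal diffs
    return dh + dw
```

A spatially constant feature map scores zero, while a map that oscillates pixel-to-pixel scores high, which is the sense in which minimizing TV biases the network toward low-frequency features.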
arXiv Detail & Related papers (2021-03-30T10:44:50Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
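Why joint perturbations differ from perturbing inputs or weights alone can be seen in a toy linear example. This is not the paper's formulation, only a hand-made lower bound on the score of a linear classifier under simultaneous L-infinity budgets on the input and the weights:

```python
import numpy as np

def worst_case_score_bound(w, x, eps_x, eps_w):
    """Lower bound on (w + dw) @ (x + dx) over |dx|_inf <= eps_x and
    |dw|_inf <= eps_w. Note the cross term eps_x * eps_w * dim, which
    neither single-perturbation analysis would produce on its own."""
    base = float(w @ x)
    input_term = eps_x * np.abs(w).sum()   # worst-case input shift
    weight_term = eps_w * np.abs(x).sum()  # worst-case weight shift
    cross_term = eps_x * eps_w * w.size    # interaction of the two budgets
    return base - input_term - weight_term - cross_term
```

The cross term is what makes the joint problem more than the sum of its parts: an adversary allowed to move both `x` and `w` can exploit their interaction.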
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a
Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks detecting fire-weapons via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.