Interactive Analysis of CNN Robustness
- URL: http://arxiv.org/abs/2110.07667v1
- Date: Thu, 14 Oct 2021 18:52:39 GMT
- Title: Interactive Analysis of CNN Robustness
- Authors: Stefan Sietzen, Mathias Lechner, Judy Borowski, Ramin Hasani, Manuela Waldner
- Abstract summary: Perturber is a web-based application that allows users to explore how CNN activations and predictions evolve when a 3D input scene is interactively perturbed.
Perturber offers a large variety of scene modifications, such as camera controls, lighting and shading effects, background modifications, object morphing, as well as adversarial attacks.
Case studies with machine learning experts have shown that Perturber helps users to quickly generate hypotheses about model vulnerabilities and to qualitatively compare model behavior.
- Score: 11.136837582678869
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While convolutional neural networks (CNNs) have found wide adoption as
state-of-the-art models for image-related tasks, their predictions are often
highly sensitive to small input perturbations, which human vision is robust
against. This paper presents Perturber, a web-based application that allows
users to instantaneously explore how CNN activations and predictions evolve
when a 3D input scene is interactively perturbed. Perturber offers a large
variety of scene modifications, such as camera controls, lighting and shading
effects, background modifications, object morphing, as well as adversarial
attacks, to facilitate the discovery of potential vulnerabilities. Fine-tuned
model versions can be directly compared for qualitative evaluation of their
robustness. Case studies with machine learning experts have shown that
Perturber helps users to quickly generate hypotheses about model
vulnerabilities and to qualitatively compare model behavior. Using quantitative
analyses, we could replicate users' insights with other CNN architectures and
input images, yielding new insights about the vulnerability of adversarially
trained models.
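As a concrete illustration of the kind of comparison Perturber enables, the following minimal sketch (not Perturber's own implementation, which renders the 3D scene and runs the models directly in the browser) applies a simple brightness perturbation to an image and compares a pretrained torchvision ResNet-50's top-1 prediction and an intermediate activation before and after the change; the file name and perturbation strength are placeholders.

```python
# Minimal sketch (not Perturber itself): perturb an input image and compare
# the predictions and an intermediate activation of a pretrained CNN.
import torch
import torchvision.transforms.functional as TF
from torchvision import models, transforms
from PIL import Image

# A standard pretrained model; Perturber additionally loads fine-tuned
# (e.g., adversarially trained) variants for side-by-side comparison.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

activations = {}
def hook(_module, _inp, out):
    # Store an intermediate feature map so it can be inspected per input.
    activations["layer3"] = out.detach()

model.layer3.register_forward_hook(hook)

def predict(img):
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return logits.softmax(dim=1).topk(1)

img = Image.open("scene.png").convert("RGB")   # placeholder rendered scene
perturbed = TF.adjust_brightness(img, 1.8)      # one of many possible perturbations

for name, x in [("original", img), ("perturbed", perturbed)]:
    probs, classes = predict(x)
    act = activations["layer3"]
    print(name, "top-1 class:", classes.item(),
          "confidence: %.3f" % probs.item(),
          "layer3 mean activation: %.4f" % act.mean().item())
```

Perturber extends this idea to interactive camera, lighting, background, morphing, and adversarial perturbations, and to side-by-side comparison of fine-tuned model variants.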
Related papers
- Fooling Neural Networks for Motion Forecasting via Adversarial Attacks [0.0]
We show that, similar to earlier CNN models, motion forecasting models are susceptible to small perturbations and simple 3D transformations.
arXiv Detail & Related papers (2024-03-07T23:44:10Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of the transferability of adversarial examples across deep neural networks.
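To make the transfer setting concrete, the sketch below (a generic illustration, not code from the survey) crafts a one-step FGSM perturbation on a surrogate ResNet-18 and checks whether it changes the prediction of a separate ResNet-50 target whose gradients are never queried; the input tensor and perturbation budget are placeholders.

```python
# Generic sketch of a transfer attack: craft an FGSM perturbation on a
# surrogate model and apply it to a different target model without ever
# querying the target's gradients (the black-box transfer setting).
import torch
import torch.nn.functional as F
from torchvision import models

surrogate = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
target = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

def fgsm(model, x, y, eps):
    """One-step FGSM: move x in the direction of the loss gradient's sign."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 224, 224)        # placeholder input; a real image goes here
y = surrogate(x).argmax(dim=1)        # use the surrogate's own prediction as label

x_adv = fgsm(surrogate, x, y, eps=8 / 255)

# If the perturbation transfers, the target's prediction changes even though
# the attack never touched the target model.
print("target on clean:", target(x).argmax(dim=1).item())
print("target on adversarial:", target(x_adv).argmax(dim=1).item())
```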
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Interpretable Computer Vision Models through Adversarial Training: Unveiling the Robustness-Interpretability Connection [0.0]
Interpretability is as essential as robustness when we deploy models in the real world.
Standard models, compared to robust ones, are more susceptible to adversarial attacks, and their learned representations are less meaningful to humans.
arXiv Detail & Related papers (2023-07-04T13:51:55Z)
- Robust Graph Representation Learning via Predictive Coding [46.22695915912123]
Predictive coding is a message-passing framework initially developed to model information processing in the brain.
In this work, we build models that rely on the message-passing rule of predictive coding.
We show that the proposed models are comparable to standard ones in terms of performance in both inductive and transductive tasks.
arXiv Detail & Related papers (2022-12-09T03:58:22Z)
- NCTV: Neural Clamping Toolkit and Visualization for Neural Network Calibration [66.22668336495175]
Neural networks deployed without consideration for calibration will not gain trust from humans.
We introduce the Neural Clamping Toolkit, the first open-source framework designed to help developers employ state-of-the-art model-agnostic calibrated models.
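The toolkit's actual API is not reproduced here; as a generic, hedged illustration of what post-hoc calibration involves, the sketch below fits a single temperature on held-out logits (standard temperature scaling), with placeholder data.

```python
# Generic post-hoc calibration sketch (temperature scaling), shown only to
# illustrate the calibration problem; this is NOT the Neural Clamping Toolkit's API.
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=200, lr=0.01):
    """Fit a single temperature T > 0 that minimizes NLL on held-out data."""
    log_t = torch.zeros(1, requires_grad=True)   # T = exp(log_t) keeps T positive
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return log_t.exp().item()

# Placeholder held-out logits/labels; in practice these come from a validation set.
logits = torch.randn(512, 10) * 3.0
labels = torch.randint(0, 10, (512,))

T = fit_temperature(logits, labels)
calibrated_probs = (logits / T).softmax(dim=1)
print("fitted temperature:", round(T, 3))
```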
arXiv Detail & Related papers (2022-11-29T15:03:05Z)
- Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of backbone CNNs that already achieve satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z)
- Physical world assistive signals for deep neural network classifiers -- neither defense nor attack [23.138996515998347]
We introduce the concept of Assistive Signals, which are optimized to improve a model's confidence score regardless of whether it is under attack.
Experimental evaluations show that the assistive signals generated by our optimization method increase the accuracy and confidence of deep models.
We discuss how these insights can be exploited to rethink, or avoid, patterns that might contribute to, or degrade, the detectability of objects in the real world.
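As a hedged illustration of the general idea (not the paper's optimization method), the sketch below runs a few gradient steps on a small additive signal so that a pretrained classifier becomes more confident in a chosen class, i.e., the opposite objective of an adversarial attack; the model, input, budget, and step count are assumptions.

```python
# Illustrative sketch: optimize a small additive "assistive" signal that
# *increases* a model's confidence in the correct class.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()

x = torch.rand(1, 3, 224, 224)            # placeholder image
y = model(x).argmax(dim=1)                # treat the current prediction as the true class

delta = torch.zeros_like(x, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 8 / 255                             # assumed budget on the signal's magnitude

for _ in range(20):
    opt.zero_grad()
    # Minimizing cross-entropy toward y pushes confidence in y upward.
    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)           # keep the assistive signal small

conf = model((x + delta).clamp(0, 1)).softmax(dim=1)[0, y].item()
print("confidence in class", y.item(), "after assistive signal: %.3f" % conf)
```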
arXiv Detail & Related papers (2021-05-03T04:02:48Z)
- On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Detecting weapons and aggressive behavior in live video can enable rapid detection and prevention of potentially deadly incidents.
One way of achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks that detect firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.