Robust SleepNets
- URL: http://arxiv.org/abs/2102.12555v1
- Date: Wed, 24 Feb 2021 20:48:13 GMT
- Title: Robust SleepNets
- Authors: Yigit Alparslan and Edward Kim
- Abstract summary: In this study, we investigate eye closedness detection to prevent vehicle accidents related to driver disengagements and driver drowsiness.
We develop two models to detect eye closedness: the first on eye images and the second on face images.
We adversarially attack the models with the Projected Gradient Descent, Fast Gradient Sign and DeepFool methods and report the adversarial success rates.
- Score: 7.23389716633927
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: State-of-the-art convolutional neural networks excel in machine
learning tasks such as face recognition and object classification, but suffer
significantly when adversarial attacks are present. It is crucial that
mission-critical systems, where machine learning models are deployed, use
robust models that can handle a wide range of real-world variability as well
as malicious actors who may mount adversarial attacks. In this study, we
investigate eye closedness detection to prevent vehicle accidents related to
driver disengagements and driver drowsiness. Specifically, we focus on
adversarial attacks in this application domain, but emphasize that the
methodology can be applied to many other domains. We develop two models to
detect eye closedness: the first on eye images and the second on face images.
We adversarially attack the models with the Projected Gradient Descent, Fast
Gradient Sign and DeepFool methods and report the adversarial success rates.
We also study the effect of training data augmentation. Finally, we
adversarially train the same models on perturbed images and report the success
rate of the defense against these attacks. We hope our study lays the
groundwork for preventing potential vehicle accidents by capturing drivers'
face images and alerting them when their eyes are closed due to drowsiness.
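As a rough illustration of the pipeline described in the abstract, here is a minimal sketch, not the authors' implementation: it assumes PyTorch and a hypothetical `model` that maps a batch of images in [0, 1] to two logits (eye open / eye closed), and shows a Fast Gradient Sign attack together with one adversarial-training step on a mix of clean and perturbed images; the epsilon value and the mixing ratio are placeholders.

```python
# Minimal sketch, not the authors' code: FGSM attack + one adversarial-training
# step for a hypothetical eye-closedness classifier (two output logits).
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.03):
    """Fast Gradient Sign Method: step each pixel in the sign of the
    gradient of the loss with respect to the input."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    return (images + eps * images.grad.sign()).clamp(0, 1).detach()

def adversarial_training_step(model, optimizer, images, labels, eps=0.03):
    """One optimizer step on a 50/50 mix of clean and FGSM-perturbed images."""
    adv = fgsm_attack(model, images, labels, eps)
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(images), labels)
                  + F.cross_entropy(model(adv), labels))
    loss.backward()
    optimizer.step()
    return loss.item()
```

Projected Gradient Descent would iterate a similar gradient step several times while projecting back into the epsilon-ball, whereas DeepFool searches for a minimal perturbation that crosses the decision boundary.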
Related papers
- Understanding Deep Learning defenses Against Adversarial Examples Through Visualizations for Dynamic Risk Assessment [0.0]
Adversarial training, dimensionality reduction and prediction similarity were selected as defenses against adversarial example attacks.
For each defense, the behavior of the original model is compared with that of the defended model, with the target model represented as a graph in a visualization.
arXiv Detail & Related papers (2024-02-12T09:05:01Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- Btech thesis report on adversarial attack detection and purification of adverserially attacked images [0.0]
This thesis report covers the detection and purification of adversarially attacked images.
A deep learning model is trained on certain training examples for various tasks such as classification and regression.
arXiv Detail & Related papers (2022-05-09T09:24:11Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat to the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve attackers' ability to induce pixel misclassifications.
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Automating Defense Against Adversarial Attacks: Discovery of Vulnerabilities and Application of Multi-INT Imagery to Protect Deployed Models [0.0]
We evaluate the use of multi-spectral image arrays and ensemble learners to combat adversarial attacks.
In rough analogy to defending cyber-networks, we combine techniques from both offensive ("red team") and defensive ("blue team") approaches.
arXiv Detail & Related papers (2021-03-29T19:07:55Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles [69.9674326582747]
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- An Empirical Review of Adversarial Defenses [0.913755431537592]
Deep neural networks, which form the basis of such systems, are highly susceptible to a specific class of attacks called adversarial attacks.
A hacker can, even with minimal computation, generate adversarial examples (images or data points that belong to another class but consistently fool the model into misclassifying them as genuine) and undermine the basis of such algorithms.
We present two effective techniques, Dropout and Denoising Autoencoders, and demonstrate their success in preventing such attacks from fooling the model.
arXiv Detail & Related papers (2020-12-10T09:34:41Z)
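As a rough sketch of the Denoising Autoencoder defense named in the entry above (an assumed PyTorch layout, not that paper's code): inputs are passed through a small autoencoder trained to reconstruct clean images, so much of an adversarial perturbation is removed before a separately trained classifier sees them. The layer sizes and single-channel input are illustrative placeholders.

```python
# Hypothetical denoising-autoencoder "purification" front end; sizes are
# illustrative and `classifier` is assumed to be a separately trained model.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(32, channels, 3, stride=2, padding=1, output_padding=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # Reconstruct a "clean" version of the (possibly perturbed) input.
        return self.decoder(self.encoder(x))

def defended_predict(autoencoder, classifier, images):
    """Denoise the inputs before classifying them."""
    autoencoder.eval()
    with torch.no_grad():
        return classifier(autoencoder(images))
```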
- Online Alternate Generator against Adversarial Attacks [144.45529828523408]
Deep learning models are notoriously sensitive to adversarial examples, which are synthesized by adding quasi-perceptible noises to real images.
We propose a portable defense method, online alternate generator, which does not need to access or modify the parameters of the target networks.
The proposed method works by online synthesizing another image from scratch for an input image, instead of removing or destroying adversarial noises.
arXiv Detail & Related papers (2020-09-17T07:11:16Z)
- Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden models against adversarial attacks.
We propose a novel method that adds noise and uses an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z)
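The detection idea in the last entry, adding noise and checking for inconsistency, can be sketched as follows. This is a simplified, hypothetical reading that omits the paper's saliency-map component; it assumes a PyTorch `model` returning class logits, and the noise scale, sample count, and threshold are placeholders to be tuned on held-out data.

```python
# Hypothetical inconsistency-based detector: adversarial inputs tend to be
# more sensitive to small random perturbations than clean inputs, so a large
# prediction shift under noise is flagged as suspicious.
import torch
import torch.nn.functional as F

def inconsistency_score(model, images, sigma=0.02, n_samples=8):
    """Mean KL divergence between predictions on the inputs and on
    lightly noised copies of them."""
    model.eval()
    with torch.no_grad():
        p_clean = F.softmax(model(images), dim=1)
        scores = []
        for _ in range(n_samples):
            noisy = (images + sigma * torch.randn_like(images)).clamp(0, 1)
            log_p_noisy = F.log_softmax(model(noisy), dim=1)
            scores.append(F.kl_div(log_p_noisy, p_clean, reduction="batchmean"))
    return torch.stack(scores).mean().item()

def looks_adversarial(model, images, threshold=0.1):
    """Flag a batch whose inconsistency score exceeds a tuned threshold."""
    return inconsistency_score(model, images) > threshold
```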
This list is automatically generated from the titles and abstracts of the papers on this site.