Investigating Weight-Perturbed Deep Neural Networks With Application in
Iris Presentation Attack Detection
- URL: http://arxiv.org/abs/2311.12764v2
- Date: Wed, 22 Nov 2023 18:52:11 GMT
- Title: Investigating Weight-Perturbed Deep Neural Networks With Application in
Iris Presentation Attack Detection
- Authors: Renu Sharma, Redwan Sony, Arun Ross
- Abstract summary: We assess the sensitivity of deep neural networks against perturbations to their weight and bias parameters.
We propose improved models simply by perturbing the parameters of the network, without any retraining.
The ensemble at the parameter-level shows an average improvement of 43.58% on the LivDet-Iris-2017 dataset and 9.25% on the LivDet-Iris-2020 dataset.
- Score: 11.209470024746683
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep neural networks (DNNs) exhibit superior performance in various machine
learning tasks such as image classification, speech recognition, biometric
recognition, and object detection. However, it is essential to analyze their
sensitivity to parameter perturbations before deploying them in real-world
applications. In this work, we assess the sensitivity of DNNs against
perturbations to their weight and bias parameters. The sensitivity analysis
involves three DNN architectures (VGG, ResNet, and DenseNet), three types of
parameter perturbations (Gaussian noise, weight zeroing, and weight scaling),
and two settings (entire network and layer-wise). We perform experiments in the
context of iris presentation attack detection and evaluate on two publicly
available datasets: LivDet-Iris-2017 and LivDet-Iris-2020. Based on the
sensitivity analysis, we propose improved models simply by perturbing the
parameters of the network, without any retraining. We further combine these
perturbed models at the score-level and at the parameter-level to improve the
performance over the original model. The ensemble at the parameter-level shows
an average improvement of 43.58% on the LivDet-Iris-2017 dataset and 9.25% on
the LivDet-Iris-2020 dataset. The source code is available at
https://github.com/redwankarimsony/WeightPerturbation-MSU.
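The three perturbation types are simple to reproduce. Below is a minimal PyTorch sketch of the entire-network setting; it illustrates the idea rather than reproducing the authors' implementation (see the linked repository for that), and the magnitudes sigma, zero_frac, and scale are illustrative assumptions.

```python
import copy

import torch
import torch.nn as nn


@torch.no_grad()
def perturb_parameters(model: nn.Module, mode: str,
                       sigma: float = 0.01,
                       zero_frac: float = 0.01,
                       scale: float = 1.05) -> nn.Module:
    """Return a perturbed copy of `model`. Magnitudes are illustrative
    assumptions, not the values tuned in the paper."""
    perturbed = copy.deepcopy(model)
    for param in perturbed.parameters():
        if mode == "gaussian":
            # Additive Gaussian noise on every weight and bias.
            param.add_(torch.randn_like(param) * sigma)
        elif mode == "zeroing":
            # Zero out a random fraction of the parameters.
            mask = torch.rand_like(param) < zero_frac
            param.masked_fill_(mask, 0.0)
        elif mode == "scaling":
            # Multiply all parameters by a constant factor.
            param.mul_(scale)
        else:
            raise ValueError(f"unknown perturbation mode: {mode!r}")
    return perturbed
```

The layer-wise setting would instead iterate over model.named_parameters() and perturb only the tensors belonging to one layer at a time.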
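The two ensembling strategies can be sketched the same way (continuing with the imports above, and again as an assumption-labeled illustration rather than the repository's code): score-level ensembling averages model outputs, while parameter-level ensembling averages the weights of identically structured perturbed models into a single network.

```python
@torch.no_grad()
def score_level_ensemble(models: list[nn.Module], x: torch.Tensor) -> torch.Tensor:
    """Average the output scores of several perturbed models."""
    return torch.stack([m(x) for m in models]).mean(dim=0)


@torch.no_grad()
def parameter_level_ensemble(models: list[nn.Module]) -> nn.Module:
    """Fuse identically shaped models by averaging their parameters."""
    fused = copy.deepcopy(models[0])
    for fused_p, *model_ps in zip(fused.parameters(),
                                  *[m.parameters() for m in models]):
        fused_p.copy_(torch.stack(model_ps).mean(dim=0))
    return fused
```

A parameter-level ensemble has the practical advantage that inference costs the same as a single model, since the perturbed models are fused before deployment.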
Related papers
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find solutions reachable via the training procedure, including the optimizer and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- Improved Generalization of Weight Space Networks via Augmentations [53.87011906358727]
Learning in deep weight spaces (DWS) is an emerging research direction, with applications to 2D and 3D neural fields (INRs, NeRFs).
We empirically analyze the reasons for this overfitting and find that a key reason is the lack of diversity in DWS datasets.
To address this, we explore strategies for data augmentation in weight spaces and propose a MixUp method adapted for weight spaces (a sketch follows below).
arXiv Detail & Related papers (2024-02-06T15:34:44Z)
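For intuition, a minimal sketch of what MixUp looks like when lifted to weight space: sample a mixing coefficient from a Beta distribution and interpolate the parameters of two identically structured networks. This is an illustration inferred from the summary above, with alpha as an assumed default, not the paper's actual augmentation pipeline.

```python
import copy

import torch
import torch.nn as nn


@torch.no_grad()
def weight_space_mixup(model_a: nn.Module, model_b: nn.Module,
                       alpha: float = 0.2):
    """Interpolate two identically shaped networks with a MixUp coefficient."""
    lam = float(torch.distributions.Beta(alpha, alpha).sample())
    mixed = copy.deepcopy(model_a)
    for p_mix, p_a, p_b in zip(mixed.parameters(),
                               model_a.parameters(),
                               model_b.parameters()):
        p_mix.copy_(lam * p_a + (1.0 - lam) * p_b)
    return mixed, lam  # lam can also be used to mix the two models' labels
```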
- Fragility, Robustness and Antifragility in Deep Learning [1.53744306569115]
We propose a systematic analysis of deep neural networks (DNNs) based on a signal processing technique for network parameter removal.
Our proposed analysis investigates whether DNN performance is impacted negatively, invariantly, or positively on both clean and adversarially perturbed test datasets.
We show that our synaptic filtering method improves the test accuracy of ResNet and ShuffleNet models on adversarial datasets when only the robust and antifragile parameters are selectively retrained (see the sketch below).
arXiv Detail & Related papers (2023-12-15T14:20:16Z)
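Read alongside the main paper's weight-zeroing experiments, here is a hedged sketch of the fragile/robust/antifragile idea: zero one parameter tensor at a time, remeasure accuracy, and label the tensor by the sign of the change. Here evaluate is a hypothetical accuracy hook and tol an assumed tolerance; this is not the paper's synaptic-filtering procedure.

```python
import torch
import torch.nn as nn


def classify_parameter_tensors(model: nn.Module, evaluate, tol: float = 0.002):
    """Label each named parameter tensor by the accuracy change its removal
    causes. `evaluate(model) -> float` is a hypothetical held-out accuracy."""
    baseline = evaluate(model)
    labels = {}
    for name, param in model.named_parameters():
        saved = param.detach().clone()
        with torch.no_grad():
            param.zero_()                 # temporarily remove this tensor
        delta = evaluate(model) - baseline
        with torch.no_grad():
            param.copy_(saved)            # restore the original weights
        if delta < -tol:
            labels[name] = "fragile"      # removal hurts accuracy
        elif delta > tol:
            labels[name] = "antifragile"  # removal improves accuracy
        else:
            labels[name] = "robust"       # removal has little effect
    return labels
```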
- UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM [60.575435353047304]
We present an uncertainty learning framework for dense neural simultaneous localization and mapping (SLAM).
We propose an online framework for sensor uncertainty estimation that can be trained in a self-supervised manner from only 2D input data.
arXiv Detail & Related papers (2023-06-19T16:26:25Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Lost Vibration Test Data Recovery Using Convolutional Neural Network: A Case Study [0.0]
This paper proposes a CNN-based algorithm for recovering lost vibration data, using the Alamosa Canyon Bridge as a real-world case study.
Three different CNN models were considered to predict the signals of one or two malfunctioning sensors.
The accuracy of the model was increased by adding a convolutional layer.
arXiv Detail & Related papers (2022-04-11T23:24:03Z)
- A Comprehensive Study of Image Classification Model Sensitivity to Foregrounds, Backgrounds, and Visual Attributes [58.633364000258645]
We introduce the RIVAL10 dataset, consisting of roughly 26k instances over 10 classes.
We evaluate the sensitivity of a broad set of models to noise corruptions in foregrounds, backgrounds and attributes.
In our analysis, we consider diverse state-of-the-art architectures (ResNets, Transformers) and training procedures (CLIP, SimCLR, DeiT, Adversarial Training).
arXiv Detail & Related papers (2022-01-26T06:31:28Z)
- Compact Multi-level Sparse Neural Networks with Input Independent Dynamic Rerouting [33.35713740886292]
Sparse deep neural networks can substantially reduce the complexity and memory consumption of the models.
To meet real-life deployment challenges, we propose to train a sparse model that supports multiple sparsity levels.
In this way, one can dynamically select the appropriate sparsity level during inference, while the storage cost is capped by the least sparse sub-model (a sketch of the idea follows below).
arXiv Detail & Related papers (2021-12-21T01:35:51Z)
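One plausible reading of a single model that "supports multiple sparsity levels" is a set of nested magnitude-based masks, with the level chosen at inference time. The sketch below is that reading only, with assumed sparsity values, not the paper's training scheme; masks are applied to a copy so the dense weights are preserved.

```python
import copy

import torch
import torch.nn as nn


def magnitude_mask(param: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Binary mask keeping the largest-magnitude (1 - sparsity) fraction."""
    k = max(1, int(param.numel() * (1.0 - sparsity)))
    threshold = torch.topk(param.abs().flatten(), k).values.min()
    return (param.abs() >= threshold).float()


@torch.no_grad()
def at_sparsity_level(model: nn.Module, sparsity: float) -> nn.Module:
    """Return a masked copy of `model` at the requested sparsity level.
    Masks are nested: higher-sparsity masks keep a subset of the weights
    kept by lower-sparsity masks (up to ties in magnitude)."""
    sparse = copy.deepcopy(model)
    for param in sparse.parameters():
        param.mul_(magnitude_mask(param, sparsity))
    return sparse
```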
- RANP: Resource Aware Neuron Pruning at Initialization for 3D CNNs [32.054160078692036]
We introduce a Resource Aware Neuron Pruning (RANP) algorithm that prunes 3D CNNs to high sparsity levels.
Our algorithm leads to roughly 50%-95% reduction in FLOPs and 35%-80% reduction in memory with negligible loss in accuracy compared to the unpruned networks.
arXiv Detail & Related papers (2021-02-09T04:35:29Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (ResNet) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- NeuralScale: Efficient Scaling of Neurons for Resource-Constrained Deep Neural Networks [16.518667634574026]
We search for the neuron (filter) configuration of a fixed network architecture that maximizes accuracy.
We parameterize the change of the neuron (filter) number of each layer with respect to the change in parameters, allowing us to efficiently scale an architecture across arbitrary sizes.
arXiv Detail & Related papers (2020-06-23T08:14:02Z)