Work In Progress: Safety and Robustness Verification of
Autoencoder-Based Regression Models using the NNV Tool
- URL: http://arxiv.org/abs/2207.06759v1
- Date: Thu, 14 Jul 2022 09:10:30 GMT
- Title: Work In Progress: Safety and Robustness Verification of
Autoencoder-Based Regression Models using the NNV Tool
- Authors: Neelanjana Pal (Department of Electrical and Computer Engineering,
Vanderbilt University, USA), Taylor T. Johnson (Department of Electrical and
Computer Engineering, Vanderbilt University, USA)
- Abstract summary: This work introduces robustness verification for autoencoder-based regression neural network (NN) models.
We introduce two definitions of robustness evaluation metrics for autoencoder-based regression models.
To the best of the authors' knowledge, this work-in-progress paper is the first to demonstrate reachability analysis of autoencoder-based NNs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work-in-progress paper introduces robustness verification for
autoencoder-based regression neural network (NN) models, following
state-of-the-art approaches for robustness verification of image classification
NNs. Despite the ongoing progress in developing verification methods for safety
and robustness in various deep neural networks (DNNs), robustness checking of
autoencoder models has not yet been considered. We explore this open area of
research and bridge this gap by extending existing robustness analysis methods
to such autoencoder networks. While autoencoder-based classification models
behave much like image classification NNs, the functionality of regression models is
distinctly different. We introduce two definitions of robustness evaluation
metrics for autoencoder-based regression models, specifically the percentage
robustness and the un-robustness grade. We also modify the existing ImageStar
approach, adjusting its variables to accommodate the specific input types of
regression networks. The approach is implemented as an extension of NNV and
evaluated on a dataset, with a case-study experiment on the same dataset. To
the best of the authors' knowledge, this work-in-progress paper is the first to
demonstrate reachability analysis of autoencoder-based NNs.
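
The abstract names the two evaluation metrics but does not define them, and the modified ImageStar analysis lives in NNV (a MATLAB tool). As a purely illustrative aid, the Python sketch below shows one plausible reading: "percentage robustness" as the fraction of test inputs whose regression output stays within a relative tolerance `eps` of the nominal output under input perturbations bounded by `delta`, and the "un-robustness grade" as the average amount by which the remaining inputs exceed that tolerance. The function name, parameters, and the sampling-based perturbation check are assumptions for illustration; in the paper's setting the worst-case deviation would instead come from the output bounds of the ImageStar reachability analysis.

```python
import numpy as np

def robustness_metrics(model, inputs, delta=0.01, eps=0.05, n_samples=50, seed=0):
    """Hypothetical sketch of the two metrics. Perturbations are drawn from an
    L-infinity ball of radius `delta` around each input; this sampling is only a
    stand-in for the set-based ImageStar reachability performed by NNV."""
    rng = np.random.default_rng(seed)
    robust_count, excesses = 0, []
    for x in inputs:
        y_nom = model(x)
        worst = 0.0
        for _ in range(n_samples):
            y_pert = model(x + rng.uniform(-delta, delta, size=x.shape))
            rel_dev = np.linalg.norm(y_pert - y_nom) / (np.linalg.norm(y_nom) + 1e-12)
            worst = max(worst, rel_dev)
        if worst <= eps:
            robust_count += 1             # output stayed within tolerance
        else:
            excesses.append(worst - eps)  # amount by which tolerance was exceeded
    percentage_robustness = 100.0 * robust_count / len(inputs)
    # Assumed reading of "un-robustness grade": mean excess deviation of the
    # non-robust inputs (zero when every input is robust).
    unrobustness_grade = float(np.mean(excesses)) if excesses else 0.0
    return percentage_robustness, unrobustness_grade

# Toy usage with a stand-in linear "encoder + regression head".
if __name__ == "__main__":
    W = np.random.default_rng(1).normal(size=(4, 16))
    model = lambda x: W @ x
    data = [np.random.default_rng(i).normal(size=16) for i in range(20)]
    print(robustness_metrics(model, data))
```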
Related papers
- AI-Aided Kalman Filters [65.35350122917914]
The Kalman filter (KF) and its variants are among the most celebrated algorithms in signal processing.
Recent developments illustrate the possibility of fusing deep neural networks (DNNs) with classic Kalman-type filtering.
This article provides a tutorial-style overview of design approaches for incorporating AI in aiding KF-type algorithms.
arXiv Detail & Related papers (2024-10-16T06:47:53Z)
- Exploring Cross-model Neuronal Correlations in the Context of Predicting Model Performance and Generalizability [2.6708879445664584]
This paper introduces a novel approach for assessing a newly trained model's performance based on another known model.
The proposed method evaluates correlations by determining if, for each neuron in one network, there exists a neuron in the other network that produces similar output.
arXiv Detail & Related papers (2024-08-15T22:57:39Z)
- Deep Graph Reprogramming [112.34663053130073]
"Deep graph reprogramming" is a model reusing task tailored for graph neural networks (GNNs)
We propose an innovative Data Reprogramming paradigm alongside a Model Reprogramming paradigm.
arXiv Detail & Related papers (2023-04-28T02:04:29Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of DNN-based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Adversarial Learning Networks: Source-free Unsupervised Domain Incremental Learning [0.0]
In a non-stationary environment, updating a DNN model requires parameter re-training or model fine-tuning.
We propose an unsupervised source-free method to update DNN classification models.
Unlike existing methods, our approach can update a DNN model incrementally for non-stationary source and target tasks without storing past training data.
arXiv Detail & Related papers (2023-01-28T02:16:13Z)
- Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for sparse recover [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z)
- MEMO: Test Time Robustness via Adaptation and Augmentation [131.28104376280197]
We study the problem of test time robustification, i.e., using the test input to improve model robustness.
Recent prior works have proposed methods for test-time adaptation; however, each introduces additional assumptions.
We propose a simple approach that can be used in any test setting where the model is probabilistic and adaptable.
arXiv Detail & Related papers (2021-10-18T17:55:11Z)
- Auditory Attention Decoding from EEG using Convolutional Recurrent Neural Network [20.37214453938965]
The auditory attention decoding (AAD) approach was proposed to determine the identity of the attended talker in a multi-talker scenario.
Recent models based on deep neural networks (DNN) have been proposed to solve this problem.
In this paper, we propose novel convolutional recurrent neural network (CRNN) based regression and classification models.
arXiv Detail & Related papers (2021-03-03T05:09:40Z)
- Improving Video Instance Segmentation by Light-weight Temporal Uncertainty Estimates [11.580916951856256]
We present a time-dynamic approach to model uncertainties of instance segmentation networks.
We apply this approach to the detection of false positives and the estimation of prediction quality.
The proposed method only requires a readily trained neural network and video sequence input.
arXiv Detail & Related papers (2020-12-14T13:39:05Z)
- Autoencoding Variational Autoencoder [56.05008520271406]
We study the implications of this behaviour for the learned representations, as well as the consequences of fixing it by introducing a notion of self-consistency.
We show that encoders trained with our self-consistency approach lead to representations that are robust (insensitive) to perturbations in the input introduced by adversarial attacks.
arXiv Detail & Related papers (2020-12-07T14:16:14Z)
- Semi-supervised Grasp Detection by Representation Learning in a Vector Quantized Latent Space [1.3048920509133808]
In this paper, a semi-supervised learning based grasp detection approach is presented.
To the best of our knowledge, this is the first time a Variational AutoEncoder (VAE) has been applied in the domain of robotic grasp detection.
The model performs significantly better than the existing approaches which do not make use of unlabelled images to improve the grasp.
arXiv Detail & Related papers (2020-01-23T12:47:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.