Testing the Channels of Convolutional Neural Networks
- URL: http://arxiv.org/abs/2303.03400v1
- Date: Mon, 6 Mar 2023 09:58:39 GMT
- Title: Testing the Channels of Convolutional Neural Networks
- Authors: Kang Choi, Donghyun Son, Younghoon Kim, Jiwon Seo
- Abstract summary: We propose techniques for testing the channels of convolutional neural networks (CNNs).
We design FtGAN, an extension of GAN that can generate test data while varying the intensity of a channel of a target CNN.
We also propose a channel selection algorithm to find representative channels for testing.
- Score: 8.927538538637783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks have complex structures, and thus it is hard to understand
their inner workings and ensure correctness. To understand and debug
convolutional neural networks (CNNs) we propose techniques for testing the
channels of CNNs. We design FtGAN, an extension of GAN that can generate
test data while varying the intensity (i.e., the sum of the neurons) of a
channel of a target CNN. We also propose a channel selection algorithm to find
representative channels for testing. To efficiently inspect the target CNN's
inference computations, we define unexpectedness score, which estimates how
similar the inference computation of the test data is to that of the training
data. We evaluated FtGAN with five public datasets and showed that our
techniques successfully identify defective channels in five different CNN
models.
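The two quantities named in the abstract can be made concrete. The sketch below is a minimal, pure-Python illustration: channel intensity is the sum of a channel's neuron activations, as the abstract defines it, while the z-score-style unexpectedness measure is an assumption for illustration only, since the paper's exact definition of the unexpectedness score is not given here.

```python
import statistics

def channel_intensity(feature_map, channel):
    """Intensity of one channel: the sum of its neuron activations.

    feature_map is a (channels, height, width) nested list; the shapes
    and names here are illustrative, not the paper's implementation.
    """
    return sum(sum(row) for row in feature_map[channel])

def unexpectedness(test_intensity, train_intensities):
    """Hypothetical z-score-style score: how far a test input's channel
    intensity lies from the training-data distribution. The paper's
    actual unexpectedness score may be defined differently."""
    mu = statistics.mean(train_intensities)
    sigma = statistics.pstdev(train_intensities) or 1.0
    return abs(test_intensity - mu) / sigma

# Toy 2-channel, 2x2 feature map.
fmap = [
    [[1, 4], [2, 3]],   # channel 0 -> intensity 10
    [[1, 0], [5, 5]],   # channel 1 -> intensity 11
]
print(channel_intensity(fmap, 0))  # 10
print(unexpectedness(16.0, [10.0, 12.0, 14.0]))
```

Under this reading, FtGAN would perturb a chosen channel's intensity in generated test inputs, and a large unexpectedness value would flag inference computations unlike anything seen during training.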
Related papers
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs containing dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z) - Training Convolutional Neural Networks with the Forward-Forward algorithm [1.74440662023704]
The Forward-Forward (FF) algorithm has so far only been used in fully connected networks.
We show how the FF paradigm can be extended to CNNs.
Our FF-trained CNN, featuring a novel spatially-extended labeling technique, achieves a classification accuracy of 99.16% on the MNIST hand-written digits dataset.
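As background for the FF-trained CNN above: the Forward-Forward algorithm replaces backpropagation with a layer-local objective, training each layer so that a "goodness" measure (in Hinton's original formulation, the sum of squared activations) is high for positive data and low for negative data. A minimal sketch of that goodness computation follows; the threshold value and function names are illustrative, not taken from the paper.

```python
THRESHOLD = 2.0  # illustrative layer threshold, not a value from the paper

def goodness(activations):
    """Goodness of a layer's output: the sum of squared activations,
    following Hinton's original Forward-Forward formulation."""
    return sum(a * a for a in activations)

def is_positive(activations, threshold=THRESHOLD):
    """A layer 'accepts' an input when its goodness exceeds the threshold."""
    return goodness(activations) > threshold

print(goodness([1.0, 2.0]))  # 5.0
```

Each layer is trained in isolation to push goodness above the threshold on real (positive) inputs and below it on corrupted (negative) inputs, which is what makes the scheme backprop-free.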
arXiv Detail & Related papers (2023-12-22T18:56:35Z) - Forensic Video Steganalysis in Spatial Domain by Noise Residual Convolutional Neural Network [0.0]
This research evaluates a convolutional neural network (CNN) based approach to forensic video steganalysis.
A video steganography dataset is created to train a CNN to conduct forensic steganalysis in the spatial domain.
We use a noise residual convolutional neural network to detect embedded secrets, since a steganographic embedding process always modifies pixel values in video frames.
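The noise-residual idea can be illustrated with a hand-written high-pass filter: subtracting each pixel's local mean suppresses smooth image content and leaves the high-frequency residual where steganographic edits show up. In the paper the residual filters are learned inside a CNN; the fixed 3x3 mean filter below is an illustrative stand-in, not the paper's method.

```python
def mean3x3(frame, i, j):
    """Mean of the 3x3 neighborhood around (i, j), clamping at the edges."""
    h, w = len(frame), len(frame[0])
    vals = [frame[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
            for di in (-1, 0, 1) for dj in (-1, 0, 1)]
    return sum(vals) / 9.0

def noise_residual(frame):
    """High-pass residual: each pixel minus its local mean. Embedding
    artifacts survive this filtering; smooth content is suppressed."""
    return [[frame[i][j] - mean3x3(frame, i, j)
             for j in range(len(frame[0]))] for i in range(len(frame))]

# A perfectly flat frame has a zero residual; an edited pixel does not.
flat = [[5.0] * 4 for _ in range(4)]
print(noise_residual(flat)[0][0])  # 0.0
```

A steganalysis CNN would then classify these residual maps rather than the raw frames, so the classifier sees the embedding noise instead of the scene content.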
arXiv Detail & Related papers (2023-05-29T13:17:20Z) - Attention-based Feature Compression for CNN Inference Offloading in Edge Computing [93.67044879636093]
This paper studies the computational offloading of CNN inference in device-edge co-inference systems.
We propose a novel autoencoder-based CNN architecture (AECNN) for effective feature extraction at the end device.
Experiments show that AECNN can compress the intermediate data by more than 256x with only about 4% accuracy loss.
arXiv Detail & Related papers (2022-11-24T18:10:01Z) - Continuous approximation by convolutional neural networks with a sigmoidal function [0.0]
We present a class of convolutional neural networks (CNNs) called non-overlapping CNNs.
We prove that such networks with a sigmoidal activation function can approximate any continuous function defined on a compact input set to any desired degree of accuracy.
arXiv Detail & Related papers (2022-09-27T12:31:36Z) - What Can Be Learnt With Wide Convolutional Neural Networks? [69.55323565255631]
We study infinitely-wide deep CNNs in the kernel regime.
We prove that deep CNNs adapt to the spatial scale of the target function.
We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN.
arXiv Detail & Related papers (2022-08-01T17:19:32Z) - Lost Vibration Test Data Recovery Using Convolutional Neural Network: A Case Study [0.0]
This paper proposes a CNN algorithm for recovering lost vibration test data, using the Alamosa Canyon Bridge as a real-world case study.
Three different CNN models were considered to predict the readings of one and two malfunctioning sensors.
The accuracy of the model was increased by adding a convolutional layer.
arXiv Detail & Related papers (2022-04-11T23:24:03Z) - CONet: Channel Optimization for Convolutional Neural Networks [33.58529066005248]
We study channel size optimization in convolutional neural networks (CNNs).
We introduce an efficient dynamic scaling algorithm -- CONet -- that automatically optimizes channel sizes across network layers for a given CNN.
We conduct experiments on CIFAR10/100 and ImageNet datasets and show that CONet can find efficient and accurate architectures searched in ResNet, DARTS, and DARTS+ spaces.
arXiv Detail & Related papers (2021-08-15T21:48:25Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
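The curriculum-by-smoothing mechanism can be sketched in a few lines: blur the feature map heavily at the start of training, then anneal the smoothing away so the network gradually sees full-detail features. The 3x3 box filter and the linear annealing schedule below are illustrative assumptions; the paper uses anti-aliasing/low-pass filters and its own schedule.

```python
def box_blur(fm):
    """3x3 mean filter with edge clamping on a 2D feature map."""
    h, w = len(fm), len(fm[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            vals = [fm[min(max(i + di, 0), h - 1)][min(max(j + dj, 0), w - 1)]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = sum(vals) / 9.0
    return out

def curriculum_smooth(fm, epoch, total_epochs):
    """Blend the blurred map with the raw map; smoothing fades to zero
    as training progresses (a hypothetical linear schedule)."""
    alpha = max(0.0, 1.0 - epoch / total_epochs)  # 1 -> full blur, 0 -> none
    blurred = box_blur(fm)
    return [[alpha * b + (1 - alpha) * x for b, x in zip(brow, row)]
            for brow, row in zip(blurred, fm)]
```

Early epochs thus train on low-frequency feature content only; by the final epoch the filter is fully annealed and the map passes through unchanged.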
arXiv Detail & Related papers (2020-03-03T07:27:44Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.