Neural Group Testing to Accelerate Deep Learning
- URL: http://arxiv.org/abs/2011.10704v2
- Date: Sun, 9 May 2021 23:03:47 GMT
- Title: Neural Group Testing to Accelerate Deep Learning
- Authors: Weixin Liang, James Zou
- Abstract summary: Existing work focuses primarily on accelerating each forward pass of a neural network.
We propose neural group testing, which accelerates by testing a group of samples in one forward pass.
We found that neural group testing can group up to 16 images in one forward pass and reduce the overall cost by over 73%.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep learning have led to the use of large, deep neural
networks with tens of millions of parameters. The sheer size of these networks
imposes a challenging computational burden during inference. Existing work
focuses primarily on accelerating each forward pass of a neural network.
Inspired by the group testing strategy for efficient disease testing, we
propose neural group testing, which accelerates by testing a group of samples
in one forward pass. Groups of samples that test negative are ruled out. If a
group tests positive, samples in that group are then retested adaptively. A key
challenge of neural group testing is to modify a deep neural network so that it
can test multiple samples in one forward pass. We propose three designs to
achieve this without introducing any new parameters and evaluate their
performance. We applied neural group testing to an image moderation task to
detect rare but inappropriate images. We found that neural group testing can
group up to 16 images in one forward pass and reduce the overall computation
cost by over 73% while improving detection performance.
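The adaptive scheme described in the abstract can be sketched as follows. Here `group_forward` is a hypothetical stand-in for the modified network that scores a merged group in one pass (simulated with ground-truth labels), so the pass counts illustrate the bookkeeping of the two-round scheme, not real GPU cost:

```python
# Sketch of adaptive (two-round) neural group testing, assuming a
# hypothetical group_forward() that tests a whole group in one pass.

def group_forward(samples):
    """Simulated group test: positive iff any sample in the group is positive."""
    return any(label == 1 for label in samples)

def neural_group_test(samples, group_size=16):
    """Return (indices of positives, number of forward passes used)."""
    positives, passes = [], 0
    for start in range(0, len(samples), group_size):
        group = samples[start:start + group_size]
        passes += 1                      # one forward pass for the whole group
        if group_forward(group):         # group positive: retest individually
            for idx, label in enumerate(group, start=start):
                passes += 1
                if label == 1:
                    positives.append(idx)
    return positives, passes

# 256 samples with one rare positive: far fewer passes than 256 individual tests.
labels = [0] * 256
labels[100] = 1
found, cost = neural_group_test(labels)
print(found, cost)  # [100] 32  (16 group passes + 16 individual retests)
```

In this toy count, 32 passes replace 256 individual tests; the paper's reported savings (over 73%) are smaller than this raw ratio because a group forward pass on merged inputs costs more than a single-sample pass.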
Related papers
- Training Guarantees of Neural Network Classification Two-Sample Tests by Kernel Analysis
We construct and analyze a neural network two-sample test to determine whether two datasets came from the same distribution.
We derive the theoretical minimum training time needed to ensure the NTK two-sample test detects a given level of deviation between the datasets.
We show that the statistical power associated with the neural network two-sample test goes to 1 as the neural network training samples and test evaluation samples go to infinity.
arXiv Detail & Related papers (2024-07-05T18:41:16Z)
- Benign Overfitting for Two-layer ReLU Convolutional Neural Networks
We establish algorithm-dependent risk bounds for learning two-layer ReLU convolutional neural networks with label-flipping noise.
We show that, under mild conditions, the neural network trained by gradient descent can achieve near-zero training loss and Bayes optimal test risk.
arXiv Detail & Related papers (2023-03-07T18:59:38Z)
- Adversarial Sampling for Fairness Testing in Deep Neural Network
We use adversarial sampling to test for fairness in the predictions of a deep neural network model across different classes of images in a given dataset.
We trained our neural network model on the original images only, without training it on the perturbed or attacked images.
When we fed the adversarial samples to our model, it was able to predict the original category/class of the image each adversarial sample belongs to.
arXiv Detail & Related papers (2023-03-06T03:55:37Z)
- TTAPS: Test-Time Adaption by Aligning Prototypes using Self-Supervision
We propose a novel modification of the self-supervised training algorithm SwAV that adds the ability to adapt to single test samples.
We show the success of our method on the common benchmark dataset CIFAR10-C.
arXiv Detail & Related papers (2022-05-18T05:43:06Z)
- Revisiting Gaussian Neurons for Online Clustering with Unknown Number of Clusters
A novel local learning rule is presented that performs online clustering with an upper limit on the number of clusters to be found.
The experimental results demonstrate stability in the learned parameters across a large number of training samples.
arXiv Detail & Related papers (2022-05-02T14:01:40Z)
- The Compact Support Neural Network
We present a neuron generalization that has the standard dot-product-based neuron and the RBF neuron as two extreme cases of a shape parameter.
We show how to avoid difficulties in training a neural network with such neurons, by starting with a trained standard neural network and gradually increasing the shape parameter to the desired value.
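The shape-parameter idea above can be illustrated as an interpolation between a dot-product response and an RBF response. This is only a minimal sketch of the general concept; `shaped_neuron` and the convex-combination form are illustrative assumptions, not the paper's exact parameterization:

```python
import math

# Illustrative neuron that interpolates between a dot-product response
# (alpha = 0) and an RBF response (alpha = 1) via a shape parameter alpha.
# Sketch of the general idea only, not the paper's exact formula.
def shaped_neuron(x, w, alpha):
    dot = sum(xi * wi for xi, wi in zip(x, w))           # standard neuron
    sq_dist = sum((xi - wi) ** 2 for xi, wi in zip(x, w))
    rbf = math.exp(-sq_dist)                             # RBF neuron
    return (1.0 - alpha) * dot + alpha * rbf

x, w = [1.0, 2.0], [1.0, 2.0]
print(shaped_neuron(x, w, 0.0))  # pure dot product: 5.0
print(shaped_neuron(x, w, 1.0))  # pure RBF at its center: 1.0
```

The training recipe in the summary maps onto this picture: start from a trained standard network (alpha = 0) and anneal alpha toward the desired value rather than training the hybrid neuron from scratch.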
arXiv Detail & Related papers (2021-04-01T06:08:09Z)
- And/or trade-off in artificial neurons: impact on adversarial robustness
The presence of a sufficient number of OR-like neurons in a network can lead to classification brittleness and increased vulnerability to adversarial attacks.
We define AND-like neurons and propose measures to increase their proportion in the network.
Experimental results on the MNIST dataset suggest that our approach holds promise as a direction for further exploration.
arXiv Detail & Related papers (2021-02-15T08:19:05Z)
- Group Testing with a Graph Infection Spread Model
Infection spreads via connections between individuals and this results in a probabilistic cluster formation structure as well as a non-i.i.d. infection status for individuals.
We propose a class of two-step sampled group testing algorithms where we exploit the known probabilistic infection spread model.
Our results imply that, by exploiting information on the connections of individuals, group testing can be used to reduce the number of required tests significantly even when the infection rate is high.
arXiv Detail & Related papers (2021-01-14T18:51:32Z)
- Testing for Normality with Neural Networks
We construct a feedforward neural network that can successfully detect normal distributions by inspecting small samples from them.
The network's accuracy was higher than 96% on a set of larger samples with 250-1000 elements.
arXiv Detail & Related papers (2020-09-29T07:35:40Z)
- Noisy Adaptive Group Testing using Bayesian Sequential Experimental Design
When the infection prevalence of a disease is low, Dorfman showed 80 years ago that testing groups of people can prove more efficient than testing people individually.
Our goal in this paper is to propose new group testing algorithms that can operate in a noisy setting.
arXiv Detail & Related papers (2020-04-26T23:41:33Z)
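Dorfman's classical result cited in the last entry can be made concrete: with prevalence p and group size k, two-stage pooling costs one pooled test per group plus k individual retests whenever the pool is positive, giving an expected 1/k + 1 - (1-p)^k tests per person, which is far below 1 when p is small. A minimal sketch (the prevalence value is illustrative):

```python
# Expected tests per person under Dorfman's two-stage pooling scheme:
# one pooled test per group of k, plus k individual retests whenever
# the pool is positive (which happens with probability 1 - (1-p)^k).
def dorfman_cost(p, k):
    return 1.0 / k + 1.0 - (1.0 - p) ** k

p = 0.01  # illustrative 1% prevalence
best_k = min(range(2, 51), key=lambda k: dorfman_cost(p, k))
print(best_k, round(dorfman_cost(p, best_k), 3))  # 11 0.196
```

At 1% prevalence the optimal group size is 11 and pooling needs only about 0.2 tests per person, a roughly fivefold saving over individual testing; the neural group testing paper above transfers this arithmetic from disease screening to forward passes.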
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.