Small-footprint slimmable networks for keyword spotting
- URL: http://arxiv.org/abs/2304.12183v1
- Date: Fri, 21 Apr 2023 12:59:37 GMT
- Title: Small-footprint slimmable networks for keyword spotting
- Authors: Zuhaib Akhtar, Mohammad Omar Khursheed, Dongsu Du, Yuzong Liu
- Abstract summary: We show that slimmable neural networks allow us to create super-nets from Convolutional Neural Networks and Transformers.
We demonstrate the usefulness of these models on in-house Alexa data and Google Speech Commands, and focus our efforts on models for the on-device use case.
- Score: 3.0825815617887415
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we present Slimmable Neural Networks applied to the problem of
small-footprint keyword spotting. We show that slimmable neural networks allow
us to create super-nets from Convolutional Neural Networks and Transformers,
from which sub-networks of different sizes can be extracted. We demonstrate the
usefulness of these models on in-house Alexa data and Google Speech Commands,
and focus our efforts on models for the on-device use case, limiting ourselves
to less than 250k parameters. We show that slimmable models can match (and in
some cases, outperform) models trained from scratch. Slimmable neural networks
are therefore a class of models particularly useful when the same functionality
is to be replicated at different memory and compute budgets, with different
accuracy requirements.
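The core mechanism is easiest to see in code. The sketch below is a minimal, hypothetical illustration of switchable widths written in PyTorch, not the authors' CNN or Transformer super-nets: one full-width set of weights is trained, and smaller sub-networks are obtained by slicing it. All layer sizes, the keyword count, and the width multipliers are illustrative assumptions.

```python
# Minimal sketch of the slimmable-width idea (illustrative assumptions only,
# not the architecture from the paper).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlimmableLinear(nn.Module):
    """Linear layer whose active width can be switched at run time."""

    def __init__(self, max_in, max_out):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(max_out, max_in) * 0.01)
        self.bias = nn.Parameter(torch.zeros(max_out))

    def forward(self, x, in_dim, out_dim):
        # Use only the first `in_dim` input and `out_dim` output features.
        return F.linear(x, self.weight[:out_dim, :in_dim], self.bias[:out_dim])


class SlimmableKWSNet(nn.Module):
    """Tiny keyword-spotting head; hidden width shrinks with the multiplier."""

    def __init__(self, n_feats=40, hidden=128, n_keywords=12):
        super().__init__()
        self.fc1 = SlimmableLinear(n_feats, hidden)
        self.fc2 = SlimmableLinear(hidden, n_keywords)
        self.hidden = hidden

    def forward(self, x, width_mult=1.0):
        h = int(self.hidden * width_mult)
        x = torch.relu(self.fc1(x, x.shape[-1], h))
        return self.fc2(x, h, self.fc2.bias.shape[0])


net = SlimmableKWSNet()
feats = torch.randn(8, 40)            # a batch of 40-dim acoustic features
logits_full = net(feats, width_mult=1.0)
logits_small = net(feats, width_mult=0.25)
print(logits_full.shape, logits_small.shape)  # both: torch.Size([8, 12])
```

Slimmable training typically runs each batch at several widths (e.g. 0.25, 0.5, 1.0) and sums the losses, so every extracted sub-network remains usable at its own memory and compute budget.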
Related papers
- Residual Random Neural Networks [0.0]
A single-layer feedforward neural network with random weights is a recurring motif in the neural networks literature.
We show that one can obtain good classification results even if the number of hidden neurons has the same order of magnitude as the dimensionality of the data samples (a minimal sketch of this setup follows the entry).
arXiv Detail & Related papers (2024-10-25T22:00:11Z)
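For context, the random-weights construction this entry refers to can be sketched as follows. This is an assumption about the setup (a plain extreme-learning-machine-style classifier on toy Gaussian data with a ridge-regression readout), not the paper's residual variant.

```python
# Sketch of a random-hidden-layer classifier: the hidden weights are random
# and fixed, only the linear readout is fit in closed form (illustrative data).
import numpy as np

rng = np.random.default_rng(0)

# Toy data: d-dimensional inputs, two Gaussian classes.
d, n = 50, 400
X = np.vstack([rng.normal(-0.5, 1.0, (n // 2, d)),
               rng.normal(+0.5, 1.0, (n // 2, d))])
y = np.hstack([np.zeros(n // 2), np.ones(n // 2)])
Y = np.eye(2)[y.astype(int)]                  # one-hot targets

# Hidden layer: random, untrained, with about as many units as input dims.
m = d
W = rng.normal(size=(d, m)) / np.sqrt(d)
H = np.tanh(X @ W)                            # random features

# Readout: ridge regression solved in closed form.
lam = 1e-2
beta = np.linalg.solve(H.T @ H + lam * np.eye(m), H.T @ Y)

pred = (H @ beta).argmax(axis=1)
print("train accuracy:", (pred == y).mean())
```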
- Towards Scalable and Versatile Weight Space Learning [51.78426981947659]
This paper introduces the SANE approach to weight-space learning.
Our method extends the idea of hyper-representations towards sequential processing of subsets of neural network weights.
arXiv Detail & Related papers (2024-06-14T13:12:07Z)
- Neural Network Parameter Diffusion [50.85251415173792]
Diffusion models have achieved remarkable success in image and video generation.
In this work, we demonstrate that diffusion models can also generate high-performing neural network parameters.
arXiv Detail & Related papers (2024-02-20T16:59:03Z)
- LowDINO -- A Low Parameter Self Supervised Learning Model [0.0]
This research explores the design of a neural network architecture that allows small networks to adopt the properties of huge networks.
Previous studies have shown that using convolutional neural networks (ConvNets) can provide inherent inductive bias.
To reduce the number of parameters, attention mechanisms are incorporated through MobileViT blocks.
arXiv Detail & Related papers (2023-05-28T18:34:59Z)
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Recurrent neural networks that generalize from examples and optimize by dreaming [0.0]
We introduce a generalized Hopfield network where pairwise couplings between neurons are built according to Hebb's prescription for on-line learning.
We let the network experience solely a dataset made of a sample of noisy examples for each pattern.
Remarkably, the sleeping mechanisms always significantly reduce the dataset size required to correctly generalize (a sketch of the underlying Hebbian storage step follows this entry).
arXiv Detail & Related papers (2022-04-17T08:40:54Z)
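The Hebbian storage step described in this entry can be sketched as below. The dreaming/unlearning mechanism that is the paper's main contribution is omitted, and the network size, number of examples, and noise level are illustrative assumptions.

```python
# Hebbian couplings built only from noisy examples of each archetype,
# followed by zero-temperature recall (illustrative sizes; no "dreaming").
import numpy as np

rng = np.random.default_rng(1)
N, P, M, flip = 200, 3, 40, 0.1       # neurons, patterns, examples/pattern, flip rate

archetypes = rng.choice([-1, 1], size=(P, N))               # patterns to recover
noise = rng.random((P, M, N)) < flip
examples = archetypes[:, None, :] * np.where(noise, -1, 1)  # the network sees only these

# Hebb's prescription applied to the noisy examples.
J = np.einsum('pmi,pmj->ij', examples, examples) / (N * M)
np.fill_diagonal(J, 0.0)

# Recall: start from one noisy example and iterate sign updates.
s = examples[0, 0].copy()
for _ in range(20):
    s = np.sign(J @ s)
    s[s == 0] = 1

print("overlap with archetype:", (s @ archetypes[0]) / N)
```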
- Network Augmentation for Tiny Deep Learning [73.57192520534585]
We introduce Network Augmentation (NetAug), a new training method for improving the performance of tiny neural networks.
We demonstrate the effectiveness of NetAug on image classification and object detection.
arXiv Detail & Related papers (2021-10-17T18:48:41Z)
- Reservoir Memory Machines as Neural Computers [70.5993855765376]
Differentiable neural computers extend artificial neural networks with an explicit memory without interference.
We achieve some of the computational capabilities of differentiable neural computers with a model that can be trained very efficiently.
arXiv Detail & Related papers (2020-09-14T12:01:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.