Tiny Transformers for Environmental Sound Classification at the Edge
- URL: http://arxiv.org/abs/2103.12157v1
- Date: Mon, 22 Mar 2021 20:12:15 GMT
- Title: Tiny Transformers for Environmental Sound Classification at the Edge
- Authors: David Elliott, Carlos E. Otero, Steven Wyatt, Evan Martino
- Abstract summary: This work presents training techniques for audio models in the field of environmental sound classification at the edge.
Specifically, we design and train Transformers to classify office sounds in audio clips.
Results show that a BERT-based Transformer, trained on Mel spectrograms, can outperform a CNN while using 99.85% fewer parameters.
- Score: 0.6193838300896449
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the growth of the Internet of Things and the rise of Big Data, data
processing and machine learning applications are being moved to cheap and low
size, weight, and power (SWaP) devices at the edge, often in the form of mobile
phones, embedded systems, or microcontrollers. The field of Cyber-Physical
Measurements and Signature Intelligence (MASINT) makes use of these devices to
analyze and exploit data in ways not otherwise possible, which results in
increased data quality, increased security, and decreased bandwidth. However,
methods to train and deploy models at the edge are limited, and models with
sufficient accuracy are often too large for the edge device. Therefore, there
is a clear need for techniques to create efficient AI/ML at the edge. This work
presents training techniques for audio models in the field of environmental
sound classification at the edge. Specifically, we design and train
Transformers to classify office sounds in audio clips. Results show that a
BERT-based Transformer, trained on Mel spectrograms, can outperform a CNN while using
99.85% fewer parameters. To achieve this result, we first tested several audio
feature extraction techniques designed for Transformers, using ESC-50 for
evaluation, along with various augmentations. Our final model outperforms the
state-of-the-art MFCC-based CNN on the office sounds dataset, using just over
6,000 parameters -- small enough to run on a microcontroller.
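The front end the abstract describes, Mel spectrograms fed to a small Transformer, can be sketched as follows. The frame size, hop length, and 40-filter bank below are illustrative assumptions, not the authors' exact configuration:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular filters spaced evenly on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        for k in range(l, c):
            fb[i, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i, k] = (r - k) / max(r - c, 1)
    return fb

def log_mel_spectrogram(x, sr=16000, n_fft=512, hop=256, n_mels=40):
    # Frame the waveform, take the power spectrum, apply mel filters.
    window = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2
    # Each row is one time step: a sequence of n_mels-dim "tokens"
    # that a small Transformer encoder can attend over.
    return np.log(power @ mel_filterbank(n_mels, n_fft, sr).T + 1e-10)

# One second of noise stands in for an office-sound clip.
feats = log_mel_spectrogram(np.random.default_rng(0).normal(size=16000))
```

Each spectrogram row becomes one input token, so a one-second clip yields a short sequence that even a ~6,000-parameter model can process.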
Related papers
- A lightweight residual network for unsupervised deformable image registration [2.7309692684728617]
We propose a residual U-Net with embedded parallel dilated-convolutional blocks to enhance the receptive field.
The proposed method is evaluated on inter-patient and atlas-based datasets.
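The receptive-field gain that dilated-convolutional blocks provide can be checked with a short calculation; the kernel sizes and dilation rates below are illustrative, not the paper's exact design:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field of a stack of stride-1 dilated convolutions."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d  # each layer adds (k - 1) * d samples of context
    return rf

# Three 3x3 layers with dilations 1, 2, 4 see more than twice the context
# of three plain 3x3 layers, at the same parameter cost.
rf_dilated = receptive_field([3, 3, 3], [1, 2, 4])
rf_plain = receptive_field([3, 3, 3], [1, 1, 1])
```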
arXiv Detail & Related papers (2024-06-14T07:20:49Z)
- Quantized Transformer Language Model Implementations on Edge Devices [1.2979415757860164]
Large-scale transformer-based models like Bidirectional Encoder Representations from Transformers (BERT) are widely used for Natural Language Processing (NLP) applications.
These models are first pre-trained on a large corpus, with millions of parameters, and then fine-tuned for a downstream NLP task.
One of the major limitations of these large-scale models is that they cannot be deployed on resource-constrained devices due to their large model size and increased inference latency.
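One standard way to shrink such models for resource-constrained devices is post-training weight quantization. A minimal sketch of symmetric per-tensor int8 quantization follows; this is a generic illustration, not the specific scheme evaluated in the paper:

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization of a weight matrix."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(768, 768)).astype(np.float32)  # one BERT-sized layer
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32; rounding error is bounded
# by half the quantization step.
err = np.max(np.abs(w - w_hat))
```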
arXiv Detail & Related papers (2023-10-06T01:59:19Z)
- Deformable Mixer Transformer with Gating for Multi-Task Learning of Dense Prediction [126.34551436845133]
CNNs and Transformers have their own advantages, and both have been widely used for dense prediction in multi-task learning (MTL).
We present a novel MTL model that combines the merits of deformable CNNs and query-based Transformers with shared gating for multi-task learning of dense prediction.
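A shared gate that blends CNN and Transformer features can be sketched as a per-channel convex combination; the sigmoid gate and the shapes below are illustrative assumptions, not the paper's exact module:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(f_cnn, f_tr, w_g, b_g):
    """Blend CNN and Transformer features with a learned per-channel gate.
    The gate is computed from both streams, so each channel of the output
    is a convex combination of the two inputs."""
    g = sigmoid(np.concatenate([f_cnn, f_tr], axis=-1) @ w_g + b_g)
    return g * f_cnn + (1.0 - g) * f_tr

rng = np.random.default_rng(0)
c = 32                                   # feature channels (assumed)
f_cnn = rng.normal(size=(10, c))         # CNN branch features
f_tr = rng.normal(size=(10, c))          # Transformer branch features
fused = gated_fusion(f_cnn, f_tr,
                     rng.normal(size=(2 * c, c)) * 0.1, np.zeros(c))
```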
arXiv Detail & Related papers (2023-08-10T17:37:49Z)
- CHAPTER: Exploiting Convolutional Neural Network Adapters for Self-supervised Speech Models [62.60723685118747]
Self-supervised learning (SSL) is a powerful technique for learning representations from unlabeled data.
We propose an efficient tuning method specifically designed for SSL speech models, applying CNN adapters at the feature extractor.
We empirically find that adding CNN adapters to the feature extractor helps adaptation on emotion and speaker tasks.
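The adapter idea, small trainable bottlenecks inserted into a frozen backbone, can be sketched as follows. The dense bottleneck below stands in for the paper's CNN adapters and is an illustrative simplification:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def adapter(h, w_down, w_up):
    """Bottleneck adapter: project down, nonlinearity, project up, add residual.
    Only w_down / w_up are trained; the frozen backbone output h passes through."""
    return h + relu(h @ w_down) @ w_up

rng = np.random.default_rng(0)
dim, bottleneck = 256, 16
h = rng.normal(size=(50, dim))           # frozen feature-extractor output
w_down = rng.normal(size=(dim, bottleneck)) * 0.01
w_up = np.zeros((bottleneck, dim))       # zero-init: adapter starts as identity
out = adapter(h, w_down, w_up)

# Trainable parameters: 2 * dim * bottleneck, far fewer than fine-tuning
# a full dim x dim layer.
n_trainable = w_down.size + w_up.size
```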
arXiv Detail & Related papers (2022-12-01T08:50:12Z)
- Learning General Audio Representations with Large-Scale Training of Patchout Audio Transformers [6.002503434201551]
We study the use of audio transformers trained on large-scale datasets to learn general-purpose representations.
Our results show that representations extracted by audio transformers outperform CNN representations.
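Patchout itself amounts to randomly dropping spectrogram patch tokens during training; a minimal sketch, in which the drop probability and shapes are assumptions:

```python
import numpy as np

def patchout(tokens, drop_prob, rng):
    """Randomly drop a fraction of spectrogram patch tokens (training-time only).
    Shorter sequences make self-attention cheaper and act as regularization."""
    keep = rng.random(tokens.shape[0]) >= drop_prob
    if not keep.any():                      # always keep at least one token
        keep[rng.integers(tokens.shape[0])] = True
    return tokens[keep]

rng = np.random.default_rng(0)
tokens = rng.normal(size=(100, 64))         # 100 patches, 64-dim embeddings
kept = patchout(tokens, drop_prob=0.5, rng=rng)
```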
arXiv Detail & Related papers (2022-11-25T08:39:12Z)
- Efficient Large-scale Audio Tagging via Transformer-to-CNN Knowledge Distillation [6.617487928813374]
We propose a training procedure for efficient CNNs based on offline Knowledge Distillation (KD) from high-performing yet complex transformers.
We provide models of different complexity levels, scaling from low-complexity models up to a new state-of-the-art performance of 0.483 mAP on AudioSet.
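The objective behind such transformer-to-CNN distillation is typically a temperature-softened KL divergence between teacher and student outputs; a generic sketch, not the paper's exact loss:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.
    The T^2 factor keeps gradient magnitudes comparable across temperatures."""
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(np.mean(kl) * T * T)

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 527))   # AudioSet has 527 classes
student = rng.normal(size=(8, 527))
loss = distillation_loss(student, teacher)
zero = distillation_loss(teacher, teacher)  # perfect mimicry -> zero loss
```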
arXiv Detail & Related papers (2022-11-09T09:58:22Z)
- High Fidelity Neural Audio Compression [92.4812002532009]
We introduce a state-of-the-art real-time, high-fidelity audio codec leveraging neural networks.
It consists of a streaming encoder-decoder architecture with a quantized latent space, trained in an end-to-end fashion.
We simplify and speed-up the training by using a single multiscale spectrogram adversary.
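Codecs of this kind commonly quantize the latent space with residual vector quantization, where each stage encodes what earlier stages missed. A toy sketch follows; the codebook sizes are arbitrary, and a zero code is included in each codebook so a stage can pass its residual through unchanged:

```python
import numpy as np

def residual_vq(x, codebooks):
    """Residual VQ: each stage quantizes the residual left by previous stages,
    so more stages mean more bits and a finer reconstruction."""
    residual = x.copy()
    recon = np.zeros_like(x)
    for cb in codebooks:
        d = ((residual[:, None, :] - cb[None, :, :]) ** 2).sum(-1)
        code = cb[d.argmin(axis=1)]       # nearest code per latent vector
        recon += code
        residual -= code
    return recon

rng = np.random.default_rng(0)
latents = rng.normal(size=(64, 8))        # 64 latent vectors, 8-dim
books = [np.vstack([np.zeros((1, 8)), rng.normal(size=(15, 8))])
         for _ in range(4)]               # 4 stages, 16 codes each
err1 = np.mean((latents - residual_vq(latents, books[:1])) ** 2)
err4 = np.mean((latents - residual_vq(latents, books)) ** 2)
```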
arXiv Detail & Related papers (2022-10-24T17:52:02Z)
- Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures with a limited number of samples that generalizes well to a wider audience.
We rely on a set of more elementary methods, such as random bounds on a signal, and aim to show the power these methods carry in an online setting.
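The augmentation named in the title, Gaussian noise with a randomly drawn variance, can be sketched directly; the window counts and sigma range below are illustrative assumptions:

```python
import numpy as np

def augment_with_noise(signals, labels, copies, sigma_range, rng):
    """Grow a small dataset by adding Gaussian noise whose standard deviation
    is drawn at random for each copy (a simple random-variance augmentation)."""
    out_x, out_y = [signals], [labels]
    for _ in range(copies):
        sigma = rng.uniform(*sigma_range)
        out_x.append(signals + rng.normal(0.0, sigma, size=signals.shape))
        out_y.append(labels)
    return np.concatenate(out_x), np.concatenate(out_y)

rng = np.random.default_rng(0)
x = rng.normal(size=(30, 200))        # 30 EMG windows of 200 samples
y = rng.integers(0, 6, size=30)       # six gesture classes
x_aug, y_aug = augment_with_noise(x, y, copies=4,
                                  sigma_range=(0.01, 0.1), rng=rng)
```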
arXiv Detail & Related papers (2022-06-29T23:22:18Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete actions (SAC-d), which generates the exit point and compressing bits via soft policy iterations.
With its latency- and accuracy-aware reward design, such a scheme adapts well to complex environments like dynamic wireless channels and arbitrary processing loads, and is capable of supporting 5G URLLC.
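A latency- and accuracy-aware reward of the kind mentioned above might take the following shape; this is a generic illustration with assumed weights, not the paper's reward function:

```python
def co_inference_reward(accuracy, latency_ms, deadline_ms,
                        alpha=1.0, beta=1.0):
    """Trade accuracy against latency: reward accuracy, and penalize only
    the portion of latency that exceeds the deadline, normalized by it."""
    latency_penalty = max(latency_ms - deadline_ms, 0.0) / deadline_ms
    return alpha * accuracy - beta * latency_penalty

# Meeting the deadline with slightly lower accuracy beats a more accurate
# but badly late configuration.
good = co_inference_reward(accuracy=0.92, latency_ms=40.0, deadline_ms=50.0)
late = co_inference_reward(accuracy=0.95, latency_ms=120.0, deadline_ms=50.0)
```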
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- Escaping the Big Data Paradigm with Compact Transformers [7.697698018200631]
We show for the first time that with the right size and tokenization, transformers can perform head-to-head with state-of-the-art CNNs on small datasets.
Our method is flexible in terms of model size and can achieve reasonable results with as few as 0.28M parameters.
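A rough parameter count shows how a compact configuration stays in this range. The per-layer formula is approximate (it ignores positional embeddings and tokenizer details), and the configuration is an assumption, not the paper's:

```python
def transformer_params(d_model, n_layers, d_ff, embed_params, n_classes):
    """Rough parameter count for a small encoder-only Transformer.
    Per layer: Q/K/V + output projections with biases, a two-layer MLP,
    and two LayerNorms; embedding and classifier head counted once."""
    attn = 4 * d_model * d_model + 4 * d_model
    mlp = 2 * d_model * d_ff + d_ff + d_model
    norms = 2 * 2 * d_model
    head = d_model * n_classes + n_classes
    return embed_params + n_layers * (attn + mlp + norms) + head

# A narrow 2-layer encoder with a small patch-embedding budget.
total = transformer_params(d_model=128, n_layers=2, d_ff=256,
                           embed_params=128 * 48, n_classes=10)
```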
arXiv Detail & Related papers (2021-03-24T05:02:18Z)
- One to Many: Adaptive Instrument Segmentation via Meta Learning and Dynamic Online Adaptation in Robotic Surgical Video [71.43912903508765]
MDAL is a dynamic online adaptive learning scheme for instrument segmentation in robot-assisted surgery.
It learns the general knowledge of instruments and the fast adaptation ability through the video-specific meta-learning paradigm.
It outperforms other state-of-the-art methods on two datasets.
arXiv Detail & Related papers (2021-03-24T05:02:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.