Multiplierless In-filter Computing for tinyML Platforms
- URL: http://arxiv.org/abs/2304.11816v1
- Date: Mon, 24 Apr 2023 04:33:44 GMT
- Title: Multiplierless In-filter Computing for tinyML Platforms
- Authors: Abhishek Ramdas Nair, Pallab Kumar Nath, Shantanu Chakrabartty, Chetan
Singh Thakur
- Abstract summary: We present a novel multiplierless framework for in-filter acoustic classification.
We use MP-based approximation for training, including backpropagation that mitigates approximation errors.
The framework is more efficient than traditional classification frameworks, requiring fewer than 1K FPGA slices.
- Score: 6.878219199575747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wildlife conservation through continuous monitoring of environmental factors, as well as
  biomedical classification, generates vast amounts of sensor data, which is challenging to transmit
  given the limited bandwidth available for remote monitoring. It therefore becomes critical to classify
  the data where it is generated and to transmit only the classification results. We present a novel
  multiplierless framework for in-filter acoustic classification that uses a Margin Propagation (MP)
  approximation and is suited to low-power edge devices deployable in remote areas with limited
  connectivity. The classification framework is built entirely on a template-based kernel machine, which
  includes feature extraction and inference, and its hardware implementation uses only basic primitives
  such as addition/subtraction, shift, and comparator operations. Unlike the full-precision training
  methods used for traditional classifiers, we apply the MP-based approximation during training as well,
  including a backpropagation procedure that mitigates the approximation errors. The proposed framework
  is general enough for acoustic classification; however, we demonstrate its hardware friendliness by
  implementing a parallel Finite Impulse Response (FIR) filter bank within a kernel machine classifier
  optimized for a Field Programmable Gate Array (FPGA). The FIR filters act as both the feature extractor
  and the non-linear kernel of the kernel machine, implemented using the MP approximation together with a
  downsampling method that reduces the order of the filters. The FPGA implementation on a Spartan-7
  device shows that the MP-approximated in-filter kernel machine is more efficient than traditional
  classification frameworks, requiring fewer than 1K slices.
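As a rough illustration of the two ingredients the abstract describes, the following Python sketch shows (1) the margin propagation (MP) approximation in its standard reverse water-filling form, which replaces multiply-accumulate and log-sum-exp style operations with additions, subtractions, and comparisons, and (2) a parallel FIR filter bank with downsampling used as a simple feature extractor. This is a minimal sketch under those assumptions, not the authors' FPGA design; all function and parameter names are illustrative.

```python
import numpy as np

def mp_approx(x, gamma, iters=16):
    """Reverse water-filling: find z with sum_i max(x_i - z, 0) = gamma.

    Each iteration needs only comparisons (to pick the active set) and
    additions/subtractions; the division by the active-set size is the
    only step a fixed-point multiplierless design would approximate with
    shifts. (Illustrative sketch, not the paper's hardware algorithm.)
    """
    x = np.asarray(x, dtype=float)
    z = x.max() - gamma                      # guess assuming one active term
    for _ in range(iters):
        active = x[x > z]                    # comparator: terms above z
        if active.size == 0:
            break
        z_new = (active.sum() - gamma) / active.size
        if np.isclose(z_new, z):
            break
        z = z_new
    return z

def fir_filterbank_features(frame, filter_taps, decimate=8):
    """Parallel FIR filter bank + downsampling as a simple feature extractor."""
    feats = []
    for taps in filter_taps:
        y = np.convolve(frame, taps, mode="same")   # one FIR band
        y = np.abs(y)[::decimate]                   # rectify and decimate
        feats.append(y.sum())                       # crude band energy
    return np.asarray(feats)

# Toy usage: one random audio frame, eight random FIR bands.
rng = np.random.default_rng(0)
frame = rng.standard_normal(1024)
bank = rng.standard_normal((8, 32))
features = fir_filterbank_features(frame, bank)
print(mp_approx(features, gamma=0.5 * features.sum()))
```

The only non-trivial arithmetic above is the division by the active-set size; approximating it with shifts would keep the computation within the add/subtract/shift/compare primitives the abstract lists, which is the sense in which the framework is multiplierless.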
Related papers
- Filter Pruning for Efficient CNNs via Knowledge-driven Differential Filter Sampler [103.97487121678276]
Filter pruning simultaneously accelerates the computation and reduces the memory overhead of CNNs.
We propose a novel Knowledge-driven Differential Filter Sampler (KDFS) with a Masked Filter Modeling (MFM) framework for filter pruning.
arXiv Detail & Related papers (2023-07-01T02:28:41Z)
- In-filter Computing For Designing Ultra-light Acoustic Pattern Recognizers [6.335302509003343]
We present a novel in-filter computing framework that can be used for designing ultra-light acoustic classifiers.
The proposed architecture integrates the convolution and nonlinear filtering operations directly into the kernels of a Support Vector Machine.
We show that the system can achieve robust classification performance on benchmark sound recognition tasks using only 1.5k Look-Up Tables (LUTs) and 2.8k Flip-Flops (FFs).
arXiv Detail & Related papers (2021-09-11T08:16:53Z)
- Training Compact CNNs for Image Classification using Dynamic-coded Filter Fusion [139.71852076031962]
We present a novel filter pruning method, dubbed dynamic-coded filter fusion (DCFF).
We derive compact CNNs in a computation-economical and regularization-free manner for efficient image classification.
Our DCFF derives a compact VGGNet-16 with only 72.77M FLOPs and 1.06M parameters while reaching top-1 accuracy of 93.47%.
arXiv Detail & Related papers (2021-07-14T18:07:38Z)
- Multiplierless MP-Kernel Machine For Energy-efficient Edge Devices [6.335302509003343]
We present a novel framework for designing multiplierless kernel machines.
The framework uses a piecewise linear (PWL) approximation based on a margin propagation (MP) technique.
We propose a hardware-friendly MP-based inference and online training algorithm that has been optimized for a Field Programmable Gate Array (FPGA) platform.
arXiv Detail & Related papers (2021-06-03T16:06:08Z)
- Online Multi-Object Tracking and Segmentation with GMPHD Filter and Mask-based Affinity Fusion [79.87371506464454]
We propose a fully online multi-object tracking and segmentation (MOTS) method that uses instance segmentation results as an input.
The proposed method is based on the Gaussian mixture probability hypothesis density (GMPHD) filter, a hierarchical data association (HDA), and a mask-based affinity fusion (MAF) model.
In experiments on two popular MOTS datasets, each of the key modules yields improvements.
arXiv Detail & Related papers (2020-08-31T21:06:22Z)
- Optimization of data-driven filterbank for automatic speaker verification [8.175789701289512]
We propose a new data-driven filter design method which optimizes filter parameters from given speech data.
The main advantage of the proposed method is that it requires only a very limited amount of unlabeled speech data.
We show that the acoustic features created with proposed filterbank are better than existing mel-frequency cepstral coefficients (MFCCs) and speech-signal-based frequency cepstral coefficients (SFCCs) in most cases.
arXiv Detail & Related papers (2020-07-21T11:42:20Z)
- Innovative And Additive Outlier Robust Kalman Filtering With A Robust Particle Filter [68.8204255655161]
We propose CE-BASS, a particle mixture Kalman filter which is robust to both innovative and additive outliers, and able to fully capture multi-modality in the distribution of the hidden state.
Furthermore, the particle sampling approach re-samples past states, which enables CE-BASS to handle innovative outliers which are not immediately visible in the observations, such as trend changes.
arXiv Detail & Related papers (2020-07-07T07:11:09Z)
- Ensemble Wrapper Subsampling for Deep Modulation Classification [70.91089216571035]
Subsampling of received wireless signals is important for relaxing hardware requirements as well as the computational cost of signal processing algorithms.
We propose a subsampling technique to facilitate the use of deep learning for automatic modulation classification in wireless communication systems.
arXiv Detail & Related papers (2020-05-10T06:11:13Z)
- Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
arXiv Detail & Related papers (2020-05-06T07:41:22Z)