Trimming Feature Extraction and Inference for MCU-based Edge NILM: a
Systematic Approach
- URL: http://arxiv.org/abs/2105.10302v1
- Date: Fri, 21 May 2021 12:08:16 GMT
- Title: Trimming Feature Extraction and Inference for MCU-based Edge NILM: a
Systematic Approach
- Authors: Enrico Tabanelli, Davide Brunelli, Andrea Acquaviva, Luca Benini
- Abstract summary: Non-Intrusive Load Monitoring (NILM) enables the disaggregation of the global power consumption of multiple loads, taken from a single smart electrical meter, into appliance-level details.
State-of-the-Art approaches are based on Machine Learning methods and exploit the fusion of time- and frequency-domain features from current and voltage sensors.
Running low-latency NILM on low-cost, resource-constrained MCU-based meters is currently an open challenge.
This paper addresses the optimization of the feature spaces as well as the computational and storage cost reduction needed for executing State-of-the-Art (SoA) NILM algorithms on memory- and compute-limited MCUs.
- Score: 14.491636333680297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-Intrusive Load Monitoring (NILM) enables the disaggregation of the global
power consumption of multiple loads, taken from a single smart electrical
meter, into appliance-level details. State-of-the-Art approaches are based on
Machine Learning methods and exploit the fusion of time- and frequency-domain
features from current and voltage sensors. Unfortunately, these methods are
compute-demanding and memory-intensive. Therefore, running low-latency NILM on
low-cost, resource-constrained MCU-based meters is currently an open challenge.
This paper addresses the optimization of the feature spaces as well as the
computational and storage cost reduction needed for executing State-of-the-Art
(SoA) NILM algorithms on memory- and compute-limited MCUs. We compare four
supervised learning techniques on different classification scenarios and
characterize the overall NILM pipeline's implementation on an MCU-based Smart
Measurement Node. Experimental results demonstrate that optimizing the feature
space enables edge MCU-based NILM with 95.15% accuracy, resulting in a small
drop compared to the most-accurate feature vector deployment (96.19%) while
achieving up to 5.45x speed-up and 80.56% storage reduction. Furthermore, we
show that low-latency NILM relying only on current measurements reaches almost
80% accuracy, allowing a major cost reduction by removing voltage sensors from
the hardware design.
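As a rough illustration of the time- and frequency-domain feature fusion described above, the sketch below computes a compact feature vector (RMS current and voltage, active power, power factor, and odd current harmonics) from one window of samples. The sampling rate, window length, harmonic count, and feature choices are assumptions for illustration, not the paper's exact pipeline.

```python
# Hypothetical NILM feature extractor: time-domain plus FFT-based features from
# one window of current/voltage samples. All constants are assumed values.
import numpy as np

FS = 5000          # sampling rate in Hz (assumed)
F_LINE = 50        # mains frequency in Hz (assumed)
N_HARMONICS = 5    # number of odd current harmonics to keep (assumed)

def extract_features(i_wave: np.ndarray, v_wave: np.ndarray) -> np.ndarray:
    """Return a compact feature vector for one acquisition window."""
    i_rms = np.sqrt(np.mean(i_wave ** 2))
    v_rms = np.sqrt(np.mean(v_wave ** 2))
    p_active = np.mean(i_wave * v_wave)                # real power
    s_apparent = i_rms * v_rms                         # apparent power
    pf = p_active / s_apparent if s_apparent else 0.0  # power factor
    # Frequency-domain part: magnitudes of the odd current harmonics.
    spectrum = np.abs(np.fft.rfft(i_wave)) / len(i_wave)
    bins_per_hz = len(i_wave) / FS
    harmonics = [spectrum[int(round(k * F_LINE * bins_per_hz))]
                 for k in range(1, 2 * N_HARMONICS, 2)]
    return np.array([i_rms, v_rms, p_active, pf, *harmonics], dtype=np.float32)

# Usage with a synthetic window (resistive load plus a small 3rd harmonic).
t = np.arange(0, 0.2, 1.0 / FS)
v = 325 * np.sin(2 * np.pi * F_LINE * t)
i = 2.5 * np.sin(2 * np.pi * F_LINE * t) + 0.3 * np.sin(2 * np.pi * 3 * F_LINE * t)
print(extract_features(i, v))
```

On an actual MCU deployment this step would typically be ported to fixed-point C, with the resulting vector fed to one of the supervised classifiers being compared.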
Related papers
- A Non-Invasive Load Monitoring Method for Edge Computing Based on MobileNetV3 and Dynamic Time Regulation [2.405805395043031]
Methods based on machine learning and deep learning have achieved remarkable results in load decomposition accuracy.
However, these methods generally suffer from high computational costs and large memory requirements.
This study proposes an innovative Dynamic Time Warping (DTW) algorithm in the time-frequency domain.
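For context, the classic dynamic-programming DTW distance that such methods build on is sketched below; this is the textbook O(n·m) formulation, not the time-frequency-domain variant proposed in that paper.

```python
# Textbook DTW distance between two 1-D sequences via dynamic programming.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])               # local distance
            cost[i, j] = d + min(cost[i - 1, j],       # allowed step patterns
                                 cost[i, j - 1],
                                 cost[i - 1, j - 1])
    return float(cost[n, m])

print(dtw_distance(np.array([0., 1., 2., 1.]), np.array([0., 1., 1., 2., 1.])))
```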
arXiv Detail & Related papers (2025-04-22T06:43:33Z)
- On-Sensor Convolutional Neural Networks with Early-Exits [3.916521228619074]
We introduce for the first time in the literature the optimized design and implementation of Depth-First CNNs operating on the Intelligent Sensor Processing Unit (ISPU) within an Inertial Measurement Unit (IMU) by STMicroelectronics.
Our approach partitions the CNN between the ISPU and the microcontroller (MCU) and employs an Early-Exit mechanism to stop the computations on the IMU when enough confidence about the results is achieved.
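A minimal illustration of such an early-exit split is sketched below; the confidence threshold, the point at which the network is partitioned, and the toy model shapes are assumptions rather than the actual ISPU/MCU mapping.

```python
# Toy early-exit inference: a sensor-side stage produces an early prediction,
# and the MCU-side stage runs only when confidence is below a threshold.
import numpy as np

CONF_THRESHOLD = 0.9   # assumed confidence needed to stop on the sensor

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def classify_with_early_exit(x, sensor_stage, exit_head, mcu_stage, final_head):
    feat = sensor_stage(x)                  # would run on the ISPU
    early = softmax(exit_head(feat))
    if early.max() >= CONF_THRESHOLD:       # confident enough: skip the MCU
        return int(early.argmax()), "early-exit"
    logits = final_head(mcu_stage(feat))    # otherwise hand off to the MCU
    return int(softmax(logits).argmax()), "full-network"

# Dummy linear stages standing in for the real CNN partitions.
rng = np.random.default_rng(0)
W1, W2, W3, W4 = (rng.standard_normal(s) for s in [(8, 16), (16, 4), (16, 16), (16, 4)])
print(classify_with_early_exit(
    rng.standard_normal(8),
    sensor_stage=lambda x: np.tanh(x @ W1),
    exit_head=lambda f: f @ W2,
    mcu_stage=lambda f: np.tanh(f @ W3),
    final_head=lambda f: f @ W4,
))
```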
arXiv Detail & Related papers (2025-03-21T08:31:07Z)
- Preventing Non-intrusive Load Monitoring Privacy Invasion: A Precise Adversarial Attack Scheme for Networked Smart Meters [99.90150979732641]
This paper proposes an innovative scheme based on adversarial attacks.
The scheme effectively prevents NILM models from violating appliance-level privacy, while also ensuring accurate billing calculation for users.
Our solutions exhibit transferability, making the generated perturbation signal from one target model applicable to other diverse NILM models.
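As a generic illustration only, the sketch below applies a one-step FGSM-style sign-gradient perturbation to an aggregate power window against a linear classifier; the paper's actual attack scheme, its billing-accuracy constraints, and its transferability mechanism are not reproduced here.

```python
# Generic FGSM-style perturbation of a metered power trace against a binary
# linear classifier sigmoid(w @ x + b). Purely illustrative.
import numpy as np

def fgsm_perturb(x, w, b, true_label, eps=0.05):
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))   # model's predicted probability
    grad_x = (p - true_label) * w            # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)         # bounded perturbation of the trace

rng = np.random.default_rng(1)
trace = np.abs(rng.standard_normal(64))      # stand-in aggregate power window
w, b = rng.standard_normal(64), 0.0
adv = fgsm_perturb(trace, w, b, true_label=1)
print(float(np.abs(adv - trace).max()))      # perturbation magnitude <= eps
```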
arXiv Detail & Related papers (2024-12-22T07:06:46Z)
- Benchmarking Active Learning for NILM [2.896640219222859]
Non-intrusive load monitoring (NILM) focuses on disaggregating total household power consumption into appliance-specific usage.
Many advanced NILM methods are based on neural networks that typically require substantial amounts of labeled appliance data.
We propose an active learning approach to selectively install appliance monitors in a limited number of houses.
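A common selection rule in such settings is uncertainty sampling; the hypothetical helper below picks the houses whose current predictions are least confident, which is only one of several acquisition strategies a benchmark might compare.

```python
# Uncertainty sampling: choose the houses where the current model is least sure,
# as candidates for installing appliance-level monitors. Illustrative only.
import numpy as np

def pick_houses_to_label(model_probs: np.ndarray, budget: int) -> np.ndarray:
    """model_probs: (n_houses, n_classes) predicted appliance-state probabilities.
    Returns the indices of the `budget` least-confident houses."""
    confidence = model_probs.max(axis=1)
    return np.argsort(confidence)[:budget]

rng = np.random.default_rng(2)
probs = rng.dirichlet(np.ones(4), size=20)    # fake predictions for 20 houses
print(pick_houses_to_label(probs, budget=3))  # houses to instrument next
```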
arXiv Detail & Related papers (2024-11-24T12:22:59Z)
- Progressive Mixed-Precision Decoding for Efficient LLM Inference [49.05448842542558]
We introduce Progressive Mixed-Precision Decoding (PMPD) to address the memory-boundedness of decoding.
PMPD achieves a 1.4-12.2x speedup in matrix-vector multiplications over fp16 models.
Our approach delivers a throughput gain of 3.8-8.0x over fp16 models and up to 1.54x over uniform quantization approaches.
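The underlying idea, lowering arithmetic precision progressively as decoding proceeds, can be sketched as follows; the 8/4/2-bit schedule and switch points below are invented for illustration and are not PMPD's actual policy.

```python
# Toy progressive-precision schedule: later decoding steps use coarser weights.
import numpy as np

def precision_for_step(step: int) -> int:
    if step < 16:
        return 8          # early tokens: higher precision (assumed cut-offs)
    if step < 64:
        return 4
    return 2              # late tokens tolerate coarser weights

def quantize(w: np.ndarray, bits: int) -> np.ndarray:
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale        # fake-quantized copy of the weights

rng = np.random.default_rng(3)
W, h = rng.standard_normal((32, 32)), rng.standard_normal(32)
for step in (0, 20, 100):
    y = quantize(W, precision_for_step(step)) @ h   # matrix-vector product
    print(step, precision_for_step(step), round(float(y[0]), 4))
```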
arXiv Detail & Related papers (2024-10-17T11:46:33Z)
- Accelerating TinyML Inference on Microcontrollers through Approximate Kernels [3.566060656925169]
In this work, we combine approximate computing and software kernel design to accelerate the inference of approximate CNN models on microcontrollers.
Our evaluation on an STM32-Nucleo board and 2 popular CNNs trained on the CIFAR-10 dataset shows that, compared to state-of-the-art exact inference, our solutions can feature on average 21% latency reduction.
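One classic approximate-computing trick such kernels can build on is truncating operand LSBs before multiplying; the toy example below shows the resulting error in isolation and is not the paper's kernel design.

```python
# Approximate multiplication by dropping low-order operand bits. Illustrative only.
def approx_mul(a: int, b: int, drop_bits: int = 4) -> int:
    return ((a >> drop_bits) * (b >> drop_bits)) << (2 * drop_bits)

exact = 1234 * 5678
approx = approx_mul(1234, 5678)
print(exact, approx, abs(exact - approx) / exact)   # small relative error
```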
arXiv Detail & Related papers (2024-09-25T11:10:33Z)
- Parameter-Adaptive Approximate MPC: Tuning Neural-Network Controllers without Retraining [50.00291020618743]
This work introduces a novel, parameter-adaptive AMPC architecture capable of online tuning without recomputing large datasets and retraining.
We showcase the effectiveness of parameter-adaptive AMPC by controlling the swing-ups of two different real cartpole systems with a severely resource-constrained microcontroller (MCU).
Taken together, these contributions represent a marked step toward the practical application of AMPC in real-world systems.
arXiv Detail & Related papers (2024-04-08T20:02:19Z)
- A Precision-Optimized Fixed-Point Near-Memory Digital Processing Unit for Analog In-Memory Computing [10.992736723518036]
We propose a Near-Memory digital Processing Unit (NMPU) based on fixed-point arithmetic.
It achieves competitive accuracy and higher computing throughput than previous approaches.
We validate the efficacy of the NMPU by using data from an AIMC chip and demonstrate that a simulated AIMC system with the proposed NMPU outperforms existing FP16-based implementations.
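A generic fixed-point multiply-accumulate with a wide accumulator and a final requantization step, of the kind such a unit performs on AIMC outputs, is sketched below; the Q-format, accumulator width, and saturation choices are assumptions.

```python
# Fixed-point dot product: int8 Q1.7 inputs, 32-bit accumulation, int8 Q1.7 output.
import numpy as np

IN_FRAC_BITS = 7   # Q1.7 inputs and output (assumed format)

def fixed_point_dot(a_q: np.ndarray, b_q: np.ndarray) -> int:
    acc = int(np.sum(a_q.astype(np.int32) * b_q.astype(np.int32)))  # wide accumulate
    acc >>= IN_FRAC_BITS                          # rescale products back to Q1.7
    return int(np.clip(acc, -128, 127))           # saturate to int8

a = np.array([64, -32, 16], dtype=np.int8)        # 0.5, -0.25, 0.125 in Q1.7
b = np.array([64, 64, 64], dtype=np.int8)         # 0.5 each
print(fixed_point_dot(a, b))                      # 24, i.e. 0.1875 in Q1.7
```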
arXiv Detail & Related papers (2024-02-12T10:30:45Z)
- Tube-NeRF: Efficient Imitation Learning of Visuomotor Policies from MPC using Tube-Guided Data Augmentation and NeRFs [42.220568722735095]
Imitation learning (IL) can train computationally-efficient sensorimotor policies from a resource-intensive Model Predictive Controller (MPC).
We propose a data augmentation (DA) strategy that enables efficient learning of vision-based policies.
We show 80-fold increase in demonstration efficiency and a 50% reduction in training time over current IL methods.
arXiv Detail & Related papers (2023-11-23T18:54:25Z)
- EPIM: Efficient Processing-In-Memory Accelerators based on Epitome [78.79382890789607]
We introduce the Epitome, a lightweight neural operator offering convolution-like functionality.
On the software side, we evaluate epitomes' latency and energy on PIM accelerators.
We introduce a PIM-aware layer-wise design method to enhance their hardware efficiency.
arXiv Detail & Related papers (2023-11-12T17:56:39Z)
- Fast Flux-Activated Leakage Reduction for Superconducting Quantum Circuits [84.60542868688235]
Leakage out of the computational subspace arises from the multi-level structure of qubit implementations.
We present a resource-efficient universal leakage reduction unit for superconducting qubits using parametric flux modulation.
We demonstrate that using the leakage reduction unit in repeated weight-two stabilizer measurements reduces the total number of detected errors in a scalable fashion.
arXiv Detail & Related papers (2023-09-13T16:21:32Z)
- Quantized Neural Networks for Low-Precision Accumulation with Guaranteed Overflow Avoidance [68.8204255655161]
We introduce a quantization-aware training algorithm that guarantees avoiding numerical overflow when reducing the precision of accumulators during inference.
We evaluate our algorithm across multiple quantized models that we train for different tasks, showing that our approach can reduce the precision of accumulators while maintaining model accuracy with respect to a floating-point baseline.
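The quantity being controlled can be seen from a simple worst-case bound on the accumulator width needed for an integer dot product, sketched below; this is a generic back-of-the-envelope check, not the paper's quantization-aware training algorithm.

```python
# Worst-case signed accumulator width for summing n products of quantized operands.
import math

def min_accumulator_bits(n_terms: int, w_bits: int, a_bits: int) -> int:
    """Assumes signed w_bits-bit weights and unsigned a_bits-bit activations."""
    w_max = 2 ** (w_bits - 1)            # largest weight magnitude
    a_max = 2 ** a_bits - 1              # largest activation value
    worst_case = n_terms * w_max * a_max
    return math.ceil(math.log2(worst_case + 1)) + 1   # +1 for the sign bit

# E.g. a 3x3x64 convolution with int8 weights and uint8 activations:
print(min_accumulator_bits(n_terms=3 * 3 * 64, w_bits=8, a_bits=8))   # -> 26
```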
arXiv Detail & Related papers (2023-01-31T02:46:57Z)
- Automated Machine Learning: A Case Study on Non-Intrusive Appliance Load Monitoring [81.06807079998117]
We propose a novel approach to enable Automated Machine Learning (AutoML) for Non-Intrusive Appliance Load Monitoring (NIALM).
NIALM offers a cost-effective alternative to smart meters for measuring the energy consumption of electric devices and appliances.
arXiv Detail & Related papers (2022-03-06T10:12:56Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and high computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- Q-EEGNet: an Energy-Efficient 8-bit Quantized Parallel EEGNet Implementation for Edge Motor-Imagery Brain-Machine Interfaces [16.381467082472515]
Motor-Imagery Brain-Machine Interfaces (MI-BMIs) promise direct and accessible communication between human brains and machines.
Deep learning models have emerged for classifying EEG signals.
These models often exceed the limitations of edge devices due to their memory and computational requirements.
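A minimal symmetric 8-bit weight quantizer of the kind such edge EEG deployments rely on is sketched below; per-tensor scaling and the rounding choice are assumptions, not Q-EEGNet's exact scheme.

```python
# Symmetric per-tensor int8 weight quantization and dequantization.
import numpy as np

def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0
    w_q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return w_q, scale

def dequantize(w_q: np.ndarray, scale: float) -> np.ndarray:
    return w_q.astype(np.float32) * scale

rng = np.random.default_rng(4)
w = rng.standard_normal((16, 8)).astype(np.float32)
w_q, s = quantize_int8(w)
print(float(np.abs(w - dequantize(w_q, s)).max()))   # error is at most ~scale/2
```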
arXiv Detail & Related papers (2020-04-24T12:29:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.