MicroNAS: An Automated Framework for Developing a Fall Detection System
- URL: http://arxiv.org/abs/2504.07397v1
- Date: Thu, 10 Apr 2025 02:32:47 GMT
- Title: MicroNAS: An Automated Framework for Developing a Fall Detection System
- Authors: Seyed Mojtaba Mohasel, John Sheppard, Lindsey K. Molina, Richard R. Neptune, Shane R. Wurdeman, Corey A. Pew
- Abstract summary: MicroNAS is an automated neural architecture search tool specifically designed to create models for microcontrollers with small memory resources. The ESP32 microcontroller, with 320 KB of memory, is used as the target platform. A fall detection system (FDS) for lower-limb amputees is developed as a pilot study.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work presents MicroNAS, an automated neural architecture search tool specifically designed to create models optimized for microcontrollers with small memory resources. The ESP32 microcontroller, with 320 KB of memory, is used as the target platform. The artificial intelligence contribution lies in a novel method for optimizing convolutional neural network and gated recurrent unit architectures by considering the memory size of the target microcontroller as a guide. A comparison is made between memory-driven model optimization and traditional two-stage methods, which use pruning, to show the effectiveness of the proposed framework. To demonstrate the engineering application of MicroNAS, a fall detection system (FDS) for lower-limb amputees is developed as a pilot study. A critical challenge in fall detection studies, class imbalance in the dataset, is addressed. The results show that MicroNAS models achieved higher F1-scores than alternative approaches, such as ensemble methods and H2O Automated Machine Learning, presenting a significant step forward in real-time FDS development. Biomechanists using body-worn sensors for activity detection can adopt the open-source code to design machine learning models tailored for microcontroller platforms with limited memory.
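The abstract's central idea is to use the target microcontroller's memory size as a guide during architecture search. The sketch below is a hypothetical, simplified version of that constraint check, not the authors' released code: it enumerates a toy CNN+GRU search space and keeps only candidates whose estimated parameter memory fits the ESP32's 320 KB budget (layer set, search ranges, and byte estimates are all illustrative assumptions).

```python
# Hypothetical memory-guided search filter: keep only CNN+GRU candidates
# whose estimated parameter memory fits the ESP32's 320 KB budget.
# Search space, layer math, and byte sizes are illustrative assumptions.
import itertools

RAM_BUDGET_BYTES = 320 * 1024  # ESP32 target from the paper
BYTES_PER_PARAM = 4            # float32; a quantized model would use less

def conv1d_params(in_ch: int, out_ch: int, kernel: int) -> int:
    return in_ch * out_ch * kernel + out_ch  # weights + biases

def gru_params(in_dim: int, hidden: int) -> int:
    # 3 gates, each with input weights, recurrent weights, and biases
    return 3 * (in_dim * hidden + hidden * hidden + hidden)

def fits_budget(filters: int, kernel: int, hidden: int, n_features: int = 6) -> bool:
    # Parameter memory only; activations would tighten the budget further.
    params = conv1d_params(n_features, filters, kernel) + gru_params(filters, hidden)
    return params * BYTES_PER_PARAM <= RAM_BUDGET_BYTES

# Enumerate a toy search space and keep only deployable candidates.
space = itertools.product([8, 16, 32, 64], [3, 5, 7], [32, 64, 128, 256])
candidates = [c for c in space if fits_budget(*c)]
print(f"{len(candidates)} candidate architectures fit in 320 KB")
```

Folding the constraint into the search itself, rather than pruning an oversized model afterwards, is exactly the contrast the abstract draws with two-stage methods.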
Related papers
- AI-ANNE: (A) (N)eural (N)et for (E)xploration: Transferring Deep Learning Models onto Microcontrollers and Embedded Systems
This working paper explores the integration of neural networks onto resource-constrained embedded systems such as the Raspberry Pi Pico and Raspberry Pi Pico 2. A TinyML approach transfers neural networks directly onto these microcontrollers, enabling real-time, low-latency, and energy-efficient inference. Two neural networks running on microcontrollers are presented as an example of data classification.
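Running a network "directly on" a Pico-class board implies inference code with no heavyweight dependencies. Below is a hypothetical, minimal dense-layer forward pass in plain Python of the kind that can run under MicroPython; the weights are placeholders, not values from the paper.

```python
# Minimal dense-network forward pass with no dependencies, in the style
# of code that can run under MicroPython on a Raspberry Pi Pico.
# All weights below are illustrative placeholders.
import math

def dense(x, weights, biases):
    # weights: list of rows, one row per output neuron
    return [sum(w * xi for w, xi in zip(row, x)) + b
            for row, b in zip(weights, biases)]

def relu(v):
    return [max(0.0, a) for a in v]

def sigmoid(v):
    return [1.0 / (1.0 + math.exp(-a)) for a in v]

# Toy 2-4-1 binary classifier on a two-channel sensor reading
W1 = [[0.5, -0.2], [0.1, 0.9], [-0.7, 0.3], [0.4, 0.4]]
b1 = [0.0, 0.1, -0.1, 0.0]
W2 = [[0.6, -0.4, 0.2, 0.8]]
b2 = [-0.3]

reading = [0.12, 0.87]
prob = sigmoid(dense(relu(dense(reading, W1, b1)), W2, b2))[0]
print("class probability:", prob)
```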
arXiv Detail & Related papers (2025-01-01T10:29:55Z)
- Autonomous Droplet Microfluidic Design Framework with Large Language Models
This study presents MicroFluidic-LLMs, a framework designed for processing microfluidics data and extracting features from it.
It overcomes processing challenges by transforming the content into a linguistic format and leveraging pre-trained large language models.
We demonstrate that the MicroFluidic-LLMs framework can make deep neural network models both highly effective and straightforward to apply.
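The key move in this summary is recasting non-text content into language for a pre-trained LLM. A hypothetical sketch of that serialization step follows; the field names and prompt wording are invented for illustration and are not from the paper.

```python
# Hypothetical example of turning structured microfluidics data into a
# linguistic prompt for a pre-trained LLM; keys and phrasing are invented.
record = {
    "oil_flow_ul_min": 30.0,
    "water_flow_ul_min": 5.0,
    "channel_width_um": 100,
    "observed_regime": "dripping",
}

def to_prompt(rec: dict) -> str:
    facts = "; ".join(f"{k.replace('_', ' ')} = {v}" for k, v in rec.items())
    return (f"Given a droplet microfluidics experiment where {facts}, "
            "predict the droplet diameter and generation rate.")

print(to_prompt(record))  # this string would be sent to the LLM
```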
arXiv Detail & Related papers (2024-11-11T03:20:53Z)
- Accelerating TinyML Inference on Microcontrollers through Approximate Kernels
In this work, we combine approximate computing and software kernel design to accelerate the inference of approximate CNN models on microcontrollers.
Our evaluation on an STM32-Nucleo board and two popular CNNs trained on the CIFAR-10 dataset shows that, compared to state-of-the-art exact inference, our solutions achieve a 21% latency reduction on average.
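As a rough illustration of approximate computing inside an inference kernel, the sketch below truncates the low-order bits of int8 operands before multiplying, trading accuracy for cheaper arithmetic. The specific approximation is an assumption for illustration, not the paper's kernel design.

```python
# Illustrative approximate dot product: drop the low bits of each int8
# operand before multiplying. The paper's actual kernels differ; this
# only demonstrates the accuracy/cost trade-off of approximation.
def approx_mul(a: int, b: int, drop_bits: int = 2) -> int:
    # Truncate operands, multiply, then rescale to the original range.
    return ((a >> drop_bits) * (b >> drop_bits)) << (2 * drop_bits)

def dot(xs, ws, mul):
    return sum(mul(x, w) for x, w in zip(xs, ws))

xs = [12, -45, 88, 3, -17]
ws = [25, 7, -33, 90, 11]
exact = dot(xs, ws, lambda a, b: a * b)
approx = dot(xs, ws, approx_mul)
print(f"exact={exact} approx={approx} rel_err={(approx - exact) / exact:.2%}")
```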
arXiv Detail & Related papers (2024-09-25T11:10:33Z)
- MicroNAS: Memory and Latency Constrained Hardware-Aware Neural Architecture Search for Time Series Classification on Microcontrollers
We adapt the concept of differentiable neural architecture search (DNAS) to solve the time-series classification problem on resource-constrained microcontrollers (MCUs).
We introduce MicroNAS, a domain-specific HW-NAS system that integrates DNAS, lookup tables, dynamic convolutions, and a novel search space specifically designed for time-series classification on MCUs.
Our studies on different MCUs and standard benchmark datasets demonstrate that MicroNAS finds MCU-tailored architectures that achieve performance (F1-score) close to that of state-of-the-art desktop models.
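In HW-NAS systems of this kind, the hardware-cost side typically comes from a lookup table of per-operator latencies measured on the MCU, combined with softmax-weighted architecture parameters so the estimate stays differentiable. A minimal sketch under those assumptions (the table values and operator names are invented):

```python
# Sketch of a differentiable latency estimate: per-op latencies come from
# a measured lookup table, and candidate ops are mixed by softmax weights.
# LUT values and op names are invented for illustration.
import math

LATENCY_LUT_MS = {"conv_k3": 1.8, "conv_k5": 2.9, "dyn_conv": 2.2, "skip": 0.1}

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def expected_latency(alpha_logits):
    # Expected cost of one searchable layer: sum_i p_i * LUT[op_i]
    probs = softmax(alpha_logits)
    return sum(p * LATENCY_LUT_MS[op] for p, op in zip(probs, LATENCY_LUT_MS))

# Architecture parameters for one layer (learned by gradient descent in DNAS)
alphas = [0.3, -1.2, 0.8, -0.5]
print(f"expected layer latency: {expected_latency(alphas):.2f} ms")
```

Because the estimate is a smooth function of the architecture parameters, it can be added to the training loss and minimized jointly with accuracy.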
arXiv Detail & Related papers (2023-10-27T06:55:15Z)
- MCUFormer: Deploying Vision Transformers on Microcontrollers with Limited Memory
We propose a hardware-algorithm co-optimization method called MCUFormer to deploy vision transformers on microcontrollers with extremely limited memory.
Experimental results demonstrate that MCUFormer achieves 73.62% top-1 accuracy on ImageNet for image classification with 320 KB of memory.
arXiv Detail & Related papers (2023-10-25T18:00:26Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- DeepPicarMicro: Applying TinyML to Autonomous Cyber Physical Systems
We present DeepPicarMicro, a small self-driving RC car testbed, which runs a convolutional neural network (CNN) on a Raspberry Pi Pico MCU.
We apply state-of-the-art DNN optimizations to successfully fit the well-known PilotNet CNN architecture onto the MCU.
We observe an interesting relationship between the accuracy, latency, and control performance of a system.
arXiv Detail & Related papers (2022-08-23T21:58:53Z)
- MCUNetV2: Memory-Efficient Patch-based Inference for Tiny Deep Learning
We find that the memory bottleneck is due to the imbalanced memory distribution in convolutional neural network (CNN) designs.
We propose a generic patch-by-patch inference scheduling, which significantly cuts down the peak memory.
We automate the process with neural architecture search to jointly optimize the neural architecture and inference scheduling, leading to MCUNetV2.
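The peak-memory saving of patch-based inference comes from holding only one spatial patch of the early activations in SRAM at a time instead of the whole feature map. A back-of-the-envelope comparison under assumed tensor shapes (not MCUNetV2's actual configuration):

```python
# Back-of-the-envelope peak-activation comparison for whole-layer vs.
# patch-by-patch inference. Shapes and patch grid are assumptions.
def act_bytes(h, w, c, bytes_per_elem=1):  # int8 activations
    return h * w * c * bytes_per_elem

H, W, C = 224, 224, 16      # an early, memory-dominant feature map
patches = 4                  # split each spatial dim into 4 -> 16 patches

full_layer = act_bytes(H, W, C)
per_patch = act_bytes(H // patches, W // patches, C)

print(f"whole feature map: {full_layer / 1024:.0f} KB")
print(f"one patch:         {per_patch / 1024:.0f} KB "
      f"({patches ** 2}x lower peak, ignoring halo overlap)")
```

In practice the saving is smaller than the ideal 16x because neighboring patches must overlap by the receptive field (the "halo"), which is part of what the joint architecture/scheduling search trades off.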
arXiv Detail & Related papers (2021-10-28T17:58:45Z)
- Conservative Objective Models for Effective Offline Model-Based Optimization
Computational design problems arise in a number of settings, from synthetic biology to computer architectures.
We propose a method that learns a model of the objective function that lower bounds the actual value of the ground-truth objective on out-of-distribution inputs.
Conservative objective models (COMs) are simple to implement and outperform a number of existing methods on a wide range of model-based optimization (MBO) problems.
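The lower-bounding idea can be written as a single training loss: fit the data, and additionally push predictions down on inputs away from the dataset. The toy formulation below uses a crude noise-based OOD proxy (the actual method finds such points differently) and invented shapes; it only shows the shape of the objective.

```python
# Toy conservative training objective: mean-squared fit on the dataset
# plus a penalty that pushes predicted scores down on out-of-distribution
# inputs, so the learned model lower-bounds the truth off-distribution.
# The OOD sampler and alpha are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def com_loss(predict, X, y, alpha=1.0):
    mse = np.mean((predict(X) - y) ** 2)
    X_ood = X + rng.normal(scale=0.5, size=X.shape)  # crude OOD proxy
    return mse + alpha * np.mean(predict(X_ood))     # penalize optimism

# Example with a fixed linear scorer on 2-D designs
w = np.array([0.7, -0.3])
predict = lambda X: X @ w
X = rng.uniform(-1, 1, size=(32, 2))
y = predict(X) + rng.normal(scale=0.1, size=32)
print("conservative loss:", com_loss(predict, X, y))
```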
arXiv Detail & Related papers (2021-07-14T17:55:28Z)
- ALT-MAS: A Data-Efficient Framework for Active Testing of Machine Learning Algorithms
We propose a novel framework to efficiently test a machine learning model using only a small amount of labeled test data.
The idea is to estimate the metrics of interest for a model under test using a Bayesian neural network (BNN).
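One concrete way to realize "estimate the metric with a BNN" is to draw posterior samples of the unknown labels and average the metric they imply over unlabeled data. The sketch below fakes the posterior with random perturbations purely to show the estimator's shape; a real system would use, e.g., MC dropout or variational inference.

```python
# Shape of a BNN-based metric estimate: average the metric implied by
# posterior samples over unlabeled inputs. The 'posterior' here is faked
# with random perturbations, purely for illustration.
import numpy as np

rng = np.random.default_rng(1)

def model_under_test(X):           # frozen model we want to evaluate
    return (X.sum(axis=1) > 0).astype(int)

def bnn_label_samples(X, n_samples=50):
    # Stand-in for posterior predictive draws of the *true* labels
    logits = X.sum(axis=1)[None, :] + rng.normal(scale=0.3,
                                                 size=(n_samples, len(X)))
    return (logits > 0).astype(int)

X_unlabeled = rng.normal(size=(200, 4))
preds = model_under_test(X_unlabeled)
acc_samples = (bnn_label_samples(X_unlabeled) == preds).mean(axis=1)
print(f"estimated accuracy: {acc_samples.mean():.3f} +/- {acc_samples.std():.3f}")
```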
arXiv Detail & Related papers (2021-04-11T12:14:04Z)
- MicroNets: Neural Network Architectures for Deploying TinyML Applications on Commodity Microcontrollers
Machine learning on resource-constrained microcontrollers (MCUs) promises to drastically expand the application space of the Internet of Things (IoT).
TinyML presents severe technical challenges, as deep neural network inference demands a large compute and memory budget.
Neural architecture search (NAS) promises to help design accurate ML models that meet the tight MCU memory, latency, and energy constraints.
arXiv Detail & Related papers (2020-10-21T19:39:39Z)
- DrNAS: Dirichlet Neural Architecture Search
We treat the continuously relaxed architecture mixing weights as random variables modeled by a Dirichlet distribution.
With recently developed pathwise derivatives, the Dirichlet parameters can be easily optimized with gradient-based optimizers.
To alleviate the large memory consumption of differentiable NAS, we propose a simple yet effective progressive learning scheme.
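The Dirichlet treatment means the per-edge operation mixing weights are sampled rather than fixed. A minimal illustration of drawing such weights from a concentration vector (the concentrations and operation names below are invented; in DrNAS they are learned via pathwise gradients):

```python
# Minimal illustration of DrNAS-style sampling: per-edge operation mixing
# weights are drawn from a Dirichlet whose concentration parameters are
# learned during search. Values below are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
ops = ["conv3x3", "conv5x5", "max_pool", "skip"]
concentration = np.array([2.0, 0.7, 0.5, 1.2])   # learned in practice

weights = rng.dirichlet(concentration)            # one stochastic sample
print(dict(zip(ops, weights.round(3))))           # mixing weights sum to 1

# Architecture selection after search: keep the op with highest expected weight
expected = concentration / concentration.sum()
print("selected op:", ops[int(np.argmax(expected))])
```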
arXiv Detail & Related papers (2020-06-18T08:23:02Z)