Data Aware Differentiable Neural Architecture Search for Tiny Keyword Spotting Applications
- URL: http://arxiv.org/abs/2507.15545v1
- Date: Mon, 21 Jul 2025 12:18:38 GMT
- Title: Data Aware Differentiable Neural Architecture Search for Tiny Keyword Spotting Applications
- Authors: Yujia Shi, Emil Njor, Pablo Martínez-Nuevo, Sven Ewan Shepstone, Xenofon Fafoutis
- Abstract summary: We introduce "Data Aware Differentiable Neural Architecture Search". Our approach expands the search space to include data configuration parameters alongside architectural choices. This enables Data Aware Differentiable Neural Architecture Search to co-optimize model architecture and input data characteristics.
- Score: 1.88743314507114
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The success of Machine Learning is increasingly tempered by its significant resource footprint, driving interest in efficient paradigms like TinyML. However, the inherent complexity of designing TinyML systems hampers their broad adoption. To reduce this complexity, we introduce "Data Aware Differentiable Neural Architecture Search". Unlike conventional Differentiable Neural Architecture Search, our approach expands the search space to include data configuration parameters alongside architectural choices. This enables Data Aware Differentiable Neural Architecture Search to co-optimize model architecture and input data characteristics, effectively balancing resource usage and system performance for TinyML applications. Initial results on keyword spotting demonstrate that this novel approach to TinyML system design can generate lean but highly accurate systems.
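To make the co-optimization concrete, the sketch below shows one way the idea could be realized: a DARTS-style softmax relaxation is placed over candidate operations and, in the same spirit, over candidate input configurations (here, hypothetical audio front ends for keyword spotting), so that gradients from a single resource-aware loss reach both sets of parameters. All operations, data configurations, costs, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """DARTS-style continuous relaxation: a softmax-weighted mixture
    over candidate operations, with learnable architecture logits."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

class MixedDataConfig(nn.Module):
    """The data-aware extension: the same relaxation applied to candidate
    input configurations. All transforms must produce a common tensor
    shape so the mixture is well-defined."""
    def __init__(self, transforms):
        super().__init__()
        self.transforms = transforms
        self.beta = nn.Parameter(torch.zeros(len(transforms)))

    def forward(self, raw):
        w = F.softmax(self.beta, dim=0)
        return sum(wi * t(raw) for wi, t in zip(w, self.transforms))

# Hypothetical search space for 1 s audio clips (shapes are illustrative).
data_cfg = MixedDataConfig([
    lambda raw: F.avg_pool1d(raw, 4),   # cheap, low-resolution front end
    lambda raw: F.max_pool1d(raw, 4),   # alternative front end, same shape
])
cell = MixedOp([nn.Conv1d(1, 8, 3, padding=1),
                nn.Conv1d(1, 8, 5, padding=2)])
head = nn.Linear(8 * 250, 12)           # e.g. 12 keyword classes

raw = torch.randn(16, 1, 1000)          # fake batch of raw audio
logits = head(cell(data_cfg(raw)).flatten(1))

# A single resource-aware loss: task loss plus differentiable cost terms
# that depend on both architecture (alpha) and data (beta) parameters.
op_cost = torch.tensor([1.0, 2.0])      # hypothetical per-op costs
cfg_cost = torch.tensor([1.0, 1.5])     # hypothetical per-config costs
expected_cost = (F.softmax(cell.alpha, 0) * op_cost).sum() \
              + (F.softmax(data_cfg.beta, 0) * cfg_cost).sum()
loss = F.cross_entropy(logits, torch.randint(0, 12, (16,))) \
     + 0.1 * expected_cost
loss.backward()                          # gradients flow to alpha AND beta
```

In a full system, the network weights and the (alpha, beta) logits would be updated in the usual bilevel alternation of differentiable NAS, and the final system would keep only the argmax operation and argmax data configuration.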
Related papers
- ZeroLM: Data-Free Transformer Architecture Search for Language Models [54.83882149157548]
Current automated proxy discovery approaches suffer from extended search times, susceptibility to data overfitting, and structural complexity. This paper introduces a novel zero-cost proxy methodology that quantifies model capacity through efficient weight statistics. Our evaluation demonstrates the superiority of this approach, achieving a Spearman's rho of 0.76 and a Kendall's tau of 0.53 on the FlexiBERT benchmark.
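As a rough illustration of the zero-cost-proxy idea (not ZeroLM's actual statistic, which the abstract does not specify), one can score randomly initialized candidates from their weight statistics alone and validate the proxy by rank-correlating scores with known trained accuracies:

```python
import torch
import torch.nn as nn
from scipy.stats import spearmanr, kendalltau

def weight_stat_proxy(model: nn.Module) -> float:
    """Hypothetical zero-cost score: mean absolute weight value at
    initialization. ZeroLM's real statistic is more involved."""
    stats = [p.detach().abs().mean() for p in model.parameters()]
    return torch.stack(stats).mean().item()

# Score a few candidates without any training.
candidates = [nn.Sequential(nn.Linear(32, h), nn.ReLU(), nn.Linear(h, 10))
              for h in (16, 64, 256)]
scores = [weight_stat_proxy(m) for m in candidates]

# On a benchmark such as FlexiBERT, the proxy is judged by how well it
# ranks candidates against their known trained accuracies:
known_accuracies = [0.61, 0.72, 0.78]        # placeholder ground truth
rho, _ = spearmanr(scores, known_accuracies)
tau, _ = kendalltau(scores, known_accuracies)
print(f"Spearman rho={rho:.2f}, Kendall tau={tau:.2f}")
```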
arXiv Detail & Related papers (2025-03-24T13:11:22Z)
- Fast Data Aware Neural Architecture Search via Supernet Accelerated Evaluation [0.43550340493919387]
Tiny machine learning (TinyML) promises to revolutionize fields such as healthcare, environmental monitoring, and industrial maintenance. The complex optimizations required for successful TinyML deployment continue to impede its widespread adoption. We propose a new state-of-the-art Data Aware Neural Architecture Search technique and demonstrate its effectiveness on the novel TinyML 'VisionWake' dataset.
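The supernet trick that makes such searches fast can be illustrated generically (a minimal weight-sharing sketch under our own assumptions, not necessarily this paper's exact mechanism): one over-parameterized network is trained once, and each candidate architecture is then scored by activating only its own path, so no per-candidate training is needed.

```python
import torch
import torch.nn as nn

class Supernet(nn.Module):
    """Toy weight-sharing supernet: every layer holds several candidate
    ops, and a choice tuple activates one op per layer at evaluation."""
    def __init__(self, num_layers=3, ops_per_layer=2, width=16):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.ModuleList(nn.Linear(width, width)
                          for _ in range(ops_per_layer))
            for _ in range(num_layers))

    def forward(self, x, choices):
        for layer, c in zip(self.layers, choices):
            x = torch.relu(layer[c](x))
        return x

net = Supernet()        # in practice trained once; training loop omitted
x = torch.randn(8, 16)
for choices in [(0, 0, 1), (1, 0, 0), (1, 1, 1)]:
    out = net(x, choices)   # candidate scored with inherited weights
    # ...rank candidates by a validation metric computed on `out`...
```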
arXiv Detail & Related papers (2025-02-18T09:51:03Z)
- EM-DARTS: Hierarchical Differentiable Architecture Search for Eye Movement Recognition [20.209756662832365]
Differentiable Neural Architecture Search (DARTS) automates the manual process of architecture design with high search efficiency. We propose EM-DARTS, a hierarchical differentiable architecture search algorithm to automatically design the DL architecture for eye movement recognition. We show that EM-DARTS is capable of producing an optimal architecture that leads to state-of-the-art recognition performance.
arXiv Detail & Related papers (2024-09-22T13:11:08Z)
- AutoTransfer: AutoML with Knowledge Transfer -- An Application to Graph Neural Networks [75.11008617118908]
AutoML techniques consider each task independently from scratch, leading to high computational cost.
Here we propose AutoTransfer, an AutoML solution that improves search efficiency by transferring the prior architectural design knowledge to the novel task of interest.
arXiv Detail & Related papers (2023-03-14T07:23:16Z)
- Multi-Agent Reinforcement Learning for Microprocessor Design Space Exploration [71.95914457415624]
Microprocessor architects are increasingly resorting to domain-specific customization in the quest for high performance and energy efficiency.
We propose an alternative formulation that leverages Multi-Agent RL (MARL) to tackle this problem.
Our evaluation shows that the MARL formulation consistently outperforms single-agent RL baselines.
arXiv Detail & Related papers (2022-11-29T17:10:24Z)
- Efficient Search of Multiple Neural Architectures with Different Complexities via Importance Sampling [3.759936323189417]
This study focuses on architecture-complexity-aware one-shot NAS, which optimizes an objective function composed of the weighted sum of two metrics. The proposed method is applied to the architecture search of convolutional neural networks on the CIFAR-10 and ImageNet datasets.
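The scalarized objective in question has the familiar form sketched below; the metric functions are made-up stubs, and the paper's importance-sampling machinery is omitted entirely:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Arch:
    depth: int
    width: int

def validation_error(a: Arch) -> float:
    # Placeholder: a real system measures this on held-out data.
    return 1.0 / (a.depth * a.width)

def parameter_count(a: Arch) -> float:
    return float(a.depth * a.width ** 2)

def objective(a: Arch, lam: float = 1e-4) -> float:
    """Weighted sum of two metrics: predictive error plus a complexity
    penalty. Sweeping lam yields architectures of different complexities."""
    return validation_error(a) + lam * parameter_count(a)

best = min((Arch(d, w) for d in (2, 4, 8) for w in (16, 64, 256)),
           key=objective)
print(best, objective(best))
```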
arXiv Detail & Related papers (2022-07-21T07:06:03Z)
- Revealing the Invisible with Model and Data Shrinking for Composite-database Micro-expression Recognition [49.463864096615254]
We analyze the influence of learning complexity, including the input complexity and model complexity.
We propose a recurrent convolutional network (RCN) to explore shallower architectures and lower-resolution input data.
We develop three parameter-free modules to integrate with RCN without increasing any learnable parameters.
arXiv Detail & Related papers (2020-06-17T06:19:24Z)
- When Residual Learning Meets Dense Aggregation: Rethinking the Aggregation of Deep Neural Networks [57.0502745301132]
We propose Micro-Dense Nets, a novel architecture with global residual learning and local micro-dense aggregations.
Our micro-dense block can be integrated with neural architecture search based models to boost their performance.
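One plausible PyTorch reading of those two ingredients, with layer sizes and wiring that are illustrative guesses rather than the paper's exact block: dense concatenation aggregates features locally, while a residual connection lets the block learn a refinement of its input.

```python
import torch
import torch.nn as nn

class MicroDenseBlock(nn.Module):
    """Local dense aggregation: each conv sees the concatenation of all
    earlier feature maps (DenseNet-style, kept deliberately small)."""
    def __init__(self, channels, growth=8, steps=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels + i * growth, growth, 3, padding=1)
            for i in range(steps))
        self.fuse = nn.Conv2d(channels + steps * growth, channels, 1)

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(torch.relu(conv(torch.cat(feats, dim=1))))
        # Residual learning: the block outputs a refinement of its input.
        return x + self.fuse(torch.cat(feats, dim=1))

block = MicroDenseBlock(16)
y = block(torch.randn(2, 16, 32, 32))   # shape preserved: (2, 16, 32, 32)
```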
arXiv Detail & Related papers (2020-04-19T08:34:52Z)
- NAS-Count: Counting-by-Density with Neural Architecture Search [74.92941571724525]
We automate the design of counting models with Neural Architecture Search (NAS).
We introduce an end-to-end searched encoder-decoder architecture, the Automatic Multi-Scale Network (AMSNet).
arXiv Detail & Related papers (2020-02-29T09:18:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.