Hidden costs for inference with deep network on embedded system devices
- URL: http://arxiv.org/abs/2601.01698v1
- Date: Mon, 05 Jan 2026 00:18:51 GMT
- Title: Hidden costs for inference with deep network on embedded system devices
- Authors: Chankyu Lee, Woohyun Choi, Sangwook Park
- Abstract summary: This study evaluates the inference performance of various deep learning models in an embedded system environment. Results highlight the importance of considering additional computations between tensors when optimizing deep learning models for real-time performance in embedded systems.
- Score: 7.268322987525604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study evaluates the inference performance of various deep learning models in an embedded system environment. In previous work, the Multiply-Accumulate operation count is typically used to measure the computational load of a deep model. According to this study, however, this metric is of limited use for estimating inference time on embedded devices. This paper asks what aspects are overlooked when computational load is expressed in terms of Multiply-Accumulate operations. In experiments, an image classification task is performed on an embedded system device using the CIFAR-100 dataset, comparing the measured inference times of ten deep models against the theoretically calculated Multiply-Accumulate operations for each model. The results highlight the importance of considering additional computations between tensors when optimizing deep learning models for real-time performance in embedded systems.
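To make the gap concrete, here is a minimal sketch (assuming PyTorch; the two-layer model and input size are illustrative stand-ins, not the ten models from the paper) that counts theoretical convolution MACs by hand and measures wall-clock inference latency, the two quantities the paper compares:

```python
import time
import torch
import torch.nn as nn

def conv2d_macs(layer: nn.Conv2d, out_h: int, out_w: int) -> int:
    """Theoretical multiply-accumulate count for one Conv2d layer:
    kernel_h * kernel_w * (in_channels / groups) * out_channels * out_h * out_w."""
    kh, kw = layer.kernel_size
    return kh * kw * (layer.in_channels // layer.groups) * layer.out_channels * out_h * out_w

# Toy model: MACs alone ignore element-wise ops (batch norm, ReLU, residual
# adds), memory traffic, and framework overhead -- the "hidden costs" at issue.
model = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
).eval()

x = torch.randn(1, 3, 32, 32)  # CIFAR-100-sized input
total_macs = conv2d_macs(model[0], 32, 32) + conv2d_macs(model[3], 32, 32)

# Wall-clock latency: warm up first, then average over many runs.
with torch.no_grad():
    for _ in range(10):
        model(x)
    t0 = time.perf_counter()
    for _ in range(100):
        model(x)
    latency_ms = (time.perf_counter() - t0) / 100 * 1e3

print(f"theoretical MACs: {total_macs:,}  measured latency: {latency_ms:.2f} ms")
```

Run on the embedded target itself, the measured latency typically diverges from the MAC ranking, which is the discrepancy the paper analyzes.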
Related papers
- Learning-Augmented Moment Estimation on Time-Decay Models [55.06256430461023]
We use an oracle for the heavy-hitters of datasets to give learning-augmented algorithms for a number of fundamental problems. We complement our theoretical results with a number of empirical evaluations that demonstrate the practical efficiency of our algorithms on real and synthetic datasets.
arXiv Detail & Related papers (2026-03-03T00:42:34Z)
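A toy illustration of the learning-augmented pattern described above: count oracle-predicted heavy hitters exactly, and handle the tail with an AMS-style sign sketch when estimating the second frequency moment. This is a generic sketch of the pattern, not the paper's algorithms:

```python
import random
from collections import Counter

def f2_learning_augmented(stream, predicted_heavy, num_sketches=32, seed=0):
    """Estimate the second frequency moment F2 = sum_i f_i^2.
    Items the oracle flags as heavy are counted exactly; the tail is handled
    by an AMS-style sign sketch (median-of-means omitted for brevity)."""
    rng = random.Random(seed)
    seeds = [rng.getrandbits(64) for _ in range(num_sketches)]
    heavy_counts = Counter()
    z = [0] * num_sketches  # AMS accumulators for the tail
    for item in stream:
        if item in predicted_heavy:
            heavy_counts[item] += 1
        else:
            for j, s in enumerate(seeds):
                # +/-1 "random sign" derived from a seeded hash of the item
                z[j] += 1 if hash((s, item)) & 1 else -1
    tail_f2 = sum(v * v for v in z) / num_sketches  # E[Z^2] = F2(tail)
    return sum(c * c for c in heavy_counts.values()) + tail_f2

# Usage: one dominant item the oracle predicts, plus a light tail.
stream = ["a"] * 1000 + list("bcdefg") * 5
print(f2_learning_augmented(stream, predicted_heavy={"a"}))
```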
- Efficient Machine Unlearning via Influence Approximation [75.31015485113993]
Influence-based unlearning has emerged as a prominent approach to estimate the impact of individual training samples on model parameters without retraining. This paper establishes a theoretical link between memorizing (incremental learning) and forgetting (unlearning). We introduce the Influence Approximation Unlearning algorithm for efficient machine unlearning from the incremental perspective.
arXiv Detail & Related papers (2025-07-31T05:34:27Z)
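For context, a minimal sketch of the classical influence-function removal step for a ridge-regularized logistic model, which is the kind of estimate such influence-based methods approximate. This is the textbook Newton-style update, not the paper's Influence Approximation Unlearning algorithm:

```python
import numpy as np

def influence_unlearn(theta, X, y, forget_idx, lam=1e-2):
    """Approximately remove one training point from a fitted model via
    the first-order influence update: theta' = theta + H^{-1} grad / n."""
    n, d = X.shape
    p = 1.0 / (1.0 + np.exp(-X @ theta))                 # sigmoid predictions
    W = p * (1.0 - p)                                    # per-sample Hessian weights
    H = (X.T * W) @ X / n + lam * np.eye(d)              # empirical Hessian
    g = (p[forget_idx] - y[forget_idx]) * X[forget_idx]  # grad of the forgotten point
    return theta + np.linalg.solve(H, g) / n
```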
- Exploring Training and Inference Scaling Laws in Generative Retrieval [50.82554729023865]
Generative retrieval reformulates retrieval as an autoregressive generation task, where large language models generate target documents directly from a query. We systematically investigate training and inference scaling laws in generative retrieval, exploring how model size, training data scale, and inference-time compute jointly influence performance.
arXiv Detail & Related papers (2025-03-24T17:59:03Z)
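Scaling-law studies of this kind typically fit a saturating power law to (scale, loss) pairs; a minimal sketch with SciPy, using made-up illustrative numbers rather than the paper's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

# Common scaling-law form: L(N) = a * N^(-b) + c, with c the irreducible loss.
def power_law(n, a, b, c):
    return a * np.power(n, -b) + c

# Hypothetical (model size, eval loss) pairs -- illustrative only.
sizes = np.array([1e6, 1e7, 1e8, 1e9])
losses = np.array([3.2, 2.6, 2.2, 1.9])

(a, b, c), _ = curve_fit(power_law, sizes, losses, p0=(10.0, 0.1, 1.0), maxfev=10000)
print(f"L(N) ~= {a:.2f} * N^-{b:.3f} + {c:.2f}")
```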
- Revisiting SMoE Language Models by Evaluating Inefficiencies with Task Specific Expert Pruning [78.72226641279863]
Sparse Mixture of Experts (SMoE) models have emerged as a scalable alternative to dense models in language modeling.
Our research explores task-specific model pruning to inform decisions about designing SMoE architectures.
We introduce an adaptive task-aware pruning technique UNCURL to reduce the number of experts per MoE layer in an offline manner post-training.
arXiv Detail & Related papers (2024-09-02T22:35:03Z)
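A simple way to picture offline, task-aware expert pruning: rank each MoE layer's experts by routing frequency on the task's data and keep the top-k. The sketch below is an illustrative heuristic, not the UNCURL criterion from the paper:

```python
import numpy as np

def prune_experts_by_usage(router_logits, keep_k):
    """Rank experts in one MoE layer by how often the router selects them
    on task examples, and return the indices of the top-k to retain.
    router_logits: array of shape (num_tokens, num_experts)."""
    top1 = router_logits.argmax(axis=1)                        # routed expert per token
    usage = np.bincount(top1, minlength=router_logits.shape[1])
    return np.argsort(usage)[::-1][:keep_k]

# Example: 8 experts, 1000 tokens of task data, keep the 4 most-used.
logits = np.random.randn(1000, 8)
print(prune_experts_by_usage(logits, keep_k=4))
```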
- A Comparative Study of Machine Learning Models Predicting Energetics of Interacting Defects [5.574191640970887]
We present a comparative study of three different methods to predict the free energy change of systems with interacting defects.
Our findings indicate that the cluster expansion model can achieve precise energetics predictions even with this limited dataset.
This research provides a preliminary evaluation of applying machine learning techniques to imperfect surface systems.
arXiv Detail & Related papers (2024-03-20T02:15:48Z)
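A cluster expansion reduces to a linear fit of effective cluster interactions against cluster correlation features; a minimal sketch with synthetic, illustrative data (the feature matrix and energies below are not the paper's):

```python
import numpy as np

# Cluster expansion sketch: defect energy as a linear combination of cluster
# correlation functions, E(sigma) = sum_c J_c * phi_c(sigma).
phi = np.random.rand(40, 6)          # 40 configurations, 6 cluster functions
true_J = np.array([1.5, -0.8, 0.3, 0.0, 0.2, -0.1])
energies = phi @ true_J + 0.01 * np.random.randn(40)  # noisy synthetic targets

# Fit the effective cluster interactions J by least squares.
J_fit, *_ = np.linalg.lstsq(phi, energies, rcond=None)
print("fitted ECIs:", np.round(J_fit, 2))
```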
- Robustness and Generalization Performance of Deep Learning Models on Cyber-Physical Systems: A Comparative Study [71.84852429039881]
The investigation focuses on the models' ability to handle a range of perturbations, such as sensor faults and noise.
We test the generalization and transfer learning capabilities of these models by exposing them to out-of-distribution (OOD) samples.
arXiv Detail & Related papers (2023-06-13T12:43:59Z)
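A minimal version of such a perturbation protocol: sweep Gaussian sensor noise of increasing magnitude over the inputs and record accuracy at each level. The `predict` callable and noise levels are placeholders, not the paper's setup:

```python
import numpy as np

def accuracy_under_noise(predict, X, y, sigmas=(0.0, 0.05, 0.1, 0.2), seed=0):
    """Report accuracy under additive Gaussian input noise at each sigma.
    `predict` maps an input batch to predicted class labels."""
    rng = np.random.default_rng(seed)
    results = {}
    for s in sigmas:
        noisy = X + rng.normal(0.0, s, size=X.shape)
        results[s] = float(np.mean(predict(noisy) == y))
    return results

# Usage with any classifier exposing a predict function, e.g.:
# print(accuracy_under_noise(lambda x: model.predict(x), X_test, y_test))
```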
- A Generic Performance Model for Deep Learning in a Distributed Environment [0.7829352305480285]
We propose a generic performance model of an application in a distributed environment, based on a general expression of the application's execution time.
We have evaluated the proposed model on three deep learning frameworks, including MXnet and Pytorch.
arXiv Detail & Related papers (2023-05-19T13:30:34Z)
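In the same spirit, a roofline-style sketch of one distributed training step's execution time; the constants, the ring all-reduce cost, and the compute/memory overlap assumption are illustrative, not the paper's exact expression:

```python
def step_time(flops, bytes_moved, params, workers,
              peak_flops=1e12, bandwidth=1e10, interconnect=1e9):
    """Estimate one distributed training step as compute time, memory time,
    and gradient all-reduce time, with work split evenly across workers."""
    t_compute = flops / (workers * peak_flops)      # arithmetic, per worker
    t_memory = bytes_moved / (workers * bandwidth)  # memory traffic, per worker
    # Ring all-reduce moves roughly 2 * params * 4 bytes per worker (fp32).
    t_comm = 2 * params * 4 / interconnect if workers > 1 else 0.0
    return max(t_compute, t_memory) + t_comm        # compute overlaps memory

print(f"{step_time(flops=1e12, bytes_moved=1e9, params=25e6, workers=4):.4f} s")
```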
- Leveraging the structure of dynamical systems for data-driven modeling [111.45324708884813]
We consider the impact of the training set and its structure on the quality of the long-term prediction.
We show how an informed design of the training set, based on invariants of the system and the structure of the underlying attractor, significantly improves the resulting models.
arXiv Detail & Related papers (2021-12-15T20:09:20Z)
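A toy version of this design principle, using the Lorenz system as a stand-in: sample training states on the attractor by integrating past the transient, instead of drawing states uniformly from a box:

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# "Informed" training set: discard the transient so samples lie on the attractor.
sol = solve_ivp(lorenz, (0.0, 60.0), [1.0, 1.0, 1.0],
                t_eval=np.linspace(0.0, 60.0, 6000), rtol=1e-8)
on_attractor = sol.y[:, 2000:].T                     # drop the first 20 time units

# Uninformed alternative: random states from a box, mostly off the attractor.
naive = np.random.uniform(-20, 20, size=(4000, 3))
print(on_attractor.shape, naive.shape)
```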
- Using Graph Neural Networks to model the performance of Deep Neural Networks [2.1151356984322307]
We develop a novel performance model that adopts a graph representation.
Experimental evaluation shows a 7.75x and 12x reduction in prediction error compared to the Halide and TVM models, respectively.
arXiv Detail & Related papers (2021-08-27T20:20:17Z)
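The input such a model consumes is a graph over operators; a minimal sketch of building node features and edge indices for a small network, where the feature choices (op type id, FLOPs, output bytes) are illustrative, not the paper's feature set:

```python
import numpy as np

# Each operator becomes a node with hand-picked features; each dataflow
# dependency becomes a directed edge.
ops = [
    ("conv1", 0, 5.6e8, 4.1e6),
    ("relu1", 1, 1.0e6, 4.1e6),
    ("conv2", 0, 9.2e8, 2.0e6),
    ("fc",    2, 2.1e7, 4.0e3),
]
name_to_idx = {name: i for i, (name, *_) in enumerate(ops)}
node_features = np.array([[t, f, b] for _, t, f, b in ops])

edges = [("conv1", "relu1"), ("relu1", "conv2"), ("conv2", "fc")]
edge_index = np.array([[name_to_idx[u], name_to_idx[v]] for u, v in edges]).T

print(node_features.shape, edge_index.shape)  # (4, 3) and (2, 3)
```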
This list is automatically generated from the titles and abstracts of the papers on this site.