Towards Implementing Energy-aware Data-driven Intelligence for Smart
Health Applications on Mobile Platforms
- URL: http://arxiv.org/abs/2302.00514v1
- Date: Wed, 1 Feb 2023 15:34:24 GMT
- Title: Towards Implementing Energy-aware Data-driven Intelligence for Smart
Health Applications on Mobile Platforms
- Authors: G. Dumindu Samaraweera, Hung Nguyen, Hadi Zanddizari, Behnam Zeinali,
and J. Morris Chang
- Abstract summary: On-device deep learning frameworks are proficient at utilizing computing resources on mobile platforms seamlessly.
However, energy resources in a mobile device are typically limited.
We introduce a new framework based on an energy-aware, adaptive model comprehension and realization (EAMCR) approach.
- Score: 4.648824029505978
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent breakthroughs in powerful mobile computing resources, such
as low-cost mobile GPUs, along with cutting-edge, open-source software
architectures have enabled high-performance deep learning on mobile
platforms. These advancements have revolutionized the capabilities of today's
mobile applications to perform data-driven intelligence locally, particularly
for smart health applications. Unlike traditional machine learning (ML)
architectures, modern on-device deep learning frameworks utilize the
computing resources of mobile platforms seamlessly, producing highly accurate
results with low inference times. On the flip side, however, energy resources
in a mobile device are typically limited. Hence, whenever a complex Deep
Neural Network (DNN) architecture is fed into an on-device deep learning
framework, it achieves high prediction accuracy (and performance) but also
imposes heavy energy demands at runtime. Managing these resources efficiently
across the spectrum of performance and energy efficiency is therefore the
newest challenge for any mobile application featuring data-driven
intelligence beyond experimental evaluations. In this paper, we first provide
a timely review of recent advancements in on-device deep learning while
empirically evaluating the performance metrics of current state-of-the-art ML
architectures and conventional ML approaches, with emphasis on energy
characteristics, by deploying them on a smart health application. Building on
that, we introduce a new framework based on an energy-aware, adaptive model
comprehension and realization (EAMCR) approach that can be used to make more
robust and efficient inference decisions based on the computing and energy
resources available on the mobile device at runtime.
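The paper does not spell out the EAMCR decision logic here, but the general idea of energy-aware inference can be sketched as a simple model-selection policy. The model names, energy costs, and the battery-scaled budget below are all hypothetical illustrations, not the authors' actual algorithm:

```python
# Illustrative sketch of an energy-aware model-selection policy.
# All model variants, accuracies, energy costs, and the budget rule
# are hypothetical; the paper's actual EAMCR logic is not reproduced.
from dataclasses import dataclass

@dataclass
class ModelVariant:
    name: str
    accuracy: float   # expected top-1 accuracy
    energy_mj: float  # estimated energy per inference (millijoules)

VARIANTS = [
    ModelVariant("dnn_large", 0.95, 120.0),
    ModelVariant("dnn_small", 0.91, 40.0),
    ModelVariant("classic_ml", 0.85, 5.0),
]

def select_model(battery_pct: float, budget_mj: float) -> ModelVariant:
    """Pick the most accurate variant whose per-inference energy fits
    a budget that shrinks as the battery drains."""
    effective_budget = budget_mj * (battery_pct / 100.0)
    feasible = [m for m in VARIANTS if m.energy_mj <= effective_budget]
    if not feasible:  # always fall back to the cheapest model
        return min(VARIANTS, key=lambda m: m.energy_mj)
    return max(feasible, key=lambda m: m.accuracy)
```

With a full battery the policy picks the large DNN; as the battery drains, it degrades gracefully to cheaper models rather than refusing to run.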
Related papers
- Deep Learning Inference on Heterogeneous Mobile Processors: Potentials and Pitfalls [22.49750818224266]
There is a growing demand to deploy computation-intensive deep learning (DL) models on resource-constrained mobile devices for real-time intelligent applications.
Mobile devices hold potential to accelerate DL inference via parallel execution across heterogeneous processors.
This paper presents a holistic empirical study to assess the capabilities and challenges associated with parallel DL inference on heterogeneous mobile processors.
arXiv Detail & Related papers (2024-05-03T04:47:23Z)
- Computation-efficient Deep Learning for Computer Vision: A Survey [121.84121397440337]
Deep learning models have reached or even exceeded human-level performance in a range of visual perception tasks.
Deep learning models usually demand significant computational resources, leading to impractical power consumption, latency, or carbon emissions in real-world scenarios.
A new research focus is computationally efficient deep learning, which strives to achieve satisfactory performance while minimizing computational cost during inference.
arXiv Detail & Related papers (2023-08-27T03:55:28Z)
- EPAM: A Predictive Energy Model for Mobile AI [6.451060076703027]
We introduce a comprehensive study of mobile AI applications considering different deep neural network (DNN) models and processing sources.
We measure the latency, energy consumption, and memory usage of all the models using four processing sources.
Our study highlights important insights, such as how mobile AI behaves in different applications (vision and non-vision) using CPU, GPU, and NNAPI.
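Latency profiling of this kind can be sketched with a minimal timing harness. This is only an illustration of the measurement methodology, not EPAM's actual setup; real mobile energy measurements rely on platform instrumentation rather than wall-clock timing alone:

```python
# Minimal latency-measurement harness: warm-up runs exclude one-time
# setup costs, then median and p95 latencies summarize the samples.
# A hypothetical sketch; not the EPAM paper's measurement pipeline.
import time
import statistics

def benchmark(fn, warmup=3, runs=10):
    """Return median and p95 latency (in ms) of fn over several runs."""
    for _ in range(warmup):
        fn()
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000.0)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        "p95_ms": samples[min(len(samples) - 1, int(0.95 * len(samples)))],
    }
```

Reporting a tail percentile alongside the median matters on mobile, where thermal throttling and scheduling make single-run timings unreliable.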
arXiv Detail & Related papers (2023-03-02T09:11:23Z)
- Trends in Energy Estimates for Computing in AI/Machine Learning Accelerators, Supercomputers, and Compute-Intensive Applications [3.2634122554914]
We examine the computational energy requirements of different systems driven by the geometrical scaling law.
We show that energy efficiency due to geometrical scaling is slowing down.
At the application level, general-purpose AI-ML methods can be computationally energy intensive.
arXiv Detail & Related papers (2022-10-12T16:14:33Z)
- Enable Deep Learning on Mobile Devices: Methods, Systems, and Applications [46.97774949613859]
Deep neural networks (DNNs) have achieved unprecedented success in the field of artificial intelligence (AI).
However, their superior performance comes at the considerable cost of computational complexity.
This paper provides an overview of efficient deep learning methods, systems and applications.
arXiv Detail & Related papers (2022-04-25T16:52:48Z)
- Reconfigurable Intelligent Surface Assisted Mobile Edge Computing with Heterogeneous Learning Tasks [53.1636151439562]
Mobile edge computing (MEC) provides a natural platform for AI applications.
We present an infrastructure to perform machine learning tasks at an MEC with the assistance of a reconfigurable intelligent surface (RIS).
Specifically, we minimize the learning error of all participating users by jointly optimizing transmit power of mobile users, beamforming vectors of the base station, and the phase-shift matrix of the RIS.
arXiv Detail & Related papers (2020-12-25T07:08:50Z)
- To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
However, FL imposes huge communication and computation burdens on participating devices due to periodic global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
arXiv Detail & Related papers (2020-12-22T02:54:18Z)
- A Data and Compute Efficient Design for Limited-Resources Deep Learning [68.55415606184]
Equivariant neural networks have gained increased interest in the deep learning community.
They have been successfully applied in the medical domain where symmetries in the data can be effectively exploited to build more accurate and robust models.
Mobile, on-device implementations of deep learning solutions have been developed for medical applications.
However, equivariant models are commonly implemented using large and computationally expensive architectures, not suitable to run on mobile devices.
In this work, we design and test an equivariant version of MobileNetV2 and further optimize it with model quantization to enable more efficient inference.
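The core idea behind such model quantization is mapping float weights to low-bit integers with a shared scale. The sketch below shows symmetric per-tensor int8 quantization in its simplest form; production pipelines would use a framework's converter, and the paper's exact quantization scheme is not reproduced here:

```python
# Symmetric per-tensor int8 quantization: one scale maps floats to
# the int8 range and back. A minimal sketch of the general idea,
# not the quantization pipeline used in the paper.

def quantize_int8(weights):
    """Map float weights to int8 values plus a single scale factor."""
    max_abs = max(abs(w) for w in weights) or 1.0
    scale = max_abs / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Approximate recovery of the float weights."""
    return [v * scale for v in q]
```

Int8 storage cuts weight memory by roughly 4x versus float32 and enables integer arithmetic on mobile accelerators, at the cost of a bounded rounding error per weight (at most half the scale).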
arXiv Detail & Related papers (2020-04-21T00:49:11Z)
- Deep Learning based Pedestrian Inertial Navigation: Methods, Dataset and On-Device Inference [49.88536971774444]
Inertial measurement units (IMUs) are small, cheap, energy efficient, and widely employed in smart devices and mobile robots.
Exploiting inertial data for accurate and reliable pedestrian navigation is a key component of emerging Internet-of-Things applications and services.
We present and release the Oxford Inertial Odometry dataset (OxIOD), a first-of-its-kind public dataset for deep learning based inertial navigation research.
arXiv Detail & Related papers (2020-01-13T04:41:54Z)
- Resource-Efficient Neural Networks for Embedded Systems [23.532396005466627]
We provide an overview of the current state of the art of machine learning techniques.
We focus on resource-efficient inference based on deep neural networks (DNNs), the predominant machine learning models of the past decade.
We substantiate our discussion with experiments on well-known benchmark data sets using compression techniques.
arXiv Detail & Related papers (2020-01-07T14:17:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.