CLAID: Closing the Loop on AI & Data Collection -- A Cross-Platform
Transparent Computing Middleware Framework for Smart Edge-Cloud and Digital
Biomarker Applications
- URL: http://arxiv.org/abs/2310.05643v1
- Date: Mon, 9 Oct 2023 11:56:51 GMT
- Authors: Patrick Langer, Elgar Fleisch and Filipe Barata
- Abstract summary: We present CLAID, an open-source framework based on transparent computing compatible with Android, iOS, WearOS, Linux, macOS, and Windows.
We provide modules for data collection from various sensors as well as for the deployment of machine-learning models.
We propose a novel methodology, "ML-Model in the Loop," for verifying deployed machine learning models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The increasing number of edge devices with enhanced sensing capabilities,
such as smartphones, wearables, and IoT devices equipped with sensors, holds
the potential for innovative smart-edge applications in healthcare. These
devices generate vast amounts of multimodal data, enabling the implementation
of digital biomarkers which can be leveraged by machine learning solutions to
derive insights, predict health risks, and allow personalized interventions.
Training these models requires collecting data from edge devices and
aggregating it in the cloud. To validate and verify those models, it is
essential to utilize them in real-world scenarios and subject them to testing
using data from diverse cohorts. Since some models are too computationally
expensive to be run on edge devices directly, a collaborative framework between
the edge and cloud becomes necessary. In this paper, we present CLAID, an
open-source cross-platform middleware framework based on transparent computing
compatible with Android, iOS, WearOS, Linux, macOS, and Windows. CLAID enables
logical integration of devices running different operating systems into an
edge-cloud system, facilitating communication and offloading between them, with
bindings available in different programming languages. We provide Modules for
data collection from various sensors as well as for the deployment of
machine-learning models. Furthermore, we propose a novel methodology, "ML-Model
in the Loop" for verifying deployed machine learning models, which helps to
analyze problems that may occur during the migration of models from cloud to
edge devices. We verify our framework in three different experiments and
achieve 100% sampling coverage for data collection across different sensors as
well as an equal performance of a cough detection model deployed on both
Android and iOS devices. We evaluate the memory and battery consumption of our
framework.
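The "ML-Model in the Loop" idea of re-verifying a model after migrating it from cloud to edge can be illustrated with a minimal sketch. This is not CLAID's actual API; the helper name and the stand-in models are illustrative. The same inputs are fed to the reference (cloud) model and the deployed (edge) copy, and the two are compared:

```python
import numpy as np

def verify_deployed_model(reference_predict, deployed_predict, inputs):
    """Run identical inputs through a reference (cloud) model and its
    deployed (edge) copy; return the fraction of agreeing top-1
    predictions and the largest elementwise output deviation.
    Hypothetical helper, not part of CLAID."""
    inputs = list(inputs)
    matches = 0
    max_dev = 0.0
    for x in inputs:
        ref = np.asarray(reference_predict(x), dtype=float)
        dep = np.asarray(deployed_predict(x), dtype=float)
        max_dev = max(max_dev, float(np.max(np.abs(ref - dep))))
        if int(np.argmax(ref)) == int(np.argmax(dep)):
            matches += 1
    return matches / len(inputs), max_dev

# Stand-in "models": the deployed copy differs only by float rounding,
# mimicking precision loss when a model is converted for edge deployment.
cloud_model = lambda x: [0.1 * x, 1.0 - 0.1 * x]
edge_model = lambda x: [round(v, 3) for v in cloud_model(x)]
agreement, deviation = verify_deployed_model(cloud_model, edge_model, range(10))
```

In practice the interesting cases are where agreement drops or deviation grows after conversion (e.g. quantization), which is exactly what this kind of loop is meant to surface.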
Related papers
- On-Device Language Models: A Comprehensive Review
The review examines the challenges of deploying computationally expensive large language models on resource-constrained devices.
It investigates on-device language models, their efficient architectures, and state-of-the-art compression techniques.
Case studies of on-device language models from major mobile manufacturers demonstrate real-world applications and potential benefits.
arXiv Detail & Related papers (2024-08-26T03:33:36Z)
- EdgeConvEns: Convolutional Ensemble Learning for Edge Intelligence
Deep edge intelligence aims to deploy deep learning models that demand computationally expensive training in the edge network with limited computational power.
This study proposes a convolutional ensemble learning approach, coined EdgeConvEns, that facilitates training heterogeneous weak models on edge and learning to ensemble them where data on edge are heterogeneously distributed.
arXiv Detail & Related papers (2023-07-25T20:07:32Z)
- Scalable Collaborative Learning via Representation Sharing
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
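The FL aggregation step described above (each client trains locally, the server averages the results) can be sketched in a few lines. This is generic FedAvg-style weighted averaging, not the paper's contrastive-distillation variant:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).
    client_weights: one list of np.ndarray layers per client;
    client_sizes: number of local training samples per client."""
    total = sum(client_sizes)
    n_layers = len(client_weights[0])
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(n_layers)
    ]

# Two clients with one-layer "models"; the second holds 3x more data,
# so its parameters dominate the aggregate.
merged = fed_avg(
    [[np.array([1.0, 2.0])], [np.array([3.0, 4.0])]],
    client_sizes=[1, 3],
)
```

Weighting by sample count keeps clients with more data proportionally more influential, which is the standard FedAvg design choice.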
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- MetaNetwork: A Task-agnostic Network Parameters Generation Framework for Improving Device Model Generalization
We propose a novel task-agnostic framework, named MetaNetwork, for generating adaptive device model parameters from cloud without on-device training.
The MetaGenerator is designed to learn a mapping function from samples to model parameters, and it can generate and deliver the adaptive parameters to the device based on samples uploaded from the device to the cloud.
The MetaStabilizer aims to reduce the oscillation of the MetaGenerator, accelerate the convergence and improve the model performance during both training and inference.
arXiv Detail & Related papers (2022-09-12T13:26:26Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and analyze the resulting training behavior.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- SOLIS -- The MLOps journey from data acquisition to actionable insights
Existing approaches, however, do not supply the procedures and pipelines needed to deploy machine learning capabilities in real production-grade systems.
In this paper we present a unified deployment pipeline and freedom-to-operate approach that supports all requirements while using basic cross-platform tensor frameworks and script language engines.
arXiv Detail & Related papers (2021-12-22T14:45:37Z)
- Complexity-aware Adaptive Training and Inference for Edge-Cloud Distributed AI Systems
IoT and machine learning applications create large amounts of data that require real-time processing.
We propose a distributed AI system to exploit both the edge and the cloud for training and inference.
arXiv Detail & Related papers (2021-09-14T05:03:54Z)
- ESAI: Efficient Split Artificial Intelligence via Early Exiting Using Neural Architecture Search
Deep neural networks have been outperforming conventional machine learning algorithms in many computer vision-related tasks.
Most devices rely on the cloud computing methodology, in which high-performing deep learning models analyze the data on the server.
In this paper, a new framework for deploying on IoT devices has been proposed which can take advantage of both the cloud and the on-device models.
arXiv Detail & Related papers (2021-06-21T04:47:53Z)
- Device-Cloud Collaborative Learning for Recommendation
We propose a novel MetaPatch learning approach on the device side to efficiently achieve "thousands of people with thousands of models" given a centralized cloud model.
With billions of updated personalized device models, we propose a "model-over-models" distillation algorithm, namely MoMoDistill, to update the centralized cloud model.
arXiv Detail & Related papers (2021-04-14T05:06:59Z)
- An On-Device Federated Learning Approach for Cooperative Model Update between Edge Devices
A neural-network-based on-device learning approach has recently been proposed, in which edge devices train on incoming data at runtime to update their model.
In this paper, we focus on OS-ELM to sequentially train a model based on recent samples and combine it with autoencoder for anomaly detection.
We extend it for an on-device federated learning so that edge devices can exchange their trained results and update their model by using those collected from the other edge devices.
arXiv Detail & Related papers (2020-02-27T18:15:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.