An Ensemble Mobile-Cloud Computing Method for Affordable and Accurate
Glucometer Readout
- URL: http://arxiv.org/abs/2301.01758v1
- Date: Wed, 4 Jan 2023 18:48:53 GMT
- Title: An Ensemble Mobile-Cloud Computing Method for Affordable and Accurate
Glucometer Readout
- Authors: Navidreza Asadi, Maziar Goudarzi
- Abstract summary: We present an ensemble learning algorithm, a mobile-cloud computing service architecture, and a simple compression technique to achieve higher availability and faster response time.
Our proposed method achieves (1) 92.1% and 97.7% accuracy on two different datasets, improving on previous methods by 40%; (2) a 45x reduction in required bandwidth with only a 1% drop in accuracy; and (3) better availability compared to mobile-only, cloud-only, split computing, and early exit service models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite essential efforts towards advanced wireless medical devices for
regular monitoring of blood properties, many such devices are not available or
not affordable for everyone in many countries. Alternatively, when using ordinary
devices, patients have to log data into a mobile health-monitoring app manually.
This causes several issues: (1) clients reportedly tend to enter unrealistic
data; (2) typing values several times a day is bothersome and causes clients to
abandon the mobile app. Thus, there is a strong need to use now-ubiquitous
smartphones, reducing error by capturing images from the screen of medical
devices and extracting useful information automatically. Nevertheless, there
are a few challenges in its development: (1) data scarcity has led to
impractical methods with very low accuracy: to our knowledge, only small
datasets are available in this case; (2) accuracy-availability tradeoff: one
can execute a less accurate algorithm on a mobile phone to maintain higher
availability, or alternatively deploy a more accurate but more compute-intensive
algorithm on the cloud, at the cost of lower availability in poor or no
connectivity situations. We present an ensemble
learning algorithm, a mobile-cloud computing service architecture, and a simple
compression technique to achieve higher availability and faster response time
while providing higher accuracy by integrating cloud- and mobile-side
predictions. Additionally, we propose an algorithm to generate synthetic
training data which facilitates utilizing deep learning models to improve
accuracy. Our proposed method achieves three main objectives: (1) 92.1% and
97.7% accuracy on two different datasets, improving on previous methods by 40%;
(2) a 45x reduction in required bandwidth with only a 1% drop in accuracy; and
(3) better availability compared to mobile-only, cloud-only, split computing,
and early exit service models.
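To illustrate the service model at a high level, the sketch below shows one plausible way a mobile client could combine an on-device prediction with a compressed cloud offload. It is a minimal, hypothetical example: the function names, the confidence threshold, the JPEG-downscaling compression, and the way the two predictions are merged are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a mobile-cloud ensemble readout flow (illustrative only).
# Assumptions: `mobile_model(image)` is a small on-device network returning
# (reading, confidence); `cloud_client.predict(bytes)` calls a larger
# server-side model and raises ConnectionError when there is no connectivity.
import io

from PIL import Image


def compress_for_upload(image: Image.Image, max_side: int = 224, quality: int = 40) -> bytes:
    """Downscale and JPEG-encode the glucometer photo before offloading.

    A simple compression step of this kind is one way to cut upload bandwidth
    sharply while keeping the digits on the display legible.
    """
    scale = max_side / max(image.size)
    if scale < 1.0:
        image = image.resize((round(image.width * scale), round(image.height * scale)))
    buffer = io.BytesIO()
    image.convert("RGB").save(buffer, format="JPEG", quality=quality)
    return buffer.getvalue()


def read_glucometer(image, mobile_model, cloud_client, conf_threshold=0.9):
    """Return a glucose reading by ensembling mobile- and cloud-side predictions."""
    local_reading, local_conf = mobile_model(image)

    # Confident on-device prediction: answer immediately and stay available offline.
    if local_conf >= conf_threshold:
        return local_reading

    try:
        cloud_reading, cloud_conf = cloud_client.predict(compress_for_upload(image))
    except ConnectionError:
        # Poor or no connectivity: fall back to the mobile-only prediction.
        return local_reading

    # Naive ensemble: trust whichever side is more confident.
    return cloud_reading if cloud_conf >= local_conf else local_reading
```

The design intent mirrors the tradeoff stated in the abstract: the on-device model keeps the service available and responsive when connectivity is poor, while low-confidence images are compressed and offloaded so a cloud-side model can lift accuracy without a large bandwidth cost.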
Related papers
- Camouflage is all you need: Evaluating and Enhancing Language Model
Robustness Against Camouflage Adversarial Attacks [53.87300498478744]
Adversarial attacks represent a substantial challenge in Natural Language Processing (NLP)
This study undertakes a systematic exploration of this challenge in two distinct phases: vulnerability evaluation and resilience enhancement.
Results suggest a trade-off between performance and robustness, with some models maintaining similar performance while gaining robustness.
arXiv Detail & Related papers (2024-02-15T10:58:22Z)
- Augmented Bilinear Network for Incremental Multi-Stock Time-Series
Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
arXiv Detail & Related papers (2022-07-23T18:54:10Z)
- On-Device Training Under 256KB Memory [62.95579393237751]
We propose an algorithm-system co-design framework to make on-device training possible with only 256KB of memory.
Our framework is the first solution to enable tiny on-device training of convolutional neural networks under 256KB of memory and 1MB of Flash storage.
arXiv Detail & Related papers (2022-06-30T17:59:08Z)
- Enhancement of Healthcare Data Performance Metrics using Neural Network
Machine Learning Algorithms [0.3058685580689604]
There is a trade-off between efficiency and accuracy which can be controlled by adjusting the sampling and transmission rates.
This paper demonstrates that machine learning can be used to analyse complex health data metrics.
The Levenberg-Marquardt algorithm was the best performer, with an efficiency of 3.33 and an accuracy of 79.17%.
arXiv Detail & Related papers (2022-01-16T04:08:07Z)
- Revisiting Point Cloud Simplification: A Learnable Feature Preserving
Approach [57.67932970472768]
Mesh and Point Cloud simplification methods aim to reduce the complexity of 3D models while retaining visual quality and relevant salient features.
We propose a fast point cloud simplification method by learning to sample salient points.
The proposed method relies on a graph neural network architecture trained to select an arbitrary, user-defined number of points from the input space and to re-arrange their positions so as to minimize the visual perception error.
arXiv Detail & Related papers (2021-09-30T10:23:55Z)
- It's always personal: Using Early Exits for Efficient On-Device CNN
Personalisation [19.046126301352274]
On-device machine learning is becoming a reality thanks to the availability of powerful hardware and model compression techniques.
In this work, we observe that a much smaller, personalised model can be employed to fit a specific scenario.
We introduce PersEPHonEE, a framework that attaches early exits to the model and personalises them on-device.
arXiv Detail & Related papers (2021-02-02T09:10:17Z)
- A Data-Efficient Deep Learning Based Smartphone Application For
Detection Of Pulmonary Diseases Using Chest X-rays [0.0]
The app takes as input Chest X-Ray images captured with the mobile camera, which are then relayed to the AI architecture on a cloud platform.
Doctors with a smartphone can leverage the application to save the considerable time that standard COVID-19 tests take for preliminary diagnosis.
arXiv Detail & Related papers (2020-08-19T04:28:17Z)
- Faster Secure Data Mining via Distributed Homomorphic Encryption [108.77460689459247]
Homomorphic Encryption (HE) has recently been receiving increasing attention for its capability to perform computations over encrypted data.
We propose a novel general distributed HE-based data mining framework towards one step of solving the scaling problem.
We verify the efficiency and effectiveness of our new framework by testing over various data mining algorithms and benchmark data-sets.
arXiv Detail & Related papers (2020-06-17T18:14:30Z)
- Towards Efficient Scheduling of Federated Mobile Devices under
Computational and Statistical Heterogeneity [16.069182241512266]
This paper studies the implementation of distributed learning on mobile devices.
We use data as a tuning knob and propose two efficient algorithms to schedule different workloads.
Compared with common benchmarks, the proposed algorithms achieve a 2-100x speedup, a 2-7% accuracy gain, and an improvement of more than 100% in convergence rate on CIFAR10.
arXiv Detail & Related papers (2020-05-25T18:21:51Z)
- A Data and Compute Efficient Design for Limited-Resources Deep Learning [68.55415606184]
Equivariant neural networks have gained increased interest in the deep learning community.
They have been successfully applied in the medical domain where symmetries in the data can be effectively exploited to build more accurate and robust models.
Mobile, on-device implementations of deep learning solutions have been developed for medical applications.
However, equivariant models are commonly implemented using large and computationally expensive architectures, not suitable to run on mobile devices.
In this work, we design and test an equivariant version of MobileNetV2 and further optimize it with model quantization to enable more efficient inference.
arXiv Detail & Related papers (2020-04-21T00:49:11Z)
- Runtime Deep Model Multiplexing for Reduced Latency and Energy
Consumption Inference [6.896677899938492]
We propose a learning algorithm to design a light-weight neural multiplexer that calls the model that will consume the minimum compute resources for a successful inference.
Mobile devices can use the proposed algorithm to offload the hard inputs to the cloud while inferring the easy ones locally.
arXiv Detail & Related papers (2020-01-14T23:49:51Z)