Trust-Based Cloud Machine Learning Model Selection For Industrial IoT and Smart City Services
- URL: http://arxiv.org/abs/2008.05042v1
- Date: Tue, 11 Aug 2020 23:58:03 GMT
- Title: Trust-Based Cloud Machine Learning Model Selection For Industrial IoT and Smart City Services
- Authors: Basheer Qolomany, Ihab Mohammed, Ala Al-Fuqaha, Mohsen Guizani, Junaid Qadir
- Abstract summary: We consider the paradigm where cloud service providers collect big data from resource-constrained devices for building Machine Learning prediction models.
- Our proposed solution comprises an intelligent polynomial-time heuristic that maximizes the level of trust of ML models while respecting a given reconfiguration budget/rate.
- Our results show that the trust level of the selected models is 0.7% to 2.53% lower than that obtained using ILP.
- Score: 5.333802479607541
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With Machine Learning (ML) services now used in a number of mission-critical
human-facing domains, ensuring the integrity and trustworthiness of ML models
becomes all-important. In this work, we consider the paradigm where cloud
service providers collect big data from resource-constrained devices for
building ML-based prediction models that are then sent back to be run locally
on the intermittently-connected resource-constrained devices. Our proposed
solution comprises an intelligent polynomial-time heuristic that maximizes the
level of trust of ML models by selecting and switching between a subset of the
ML models from a superset of models in order to maximize the trustworthiness
while respecting the given reconfiguration budget/rate and reducing the cloud
communication overhead. We evaluate the performance of our proposed heuristic
using two case studies. First, we consider Industrial IoT (IIoT) services, and
as a proxy for this setting, we use the turbofan engine degradation simulation
dataset to predict the remaining useful life of an engine. Our results in this
setting show that the trust level of the selected models is 0.49% to 3.17% less
compared to the results obtained using Integer Linear Programming (ILP).
Second, we consider Smart Cities services, and as a proxy of this setting, we
use an experimental transportation dataset to predict the number of cars. Our
results show that the selected model's trust level is 0.7% to 2.53% less
compared to the results obtained using ILP. We also show that our proposed
heuristic achieves an optimal competitive ratio in a polynomial-time
approximation scheme for the problem.
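
The abstract describes the heuristic only at a high level: over successive time windows, choose which cloud-built model the device should run so that trust is maximized while the number of model switches stays within a reconfiguration budget. As a rough, hedged illustration of that selection-and-switching idea (not the authors' actual algorithm; identifiers such as trust_scores, budget, and min_gain are assumptions made for this sketch), a minimal Python example:

```python
"""Illustrative sketch only: greedy, budget-constrained model selection over
time windows. Not the paper's heuristic; all names and numbers are assumed."""
from typing import List, Sequence


def select_models(trust_scores: Sequence[Sequence[float]],
                  budget: int,
                  min_gain: float = 0.0) -> List[int]:
    """trust_scores[t][m]: estimated trust of model m in time window t.
    budget: maximum number of model switches (reconfigurations) allowed.
    Returns the index of the model deployed in each time window."""
    schedule: List[int] = []
    current = None            # model currently deployed on the device
    switches_left = budget
    for window in trust_scores:
        best = max(range(len(window)), key=lambda m: window[m])
        if current is None:
            current = best    # initial deployment does not count as a switch
        elif (switches_left > 0
              and window[best] - window[current] > min_gain):
            current = best    # reconfigure only when the trust gain justifies it
            switches_left -= 1
        schedule.append(current)
    return schedule


if __name__ == "__main__":
    # Three time windows, three candidate models, at most one switch allowed.
    trust = [[0.90, 0.80, 0.70],
             [0.60, 0.85, 0.75],
             [0.55, 0.88, 0.60]]
    print(select_models(trust, budget=1))   # -> [0, 1, 1]
```

In this toy greedy scheme the device keeps its current model unless a better-trusted one appears and the switching budget allows the change; the paper's heuristic additionally aims to reduce cloud communication overhead and is benchmarked against an ILP formulation.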
Related papers
- Customer Lifetime Value Prediction with Uncertainty Estimation Using Monte Carlo Dropout [3.187236205541292]
We propose a novel approach that enhances the architecture of purely neural network models by incorporating the Monte Carlo Dropout (MCD) framework.
We benchmarked the proposed method using data from one of the most downloaded mobile games in the world.
Our approach provides a confidence metric as an extra dimension for performance evaluation across various neural network models (see the Monte Carlo Dropout sketch after this related-papers list).
arXiv Detail & Related papers (2024-11-24T18:14:44Z)
- Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design [59.00758127310582]
We propose a novel framework Read-ME that transforms pre-trained dense LLMs into smaller MoE models.
Our approach employs activation sparsity to extract experts.
Read-ME outperforms other popular open-source dense models of similar scales.
arXiv Detail & Related papers (2024-10-24T19:48:51Z)
- Dual-Model Distillation for Efficient Action Classification with Hybrid Edge-Cloud Solution [1.8029479474051309]
We design a hybrid edge-cloud solution that leverages the efficiency of smaller models for local processing while deferring to larger, more accurate cloud-based models when necessary.
Specifically, we propose a novel unsupervised data generation method, Dual-Model Distillation (DMD), to train a lightweight switcher model that can predict when the edge model's output is uncertain.
Experimental results on the action classification task show that our framework not only requires less computational overhead, but also improves accuracy compared to using a large model alone.
arXiv Detail & Related papers (2024-10-16T02:06:27Z)
- Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models [52.98743860365194]
We propose a new fine-tuning method called Self-Play fIne-tuNing (SPIN)
At the heart of SPIN lies a self-play mechanism, where the LLM refines its capability by playing against instances of itself.
This sheds light on the promise of self-play, enabling the achievement of human-level performance in LLMs without the need for expert opponents.
arXiv Detail & Related papers (2024-01-02T18:53:13Z)
- Predictive Maintenance Model Based on Anomaly Detection in Induction Motors: A Machine Learning Approach Using Real-Time IoT Data [0.0]
In this work, we demonstrate a novel anomaly detection system on induction motors used in pumps, compressors, fans, and other industrial machines.
We use a combination of pre-processing techniques and machine learning (ML) models with a low computational cost.
arXiv Detail & Related papers (2023-10-15T18:43:45Z)
- Tryage: Real-time, intelligent Routing of User Prompts to Large Language Models [1.0878040851637998]
With over 200,000 models in the Hugging Face ecosystem, users grapple with selecting and optimizing models to suit multifaceted workflows and data domains.
Here, we propose a context-aware routing system, Tryage, that leverages a language model router for optimal selection of expert models from a model library.
arXiv Detail & Related papers (2023-08-22T17:48:24Z)
- Computational Intelligence and Deep Learning for Next-Generation Edge-Enabled Industrial IoT [51.68933585002123]
We investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks.
In this paper, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework.
In particular, the proposed ME-FEEL achieves an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
arXiv Detail & Related papers (2021-10-28T08:14:57Z)
- Federated Learning with Unreliable Clients: Performance Analysis and Mechanism Design [76.29738151117583]
Federated Learning (FL) has become a promising tool for training effective machine learning models among distributed clients.
However, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training.
We model these unreliable behaviors of clients and propose a defensive mechanism to mitigate such a security risk.
arXiv Detail & Related papers (2021-05-10T08:02:27Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
- Particle Swarm Optimized Federated Learning For Industrial IoT and Smart City Services [9.693848515371268]
We propose a Particle Swarm Optimization (PSO)-based technique to optimize the hyperparameter settings for the local Machine Learning models.
We evaluate the performance of our proposed technique using two case studies.
arXiv Detail & Related papers (2020-09-05T16:20:47Z)
- VAE-LIME: Deep Generative Model Based Approach for Local Data-Driven Model Interpretability Applied to the Ironmaking Industry [70.10343492784465]
It is necessary to expose to the process engineer not only the model predictions but also their interpretability.
Model-agnostic local interpretability solutions based on LIME have recently emerged to improve the original method.
We present in this paper a novel approach, VAE-LIME, for local interpretability of data-driven models forecasting the temperature of the hot metal produced by a blast furnace.
arXiv Detail & Related papers (2020-07-15T07:07:07Z)
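
The Customer Lifetime Value entry above relies on Monte Carlo Dropout for uncertainty estimation. As a generic, hedged sketch of that framework (the network shape, layer sizes, and sample count are assumptions for illustration, not details from that paper): dropout is kept active at inference time, the network is sampled repeatedly, and the spread of the predictions serves as the confidence metric.

```python
"""Minimal MC Dropout sketch in PyTorch; all architecture choices are assumed."""
import torch
import torch.nn as nn

# A hypothetical regression network with dropout between layers.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 1),
)


def mc_dropout_predict(net: nn.Module, x: torch.Tensor, samples: int = 50):
    """Run `samples` stochastic forward passes with dropout enabled and return
    the predictive mean and standard deviation (the uncertainty estimate)."""
    net.train()  # keep dropout active; batch-norm layers would need separate handling
    with torch.no_grad():
        preds = torch.stack([net(x) for _ in range(samples)])
    return preds.mean(dim=0), preds.std(dim=0)


if __name__ == "__main__":
    x = torch.randn(8, 16)            # a dummy batch of 8 feature vectors
    mean, std = mc_dropout_predict(model, x)
    print(mean.shape, std.shape)      # torch.Size([8, 1]) for both outputs
```

The returned standard deviation is the extra "confidence" dimension that entry refers to: a larger spread flags predictions the model is less certain about.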