Semi-supervised on-device neural network adaptation for remote and
portable laser-induced breakdown spectroscopy
- URL: http://arxiv.org/abs/2104.03439v1
- Date: Thu, 8 Apr 2021 00:20:36 GMT
- Title: Semi-supervised on-device neural network adaptation for remote and
portable laser-induced breakdown spectroscopy
- Authors: Kshitij Bhardwaj and Maya Gokhale
- Abstract summary: We introduce a lightweight multi-layer perceptron (MLP) model for LIBS that can be adapted on-device without requiring labels for new input data.
It shows 89.3% average accuracy during data streaming, and up to 2.1% better accuracy compared to a model that does not support adaptation.
- Score: 0.22843885788439797
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Laser-induced breakdown spectroscopy (LIBS) is a popular, fast elemental
analysis technique used to determine the chemical composition of target
samples, such as in industrial analysis of metals or in space exploration.
Recently, there has been a rise in the use of machine learning (ML) techniques
for LIBS data processing. However, ML for LIBS is challenging as: (i) the
predictive models must be lightweight since they need to be deployed in highly
resource-constrained and battery-operated portable LIBS systems; and (ii) since
these systems can be remote, the models must be able to self-adapt to any
domain shift in input distributions which could be due to the lack of different
types of inputs in training data or dynamic environmental/sensor noise. This
on-device retraining of the model must be not only fast but also unsupervised,
given the absence of new labeled data in remote LIBS systems. We introduce a
lightweight multi-layer perceptron (MLP) model for LIBS that can be adapted
on-device without requiring labels for new input data. It shows 89.3% average
accuracy during data streaming, and up to 2.1% better accuracy compared to an
MLP model that does not support adaptation. Finally, we also characterize the
inference and retraining performance of our model on Google Pixel2 phone.
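The listing does not spell out the paper's exact label-free update rule. As a minimal, hypothetical sketch, one common choice is pseudo-label self-training: the model's own confident predictions supervise a cheap update of the output layer, keeping on-device retraining fast. All names, sizes, and the learning rate below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

class TinyMLP:
    """Small MLP: one hidden layer, softmax output."""
    def __init__(self, d_in, d_hid, d_out):
        self.W1 = rng.normal(0, 0.1, (d_in, d_hid))
        self.W2 = rng.normal(0, 0.1, (d_hid, d_out))

    def forward(self, X):
        self.h = np.tanh(X @ self.W1)
        return softmax(self.h @ self.W2)

    def adapt_unlabeled(self, X, lr=0.05):
        """One self-training step on an unlabeled batch:
        argmax predictions act as pseudo-labels, and only the
        output layer is updated to keep retraining cheap."""
        p = self.forward(X)
        pseudo = np.eye(p.shape[1])[p.argmax(axis=1)]  # one-hot pseudo-labels
        grad_W2 = self.h.T @ (p - pseudo) / len(X)     # cross-entropy gradient
        self.W2 -= lr * grad_W2
        return p

model = TinyMLP(d_in=16, d_hid=8, d_out=3)
stream = rng.normal(size=(4, 16))        # a batch of new, unlabeled spectra
probs = model.adapt_unlabeled(stream)
print(probs.shape)  # (4, 3)
```

Updating only the output layer is one way to meet the paper's constraint that retraining be fast on battery-operated hardware; the actual method may adapt different parameters.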
Related papers
- Scaling Laws for Predicting Downstream Performance in LLMs [75.28559015477137]
This work focuses on the pre-training loss as a more efficient metric for performance estimation.
We extend the power-law analytical function to predict domain-specific pre-training loss based on FLOPs across data sources.
We employ a two-layer neural network to model the non-linear relationship between multiple domain-specific losses and downstream performance.
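The power-law step can be made concrete with a small sketch. Assuming the simple form L(C) = a·C^(-b) for pre-training loss versus training FLOPs (the paper's actual functional form may differ), the coefficients can be fit by least squares in log-log space and used to extrapolate:

```python
import numpy as np

# Synthetic illustration: losses following L = a * C^-b exactly
C = np.array([1e18, 1e19, 1e20, 1e21, 1e22])   # training FLOPs
L = 4.0 * C ** -0.05                            # "observed" pre-training loss

# Fit log L = log a - b log C with ordinary least squares
slope, intercept = np.polyfit(np.log(C), np.log(L), 1)
a, b = np.exp(intercept), -slope

# Extrapolate to a larger compute budget
L_pred = a * (1e23) ** -b
print(a, b, L_pred)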
arXiv Detail & Related papers (2024-10-11T04:57:48Z)
- Optimization of Lightweight Malware Detection Models For AIoT Devices [2.4947404267499587]
Malware intrusion is a problem for Internet of Things (IoT) and Artificial Intelligence of Things (AIoT) devices.
This research aims to optimize the proposed super learner meta-learning ensemble model to make it viable for low-end AIoT devices.
arXiv Detail & Related papers (2024-04-06T09:30:38Z)
- Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label Regeneration and BEVMix [59.55173022987071]
We study the potential of semi-supervised learning for class-agnostic motion prediction.
Our framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data.
Our method exhibits comparable performance to weakly and some fully supervised methods.
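The consistency-based self-training paradigm can be sketched roughly as follows. This is a generic illustration with a stand-in linear classifier, not the paper's actual motion model: pseudo-labels from a weakly perturbed view of unlabeled data supervise predictions on a strongly perturbed view, with a confidence threshold filtering out uncertain samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict(W, X):
    """Linear classifier with softmax, standing in for the real model."""
    z = X @ W
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def consistency_step(W, X_unlab, lr=0.1, tau=0.8):
    """One consistency-based self-training step: pseudo-labels come
    from a weakly perturbed view, and the model is trained to match
    them on a strongly perturbed view. Only confident pseudo-labels
    (max prob >= tau) contribute to the update."""
    weak = X_unlab + rng.normal(0, 0.01, X_unlab.shape)    # weak noise
    strong = X_unlab + rng.normal(0, 0.20, X_unlab.shape)  # strong noise
    p_weak = predict(W, weak)
    mask = p_weak.max(axis=1) >= tau                       # confidence filter
    if not mask.any():
        return W
    pseudo = np.eye(p_weak.shape[1])[p_weak.argmax(axis=1)]
    p_strong = predict(W, strong)
    grad = strong[mask].T @ (p_strong[mask] - pseudo[mask]) / mask.sum()
    return W - lr * grad

W = rng.normal(0, 0.5, (8, 4))
X = rng.normal(size=(32, 8))
W_new = consistency_step(W, X)
print(W_new.shape)  # (8, 4)
```

The threshold `tau` and the noise levels are illustrative hyperparameters; in the paper the "views" would come from data augmentation such as BEVMix rather than Gaussian noise.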
arXiv Detail & Related papers (2023-12-13T09:32:50Z)
- Efficient Model Adaptation for Continual Learning at the Edge [15.334881190102895]
Most machine learning (ML) systems assume stationary and matching data distributions during training and deployment.
Data distributions often shift over time due to changes in environmental factors, sensor characteristics, and the task of interest.
This paper presents the Encoder-Adaptor-Reconfigurator (EAR) framework for efficient continual learning under domain shifts.
arXiv Detail & Related papers (2023-08-03T23:55:17Z)
- Closing the loop: Autonomous experiments enabled by machine-learning-based online data analysis in synchrotron beamline environments [80.49514665620008]
Machine learning can be used to enhance research involving large or rapidly generated datasets.
In this study, we describe the incorporation of ML into a closed-loop workflow for X-ray reflectometry (XRR)
We present solutions that provide elementary data analysis in real time during the experiment without introducing additional software dependencies into the beamline control software environment.
arXiv Detail & Related papers (2023-06-20T21:21:19Z)
- Convolutional Neural Networks for the classification of glitches in gravitational-wave data streams [52.77024349608834]
We classify transient noise signals (i.e., glitches) and gravitational waves in data from the Advanced LIGO detectors.
We use models with a supervised learning approach, trained from scratch using the Gravity Spy dataset.
We also explore a self-supervised approach, pre-training models with automatically generated pseudo-labels.
arXiv Detail & Related papers (2023-03-24T11:12:37Z)
- LEAPER: Modeling Cloud FPGA-based Systems via Transfer Learning [13.565689665335697]
We propose LEAPER, a transfer learning-based approach for FPGA-based systems that adapts an existing ML-based model to a new, unknown environment.
Results show that our approach delivers, on average, 85% accuracy when we use our transferred model for prediction in a cloud environment with 5-shot learning.
arXiv Detail & Related papers (2022-08-22T21:25:56Z)
- Federated Split GANs [12.007429155505767]
We propose an alternative approach that trains ML models on users' devices themselves.
We focus on GANs (generative adversarial networks) and leverage their inherent privacy-preserving attribute.
Our system preserves data privacy, keeps training time short, and yields the same accuracy as model training on unconstrained devices.
arXiv Detail & Related papers (2022-07-04T23:53:47Z)
- Modelling of Received Signals in Molecular Communication Systems based machine learning: Comparison of Azure machine learning and Python tools [0.0]
This paper applies Azure Machine Learning (Azure ML) to flexible regression problems and solutions.
For prediction, four parameters are used as inputs: the receiver radius, transmitter radius, distance between receiver and transmitter, and diffusion coefficient.
In the established Azure ML environment, regression algorithms such as boosted decision tree regression, Bayesian linear regression, neural network regression, and decision forest regression are selected.
arXiv Detail & Related papers (2021-12-19T18:15:17Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Transfer Learning without Knowing: Reprogramming Black-box Machine Learning Models with Scarce Data and Limited Resources [78.72922528736011]
We propose a novel approach, black-box adversarial reprogramming (BAR), that repurposes a well-trained black-box machine learning model.
Using zeroth order optimization and multi-label mapping techniques, BAR can reprogram a black-box ML model solely based on its input-output responses.
BAR outperforms state-of-the-art methods and yields comparable performance to the vanilla adversarial reprogramming method.
arXiv Detail & Related papers (2020-07-17T01:52:34Z)
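The zeroth-order optimization that BAR relies on can be illustrated generically: a gradient is estimated purely from input-output queries by averaging finite differences along random directions. This is a simplified textbook estimator on a toy objective, not BAR's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(2)

def zoo_gradient(f, theta, q=20, mu=1e-3):
    """Zeroth-order gradient estimate: average directional finite
    differences along q random unit directions. Only input-output
    queries of f are needed, never its internals."""
    d = theta.size
    g = np.zeros(d)
    f0 = f(theta)
    for _ in range(q):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)          # random unit direction
        g += (f(theta + mu * u) - f0) / mu * u
    return g * d / q                    # rescale so the estimate is unbiased

# Toy black-box objective: f(theta) = ||theta - 3||^2, minimum at theta = 3
f = lambda t: float(np.sum((t - 3.0) ** 2))
theta = np.zeros(4)
for _ in range(200):
    theta -= 0.05 * zoo_gradient(f, theta)
print(theta)  # close to [3, 3, 3, 3]
```

In BAR's setting, `f` would be the loss of the remote black-box model on reprogrammed inputs; the query budget `q` then directly trades estimation quality against API cost.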
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.