Multidimensional analysis using sensor arrays with deep learning for
high-precision and high-accuracy diagnosis
- URL: http://arxiv.org/abs/2211.17139v1
- Date: Wed, 30 Nov 2022 16:14:55 GMT
- Title: Multidimensional analysis using sensor arrays with deep learning for
high-precision and high-accuracy diagnosis
- Authors: Julie Payette, Sylvain G. Cloutier and Fabrice Vaussenat
- Abstract summary: We demonstrate that it becomes possible to significantly improve the measurements' precision and accuracy by feeding a deep neural network (DNN) with the data from a low-cost and low-accuracy sensor array.
The data collection is done with an array composed of 32 temperature sensors, including 16 analog and 16 digital sensors.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the upcoming years, artificial intelligence (AI) is going to transform the
practice of medicine in most of its specialties. Deep learning can help achieve
better and earlier problem detection, while reducing errors on diagnosis. By
feeding a deep neural network (DNN) with the data from a low-cost and
low-accuracy sensor array, we demonstrate that it becomes possible to
significantly improve the measurements' precision and accuracy. The data
collection is done with an array composed of 32 temperature sensors, including
16 analog and 16 digital sensors. All sensors have accuracies between
0.5-2.0$^\circ$C. 800 vectors are extracted, covering a range from 30 to
45$^\circ$C. In order to improve the temperature readings, we use machine
learning to perform a linear regression analysis through a DNN. In an attempt
to minimize the model's complexity in order to eventually run inferences
locally, the network with the best results involves only three layers using the
hyperbolic tangent activation function and the Adam Stochastic Gradient Descent
(SGD) optimizer. The model is trained with a randomly-selected dataset using
640 vectors (80% of the data) and tested with 160 vectors (20%). Using the mean
squared error as a loss function between the data and the model's prediction,
we achieve a loss of only $1.47\times10^{-4}$ on the training set and $1.22\times10^{-4}$
on the test set. As such, we believe this appealing approach offers a new
pathway towards significantly better datasets using readily-available ultra
low-cost sensors.
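The abstract's setup (a small tanh network trained with Adam on an MSE loss over an 80/20 split of 800 sensor vectors) can be sketched as follows. This is a minimal NumPy illustration, not the authors' code: the noise model for the synthetic readings, the hidden-layer width of 16, the learning rate, and the step count are all assumptions; only the 32-sensor input, the 640/160 split, the tanh activation, Adam, and the MSE loss come from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the paper's data: 800 vectors of 32 noisy sensor
# readings (the real array mixes 16 analog and 16 digital sensors), each
# labelled with a reference temperature in the 30-45 C range. Noise level,
# hidden width, and training schedule here are illustrative assumptions.
n_samples, n_sensors, n_hidden = 800, 32, 16
t_ref = rng.uniform(30.0, 45.0, size=(n_samples, 1))
X = t_ref + rng.normal(0.0, 1.0, size=(n_samples, n_sensors))

# Normalise inputs and targets so the tanh units stay out of saturation.
X_n = (X - X.mean()) / X.std()
y_n = (t_ref - t_ref.mean()) / t_ref.std()

# 80/20 random split: 640 training and 160 test vectors, as in the abstract.
idx = rng.permutation(n_samples)
tr, te = idx[:640], idx[640:]

# Small MLP (32 -> 16 tanh -> 1) trained with Adam on the MSE loss.
W1 = rng.normal(0, 0.1, (n_sensors, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_hidden, 1)); b2 = np.zeros(1)
params = [W1, b1, W2, b2]
m = [np.zeros_like(p) for p in params]
v = [np.zeros_like(p) for p in params]
lr, beta1, beta2, eps = 0.01, 0.9, 0.999, 1e-8

for step in range(1, 3001):
    h = np.tanh(X_n[tr] @ W1 + b1)       # hidden layer activations
    pred = h @ W2 + b2                    # linear output (temperature)
    err = pred - y_n[tr]
    # Backpropagate the mean-squared-error loss.
    g_pred = 2.0 * err / len(tr)
    g_h = (g_pred @ W2.T) * (1.0 - h ** 2)
    grads = [X_n[tr].T @ g_h, g_h.sum(0), h.T @ g_pred, g_pred.sum(0)]
    # Adam update with bias correction.
    for i, (p, g) in enumerate(zip(params, grads)):
        m[i] = beta1 * m[i] + (1 - beta1) * g
        v[i] = beta2 * v[i] + (1 - beta2) * g * g
        m_hat = m[i] / (1 - beta1 ** step)
        v_hat = v[i] / (1 - beta2 ** step)
        p -= lr * m_hat / (np.sqrt(v_hat) + eps)

test_pred = np.tanh(X_n[te] @ W1 + b1) @ W2 + b2
test_mse = float(np.mean((test_pred - y_n[te]) ** 2))
print(f"test MSE (normalised units): {test_mse:.5f}")
```

On this synthetic data the network effectively learns to average the 32 noisy channels, which is why a handful of layers suffices; the paper's reported losses are on its own data and units and are not reproduced here.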
Related papers
- An Investigation on Machine Learning Predictive Accuracy Improvement and Uncertainty Reduction using VAE-based Data Augmentation [2.517043342442487]
Deep generative learning uses certain ML models to learn the underlying distribution of existing data and generate synthetic samples that resemble the real data.
In this study, our objective is to evaluate the effectiveness of data augmentation using variational autoencoder (VAE)-based deep generative models.
We investigated whether the data augmentation leads to improved accuracy in the predictions of a deep neural network (DNN) model trained using the augmented data.
arXiv Detail & Related papers (2024-10-24T18:15:48Z)
- Rethinking Deep Learning: Propagating Information in Neural Networks without Backpropagation and Statistical Optimization [0.0]
This study discusses the information propagation capabilities and potential practical applications of NNs as structures that mimic neural systems.
In this study, the NN architecture comprises fully connected layers using step functions as activation functions, with 0-15 hidden layers and no weight updates.
The accuracy is calculated by comparing the average output vectors of the training data for each label with the output vectors of the test data, based on vector similarity.
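The evaluation rule described above (compare each test output vector against the per-label average output vector of the training data) can be sketched like this. The toy data, dimensions, and the choice of cosine similarity are assumptions for illustration; the summary says only "vector similarity" and does not specify the metric.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: two labels whose "network outputs" are fixed random
# directions plus noise. (The study itself uses outputs of an untrained,
# step-activated fully connected network; this only illustrates the metric.)
dim, n_per_label = 8, 50
prototypes = {0: rng.normal(size=dim), 1: rng.normal(size=dim)}

def sample(label, n):
    return prototypes[label] + 0.3 * rng.normal(size=(n, dim))

train = {lab: sample(lab, n_per_label) for lab in (0, 1)}
test_x = np.vstack([sample(0, 20), sample(1, 20)])
test_y = np.array([0] * 20 + [1] * 20)

# Average output vector of the training data for each label.
means = {lab: v.mean(axis=0) for lab, v in train.items()}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Classify each test vector by the most similar label mean.
pred = np.array([max(means, key=lambda lab: cosine(x, means[lab]))
                 for x in test_x])
accuracy = float((pred == test_y).mean())
print(f"accuracy: {accuracy:.2f}")
```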
arXiv Detail & Related papers (2024-08-18T09:22:24Z)
- Just How Flexible are Neural Networks in Practice? [89.80474583606242]
It is widely believed that a neural network can fit a training set containing at least as many samples as it has parameters.
In practice, however, we only find solutions reachable by our training procedure, including its gradient-based optimizer and regularizers, which limits flexibility.
arXiv Detail & Related papers (2024-06-17T12:24:45Z)
- UncLe-SLAM: Uncertainty Learning for Dense Neural SLAM [60.575435353047304]
We present an uncertainty learning framework for dense neural simultaneous localization and mapping (SLAM).
We propose an online framework for sensor uncertainty estimation that can be trained in a self-supervised manner from only 2D input data.
arXiv Detail & Related papers (2023-06-19T16:26:25Z)
- Machine Learning Force Fields with Data Cost Aware Training [94.78998399180519]
Machine learning force fields (MLFF) have been proposed to accelerate molecular dynamics (MD) simulation.
Even for the most data-efficient MLFFs, reaching chemical accuracy can require hundreds of frames of force and energy labels.
We propose a multi-stage computational framework -- ASTEROID, which lowers the data cost of MLFFs by leveraging a combination of cheap inaccurate data and expensive accurate data.
arXiv Detail & Related papers (2023-06-05T04:34:54Z)
- Evolving Deep Convolutional Neural Network by Hybrid Sine-Cosine and Extreme Learning Machine for Real-time COVID19 Diagnosis from X-Ray Images [0.5249805590164902]
Deep Convolutional Neural Networks (CNNs) can be considered applicable tools for diagnosing COVID-19 positive cases.
This paper proposes using the Extreme Learning Machine (ELM) instead of the last fully connected layer to address this deficiency.
The proposed approach outperforms comparative benchmarks with a final accuracy of 98.83% on the COVID-Xray-5k dataset.
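The core idea of an Extreme Learning Machine (ELM) readout, as used above in place of the CNN's last fully connected layer, is that the output weights are solved in closed form rather than trained by backpropagation. Below is a minimal standalone sketch; the toy data, feature count, ReLU activation, and threshold are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy data: 200 samples with a linearly separable binary label.
n, d_in, d_hidden = 200, 10, 64
X = rng.normal(size=(n, d_in))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

# ELM hidden layer: a random projection that is never trained.
W = rng.normal(size=(d_in, d_hidden))
b = rng.normal(size=d_hidden)
H = np.maximum(X @ W + b, 0.0)  # ReLU features (sigmoids are also common)

# Output weights: solve H @ beta ~= y by least squares in one step,
# instead of iterating backpropagation over the last layer.
beta, *_ = np.linalg.lstsq(H, y, rcond=None)
acc = float(((H @ beta > 0.5) == (y > 0.5)).mean())
print(f"train accuracy: {acc:.2f}")
```

The one-shot least-squares solve is what makes the ELM readout fast to fit, which is the motivation for swapping it in for the final fully connected layer.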
arXiv Detail & Related papers (2021-05-14T19:40:16Z)
- Deep learning for gravitational-wave data analysis: A resampling white-box approach [62.997667081978825]
We apply Convolutional Neural Networks (CNNs) to detect gravitational wave (GW) signals of compact binary coalescences, using single-interferometer data from LIGO detectors.
CNNs were quite precise at detecting noise but not sensitive enough to recall GW signals, meaning that CNNs are better suited to noise reduction than to generating GW triggers.
arXiv Detail & Related papers (2020-09-09T03:28:57Z)
- AutoSimulate: (Quickly) Learning Synthetic Data Generation [70.82315853981838]
We propose an efficient alternative for optimal synthetic data generation based on a novel differentiable approximation of the objective.
We demonstrate that the proposed method finds the optimal data distribution faster (up to $50\times$), with significantly reduced training data generation (up to $30\times$) and better accuracy ($+8.7\%$) on real-world test datasets than previous methods.
arXiv Detail & Related papers (2020-08-16T11:36:11Z)
- Machine learning for complete intersection Calabi-Yau manifolds: a methodological study [0.0]
We revisit the question of predicting Hodge numbers $h^{1,1}$ and $h^{2,1}$ of complete Calabi-Yau intersections using machine learning (ML).
We obtain 97% (resp. 99%) accuracy for $h^{1,1}$ using a neural network inspired by the Inception model for the old dataset, using only 30% (resp. 70%) of the data for training.
For the new one, a simple linear regression leads to almost 100% accuracy with 30% of the data for training.
arXiv Detail & Related papers (2020-07-30T19:43:49Z)
- RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr [60.07531696857743]
Fine-tuning the deep convolution neural network(CNN) using a pre-trained model helps transfer knowledge learned from larger datasets to the target task.
We propose RIFLE - a strategy that deepens backpropagation in transfer learning settings.
RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning.
arXiv Detail & Related papers (2020-07-07T11:27:43Z)
- Sparse Array Selection Across Arbitrary Sensor Geometries with Deep Transfer Learning [22.51807198305316]
Sparse sensor array selection arises in many engineering applications, where it is imperative to obtain maximum spatial resolution from a limited number of array elements.
Recent research shows that the computational complexity of array selection can be reduced by replacing conventional optimization and greedy search methods with a deep learning network.
We adopt a deep transfer learning (TL) approach, wherein we train a deep convolutional neural network (CNN) with data of a source sensor array for which calibrated data are readily available and reuse this pre-trained CNN for a different, data-insufficient target array geometry.
Numerical experiments with uniform rectangular and circular arrays demonstrate the enhanced performance of TL.
arXiv Detail & Related papers (2020-04-24T10:10:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.