Multi-unit soft sensing permits few-shot learning
- URL: http://arxiv.org/abs/2309.15828v2
- Date: Mon, 13 May 2024 08:50:14 GMT
- Title: Multi-unit soft sensing permits few-shot learning
- Authors: Bjarne Grimstad, Kristian Løvland, Lars S. Imsland
- Abstract summary: A performance gain is generally attained when knowledge is transferred among strongly related soft sensor learning tasks.
A particularly relevant case for transferability is when developing soft sensors of the same type for similar, but physically different processes or units.
Applying methods that exploit transferability in this setting leads to what we call multi-unit soft sensing.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent literature has explored various ways to improve soft sensors by utilizing learning algorithms with transferability. A performance gain is generally attained when knowledge is transferred among strongly related soft sensor learning tasks. A particularly relevant case for transferability is the development of soft sensors of the same type for similar, but physically different, processes or units. The data from each unit then presents a soft sensor learning task, and it is reasonable to expect strongly related tasks. Applying methods that exploit transferability in this setting leads to what we call multi-unit soft sensing. This paper formulates multi-unit soft sensing as a probabilistic, hierarchical model, which we implement using a deep neural network. The learning capabilities of the model are studied empirically on a large-scale industrial case by developing virtual flow meters (a type of soft sensor) for 80 petroleum wells. We investigate how the model generalizes with the number of wells/units. Interestingly, we demonstrate that multi-unit models learned from the data of many wells permit few-shot learning of virtual flow meters for new wells. Surprisingly, given the difficulty of the tasks, few-shot learning on only 1-3 data points often leads to high performance on new wells.
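The core idea of the abstract can be illustrated with a toy stand-in: the paper implements a probabilistic, hierarchical model as a deep neural network, but a minimal sketch of the same structure is a linear model where many units share a global parameter and each unit contributes its own offset. Few-shot learning of a new unit then amounts to freezing the shared parameter and estimating only the unit-specific part from a handful of points. This sketch is an assumption-laden simplification, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "multi-unit" data: each unit shares a slope but has its own offset,
# standing in for physically similar but distinct process units.
true_w = 2.0
offsets = rng.normal(0.0, 1.0, size=20)          # one latent offset per training unit

def simulate(unit_offset, n):
    x = rng.uniform(-1, 1, size=n)
    y = true_w * x + unit_offset + rng.normal(0, 0.05, size=n)
    return x, y

# Multi-unit training: pool data from many units, jointly estimating the shared
# slope and each unit's offset (a crude stand-in for the hierarchical NN).
X, Y, unit_ids = [], [], []
for u, off in enumerate(offsets):
    x, y = simulate(off, 30)
    X.append(x); Y.append(y); unit_ids.append(np.full(30, u))
X, Y, unit_ids = map(np.concatenate, (X, Y, unit_ids))

# Design matrix: one shared slope column plus one indicator column per unit offset.
A = np.column_stack([X] + [(unit_ids == u).astype(float) for u in range(len(offsets))])
theta, *_ = np.linalg.lstsq(A, Y, rcond=None)
w_hat = theta[0]                                  # shared parameter, learned from all units

# Few-shot adaptation: for a NEW unit, freeze w_hat and estimate only the
# unit-specific offset from 3 data points.
new_offset = 0.7
x_new, y_new = simulate(new_offset, 3)
offset_hat = np.mean(y_new - w_hat * x_new)
```

The shared slope is recovered from the pooled data, so the 3-point estimate only has to pin down one scalar, which is why so few samples suffice in this toy setting.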
Related papers
- A deep latent variable model for semi-supervised multi-unit soft sensing in industrial processes
We introduce a deep latent variable model for semi-supervised multi-unit soft sensing.
This hierarchical, generative model is able to jointly model different units, as well as learning from both labeled and unlabeled data.
We show that by combining semi-supervised and multi-task learning, the proposed model achieves superior results.
arXiv Detail & Related papers (2024-07-18T09:13:22Z)
- Machine Learning Based Compensation for Inconsistencies in Knitted Force Sensors
Knitted sensors frequently suffer from inconsistencies due to innate effects such as offset, relaxation, and drift.
In this paper, we demonstrate a method for counteracting these effects by processing the sensor signal with a minimal artificial neural network (ANN).
By training a three-layer ANN with a total of 8 neurons, we manage to significantly improve the mapping between sensor reading and actuation force.
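A network this small can be written out in full. The sketch below trains a tiny tanh MLP to invert a hypothetical sensor distortion; the 5/3 split of the 8 neurons across two hidden layers and the synthetic distortion are assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for a knitted-sensor reading: the raw reading is a
# nonlinearly distorted version of the true actuation force.
force = rng.uniform(0, 1, size=256)
reading = np.tanh(2.0 * force) + 0.05 * force**2   # hypothetical distortion

# Minimal MLP: 1 input -> 5 tanh units -> 3 tanh units -> 1 linear output.
W1 = rng.normal(0, 0.5, (1, 5)); b1 = np.zeros(5)
W2 = rng.normal(0, 0.5, (5, 3)); b2 = np.zeros(3)
W3 = rng.normal(0, 0.5, (3, 1)); b3 = np.zeros(1)

def forward(x):
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)
    return h1, h2, h2 @ W3 + b3

x = reading.reshape(-1, 1)
t = force.reshape(-1, 1)
lr = 0.1
for _ in range(2000):                              # full-batch gradient descent
    h1, h2, y = forward(x)
    err = y - t                                    # gradient of 0.5*MSE w.r.t. output
    gW3 = h2.T @ err / len(x); gb3 = err.mean(0)
    d2 = (err @ W3.T) * (1 - h2**2)                # backprop through tanh
    gW2 = h1.T @ d2 / len(x); gb2 = d2.mean(0)
    d1 = (d2 @ W2.T) * (1 - h1**2)
    gW1 = x.T @ d1 / len(x); gb1 = d1.mean(0)
    W3 -= lr * gW3; b3 -= lr * gb3
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, _, pred = forward(x)
mse = float(np.mean((pred - t) ** 2))              # calibrated reading -> force error
```

Even with only 8 hidden neurons, the fitted mapping reduces the error well below that of predicting the mean force, which is the point the abstract makes about minimal ANNs.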
arXiv Detail & Related papers (2023-06-21T09:19:33Z)
- Online Active Learning for Soft Sensor Development using Semi-Supervised Autoencoders
Data-driven soft sensors are extensively used in industrial and chemical processes to predict hard-to-measure process variables.
Active learning methods can be highly beneficial as they can suggest the most informative labels to query.
In this work, we adapt some of these approaches to the stream-based scenario and show how they can be used to select the most informative data points.
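Stream-based selection of informative points can be sketched with a generic query-by-committee heuristic: a bootstrap ensemble disagreeing on an incoming point signals that its label is worth querying. This is a stand-in for the idea, not the paper's semi-supervised autoencoder method; the process function and thresholds below are made up.

```python
import warnings
import numpy as np

warnings.simplefilter("ignore")    # polyfit may warn on degenerate bootstrap samples
rng = np.random.default_rng(3)

# Hypothetical hard-to-measure process variable as a function of an easy reading.
def process(x):
    return np.sin(3 * x)

# Start with a few labeled points; disagreement among bootstrap cubic fits
# serves as the uncertainty estimate.
X_lab = list(rng.uniform(-1, 1, 8))
y_lab = [process(x) for x in X_lab]

def fit_ensemble(k=5, deg=3):
    models = []
    for _ in range(k):
        idx = rng.integers(0, len(X_lab), len(X_lab))   # bootstrap resample
        xs = np.array(X_lab)[idx]
        models.append(np.polyfit(xs, np.array(y_lab)[idx], deg))
    return models

# Stream-based selection: query a label only when the ensemble disagrees.
threshold = 0.05
queried = 0
for x in rng.uniform(-1, 1, 200):                   # incoming unlabeled stream
    preds = [np.polyval(m, x) for m in fit_ensemble()]
    if np.var(preds) > threshold:                   # informative point -> query its label
        X_lab.append(float(x))
        y_lab.append(process(x))
        queried += 1
```

As labels accumulate, ensemble variance shrinks and queries become rare, so the labeling budget concentrates on the early, most informative points.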
arXiv Detail & Related papers (2022-12-26T09:45:41Z)
- Transfer Learning with Deep Tabular Models
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
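The pseudo-feature idea, reduced to its simplest form: when a downstream table is missing a column that the upstream table has, fit a model of the missing column on the upstream data and append its predictions downstream so the schemas match. The linear model and column names below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)

# Upstream table has two features; the downstream table is missing the second one.
n_up = 500
f1_up = rng.normal(size=n_up)
f2_up = 0.8 * f1_up + rng.normal(0, 0.3, size=n_up)   # correlated feature

# Fit a simple linear "pseudo-feature" model f2 ~ f1 on the upstream data.
slope, intercept = np.polyfit(f1_up, f2_up, 1)

# Downstream rows only carry f1; synthesize the missing column.
f1_down = rng.normal(size=100)
f2_pseudo = slope * f1_down + intercept

X_down = np.column_stack([f1_down, f2_pseudo])        # now matches the upstream schema
```

With the schemas aligned, a model pretrained on the upstream features can be applied or fine-tuned downstream without architectural changes.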
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- More to Less (M2L): Enhanced Health Recognition in the Wild with Reduced Modality of Wearable Sensors
Fusing multiple sensors is common in many applications, but may not always be feasible in real-world deployments.
We propose an effective more to less (M2L) learning framework to improve testing performance with reduced sensors.
arXiv Detail & Related papers (2022-02-16T18:23:29Z)
- Bayesian Imitation Learning for End-to-End Mobile Manipulation
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities.
arXiv Detail & Related papers (2022-02-15T17:38:30Z)
- Deep invariant networks with differentiable augmentation layers
Methods for learning data augmentation policies require held-out data and are based on bilevel optimization problems.
We show that our approach is easier and faster to train than modern automatic data augmentation techniques.
arXiv Detail & Related papers (2022-02-04T14:12:31Z)
- Representation Learning for Remote Sensing: An Unsupervised Sensor Fusion Approach
We propose Contrastive Sensor Fusion, which exploits coterminous data from multiple sources to learn useful representations of every possible combination of those sources.
Using a dataset of 47 million unlabeled coterminous image triplets, we train an encoder to produce meaningful representations from any possible combination of channels from the input sensors.
These representations outperform fully supervised ImageNet weights on a remote sensing classification task and improve as more sensors are fused.
arXiv Detail & Related papers (2021-08-11T08:32:58Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Laplacian Denoising Autoencoder
We propose to learn data representations with a novel type of denoising autoencoder.
The noisy input data is generated by corrupting latent clean data in the gradient domain.
Experiments on several visual benchmarks demonstrate that better representations can be learned with the proposed approach.
arXiv Detail & Related papers (2020-03-30T16:52:39Z)
- Deep Soft Procrustes for Markerless Volumetric Sensor Alignment
In this work, we improve markerless data-driven correspondence estimation to achieve more robust multi-sensor spatial alignment.
We incorporate geometric constraints in an end-to-end manner into a typical segmentation based model and bridge the intermediate dense classification task with the targeted pose estimation one.
Our model is experimentally shown to achieve similar results with marker-based methods and outperform the markerless ones, while also being robust to the pose variations of the calibration structure.
arXiv Detail & Related papers (2020-03-23T10:51:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.