Few-Shot Learning for Industrial Time Series: A Comparative Analysis Using the Example of Screw-Fastening Process Monitoring
- URL: http://arxiv.org/abs/2506.13909v1
- Date: Mon, 16 Jun 2025 18:38:34 GMT
- Title: Few-Shot Learning for Industrial Time Series: A Comparative Analysis Using the Example of Screw-Fastening Process Monitoring
- Authors: Xinyuan Tu, Haocheng Zhang, Tao Chengxu, Zuyi Chen
- Abstract summary: Few-shot learning has shown promise in vision but remains largely unexplored for industrial time-series data. We present a systematic FSL study on screw-fastening process monitoring, using a 2,300-sample multivariate torque dataset. We introduce a label-aware episodic sampler that collapses multi-label sequences into multiple single-label tasks.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-shot learning (FSL) has shown promise in vision but remains largely unexplored for industrial time-series data, where annotating every new defect is prohibitively expensive. We present a systematic FSL study on screw-fastening process monitoring, using a 2,300-sample multivariate torque dataset that covers 16 uni- and multi-factorial defect types. Beyond benchmarking, we introduce a label-aware episodic sampler that collapses multi-label sequences into multiple single-label tasks, keeping the output dimensionality fixed while preserving combinatorial label information. Two FSL paradigms are investigated: the metric-based Prototypical Network and the gradient-based Model-Agnostic Meta-Learning (MAML), each paired with three backbones: a 1D CNN, InceptionTime, and the 341M-parameter transformer Moment. On 10-shot, 3-way evaluation, the InceptionTime + Prototypical Network combination achieves a 0.944 weighted F1 in the multi-class regime and 0.935 in the multi-label regime, outperforming fine-tuned Moment by up to 5.3% while requiring two orders of magnitude fewer parameters and less training time. Across all backbones, metric learning consistently surpasses MAML, and our label-aware sampling yields an additional 1.7% F1 over traditional class-based sampling. These findings challenge the assumption that large foundation models are always superior: when data are scarce, lightweight CNN architectures augmented with simple metric learning not only converge faster but also generalize better. We release code, data splits and pre-trained weights to foster reproducible research and to catalyze the adoption of FSL in high-value manufacturing inspection.
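To make the two ingredients named in the abstract concrete, below is a minimal Python sketch of (a) a label-aware episodic sampler that expands multi-label sequences into single-label N-way, K-shot tasks, and (b) prototypical-network scoring over an embedding. It is a paraphrase of the idea under assumed data structures (`sequences` as arrays, `label_sets` as per-sample label collections), not the authors' released implementation.

```python
import random
import numpy as np

def label_aware_episodes(sequences, label_sets, n_way=3, k_shot=10, n_query=5, rng=None):
    """Sketch of label-aware episodic sampling: a multi-label sequence is indexed
    under every label it carries, so each N-way episode has single-label targets
    while combinatorial label information is preserved in the sampling pool.
    (A fuller version would deduplicate sequences carrying several episode labels.)"""
    rng = rng or random.Random(0)
    by_label = {}
    for i, labels in enumerate(label_sets):
        for lab in labels:
            by_label.setdefault(lab, []).append(i)
    # Only labels with enough examples for both support and query are eligible.
    eligible = [l for l, idx in by_label.items() if len(idx) >= k_shot + n_query]
    episode_classes = rng.sample(eligible, n_way)
    support, query = [], []
    for cls_id, lab in enumerate(episode_classes):
        picked = rng.sample(by_label[lab], k_shot + n_query)
        support += [(sequences[i], cls_id) for i in picked[:k_shot]]
        query += [(sequences[i], cls_id) for i in picked[k_shot:]]
    return support, query

def prototypical_logits(embed, support, query, n_way):
    """Prototypical-network scoring: class prototypes are mean support embeddings;
    queries are scored by negative squared Euclidean distance to each prototype."""
    protos = []
    for c in range(n_way):
        emb = np.stack([embed(x) for x, y in support if y == c])
        protos.append(emb.mean(axis=0))
    protos = np.stack(protos)                   # (n_way, d)
    q = np.stack([embed(x) for x, _ in query])  # (n_way * n_query, d)
    d2 = ((q[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return -d2                                  # higher = closer prototype
```

With a trained backbone `f` returning a 1-D embedding, an episode would be scored as `prototypical_logits(f, *label_aware_episodes(X, Y), n_way=3)`, followed by a per-episode softmax cross-entropy step, as in standard prototypical-network training.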
Related papers
- TSPulse: Dual Space Tiny Pre-Trained Models for Rapid Time-Series Analysis (arXiv 2025-05-19)
  TSPulse is an ultra-compact time-series pre-trained model with only 1M parameters, making it 10-100x smaller than existing pre-trained models. It performs strongly across classification, anomaly detection, imputation, and retrieval tasks.
- An Adaptive Plug-and-Play Network for Few-Shot Learning (arXiv 2023-02-18)
  Few-shot learning requires a model to classify new samples after learning from only a few examples. Deep networks and complex metrics tend to induce overfitting, making further performance gains difficult. The paper proposes a plug-and-play model-adaptive resizer (MAR) and an adaptive similarity metric (ASM) without any additional losses.
- Boosting Low-Data Instance Segmentation by Unsupervised Pre-training with Saliency Prompt (arXiv 2023-02-02)
  This work offers a novel unsupervised pre-training solution for low-data regimes. Inspired by the recent success of prompting techniques, it introduces a new pre-training method that boosts QEIS models, with significant gains on three datasets.
- Pushing the Limits of Simple Pipelines for Few-Shot Learning: External Data and Fine-Tuning Make a Difference (arXiv 2022-04-15)
  Few-shot learning is an important and topical problem in computer vision. The paper shows that a simple transformer-based pipeline yields surprisingly good performance on standard benchmarks.
- Adaptive Memory Networks with Self-supervised Learning for Unsupervised Anomaly Detection (arXiv 2022-01-03)
  Unsupervised anomaly detection aims to detect unseen anomalies by training only on normal data. The proposed Adaptive Memory Network with Self-supervised Learning (AMSL) combines a self-supervised learning module that learns general normal patterns with an adaptive memory fusion module that learns rich feature representations.
- Few-shot Learning via Dependency Maximization and Instance Discriminant Analysis (arXiv 2021-09-07)
  The paper studies few-shot learning, where a model learns to recognize new objects from extremely few labeled examples per category, and proposes a simple approach that exploits the unlabeled data accompanying a few-shot task to improve performance.
- Meta-Generating Deep Attentive Metric for Few-shot Classification (arXiv 2020-12-03)
  This work presents a deep metric meta-generation method that produces a task-specific metric for each new few-shot learning task, structured as a three-layer deep attentive network flexible enough to yield a discriminative metric per task. It reports clear improvements over state-of-the-art competitors, especially in challenging cases.
- Understanding Self-supervised Learning with Dual Deep Networks (arXiv 2020-10-01)
  The paper proposes a framework for understanding contrastive self-supervised learning (SSL) methods that employ dual pairs of deep ReLU networks. It proves that in each SGD update of SimCLR with various loss functions, the weights at each layer are updated by a covariance operator (a schematic form is sketched after this list). To study what role this operator plays and which features are learned, the data generation and augmentation processes are modeled through a hierarchical latent tree model (HLTM).
- MetricUNet: Synergistic Image- and Voxel-Level Learning for Precise CT Prostate Segmentation via Online Sampling (arXiv 2020-05-15)
  A two-stage framework first quickly localizes the prostate region and then precisely segments it. A novel online metric learning module with voxel-wise sampling in the multi-task network learns more representative voxel-level features than conventional training with cross-entropy or Dice loss.
- Adversarial Encoder-Multi-Task-Decoder for Multi-Stage Processes (arXiv 2020-03-15)
  In multi-stage processes, decisions occur in an ordered sequence of stages. The paper introduces a framework combining adversarial autoencoders (AAE), multi-task learning (MTL), and multi-label semi-supervised learning (MLSSL), and shows on real-world data from different domains that it outperforms other state-of-the-art methods.
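The one-sentence "covariance operator" summary for Understanding Self-supervised Learning with Dual Deep Networks above compresses a formal theorem; the following is only a schematic rendering in assumed notation, not the paper's exact statement:

```latex
% Schematic only: W_l are the layer-l weights, \eta the learning rate, and
% \mathrm{OP}_l a positive semi-definite covariance operator induced by the
% data and augmentation distribution as seen at layer l (notation assumed).
W_l \;\leftarrow\; W_l + \eta \, \mathrm{OP}_l \, W_l
```

The point of the result, as summarized above, is that the SimCLR gradient step does not depend on arbitrary details of the loss but acts on each layer through a covariance structure of augmented views.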
This list is automatically generated from the titles and abstracts of papers on this site.