Comparing AutoML and Deep Learning Methods for Condition Monitoring
using Realistic Validation Scenarios
- URL: http://arxiv.org/abs/2308.14632v1
- Date: Mon, 28 Aug 2023 14:57:29 GMT
- Title: Comparing AutoML and Deep Learning Methods for Condition Monitoring
using Realistic Validation Scenarios
- Authors: Payman Goodarzi, Andreas Schütze, Tizian Schneider
- Abstract summary: This study extensively compares conventional machine learning methods and deep learning for condition monitoring tasks using an AutoML toolbox.
Experiments reveal consistently high accuracy under random K-fold cross-validation across all tested models.
Under leave-one-group-out (LOGO) cross-validation, however, no clear winner emerges, indicating the presence of domain shift in real-world scenarios.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study extensively compares conventional machine learning methods and
deep learning for condition monitoring tasks using an AutoML toolbox. The
experiments reveal consistently high accuracy in random K-fold cross-validation
scenarios across all tested models. However, when employing leave-one-group-out
(LOGO) cross-validation on the same datasets, no clear winner emerges,
indicating the presence of domain shift in real-world scenarios. Additionally,
the study assesses the scalability and interpretability of conventional methods
and neural networks. Conventional methods offer explainability through their
modular structure, which aids feature identification. In contrast, neural networks
require specialized interpretation techniques like occlusion maps to visualize
important regions in the input data. Finally, the paper highlights the
significance of feature selection, particularly in condition monitoring tasks
with limited class variations. Low-complexity models prove sufficient for such
tasks, as only a few features from the input signal are typically needed. In
summary, these findings offer crucial insights into the strengths and
limitations of various approaches, providing valuable benchmarks and
identifying the most suitable methods for condition monitoring applications,
thereby enhancing their applicability in real-world scenarios.
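To make the validation setup concrete, here is a minimal sketch of the two cross-validation schemes the abstract contrasts, assuming scikit-learn and placeholder data; the dataset, model choice, and group labels are illustrative and not the paper's AutoML toolbox.

```python
# Minimal sketch (not the paper's toolbox): random K-fold vs. leave-one-group-out
# (LOGO) cross-validation. `groups` is assumed to encode the measurement run or
# machine instance, so LOGO approximates deployment on unseen conditions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 32))           # placeholder feature matrix
y = rng.integers(0, 3, size=600)         # placeholder class labels
groups = np.repeat(np.arange(6), 100)    # placeholder group labels (6 runs)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# Random K-fold: samples from the same group can land in both train and test
# folds, which tends to give optimistic accuracy when domain shift is present.
kfold_acc = cross_val_score(clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# LOGO: each group is held out entirely, exposing the domain shift that the
# abstract says random K-fold hides.
logo_acc = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())

print("Random K-fold accuracy:", kfold_acc.mean())
print("LOGO accuracy:         ", logo_acc.mean())
```

On real condition-monitoring data, the gap between these two scores is the signal the paper uses to argue for realistic validation scenarios; on the random placeholder data above, both scores sit near chance.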
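The abstract names occlusion maps as the interpretation technique needed for neural networks. The sketch below illustrates the general idea for a 1-D sensor signal; `predict_proba` is an assumed scikit-learn-style interface for any trained classifier over raw signal windows, not the paper's model.

```python
# Minimal occlusion-map sketch (illustrative, not the paper's implementation):
# slide a zero mask over the input signal and record how much the model's
# confidence in the true class drops when each region is hidden.
import numpy as np

def occlusion_map(predict_proba, signal, true_class, window=16, stride=8):
    """Per-sample importance: average confidence drop when that region is masked."""
    baseline = predict_proba(signal[np.newaxis, :])[0, true_class]
    importance = np.zeros(len(signal))
    counts = np.zeros(len(signal))
    for start in range(0, len(signal) - window + 1, stride):
        occluded = signal.copy()
        occluded[start:start + window] = 0.0                     # mask one region
        p = predict_proba(occluded[np.newaxis, :])[0, true_class]
        importance[start:start + window] += baseline - p
        counts[start:start + window] += 1
    return importance / np.maximum(counts, 1)                    # average over overlaps
```

Regions with a large confidence drop are the parts of the input the network relies on, which is the visualization role the abstract assigns to occlusion maps.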
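On the feature-selection point, here is a hedged sketch of the kind of low-complexity pipeline the abstract alludes to: a handful of statistical features per signal window followed by top-k selection and a simple classifier. The feature set and k are illustrative assumptions, not the configuration of the paper's AutoML toolbox.

```python
# Minimal sketch (assumptions, not the paper's pipeline): extract a few
# statistical features from raw sensor windows, keep the most informative ones,
# and fit a low-complexity classifier.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def statistical_features(signals):
    """signals: (n_samples, n_timesteps) raw windows -> (n_samples, 5) features."""
    return np.column_stack([
        signals.mean(axis=1),                                    # mean
        signals.std(axis=1),                                     # standard deviation
        np.abs(signals).max(axis=1),                             # peak amplitude
        np.sqrt((signals ** 2).mean(axis=1)),                    # RMS
        ((signals[:, 1:] * signals[:, :-1]) < 0).mean(axis=1),   # zero-crossing rate
    ])

rng = np.random.default_rng(0)
signals = rng.normal(size=(300, 2048))     # placeholder raw sensor windows
y = rng.integers(0, 2, size=300)           # placeholder condition labels

X = statistical_features(signals)
model = make_pipeline(SelectKBest(f_classif, k=3), LogisticRegression(max_iter=1000))
model.fit(X, y)
```

Because only a few such features typically carry the class information in condition-monitoring tasks with limited class variations, a model of this complexity is often sufficient, which is the point the abstract closes on.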
Related papers
- Training-free Anomaly Event Detection via LLM-guided Symbolic Pattern Discovery [70.75963253876628]
Anomaly event detection plays a crucial role in various real-world applications.
We present a training-free framework that integrates open-set object detection with symbolic regression.
arXiv Detail & Related papers (2025-02-09T10:30:54Z)
- Unsupervised Model Diagnosis [49.36194740479798]
This paper proposes Unsupervised Model Diagnosis (UMO) to produce semantic counterfactual explanations without any user guidance.
Our approach identifies and visualizes changes in semantics, and then matches these changes to attributes from wide-ranging text sources.
arXiv Detail & Related papers (2024-10-08T17:59:03Z)
- Learning Prompt-Enhanced Context Features for Weakly-Supervised Video Anomaly Detection [37.99031842449251]
Video anomaly detection under weak supervision presents significant challenges.
We present a weakly supervised anomaly detection framework that focuses on efficient context modeling and enhanced semantic discriminability.
Our approach significantly improves the detection accuracy of certain anomaly sub-classes, underscoring its practical value and efficacy.
arXiv Detail & Related papers (2023-06-26T06:45:16Z)
- Evaluating the Label Efficiency of Contrastive Self-Supervised Learning for Multi-Resolution Satellite Imagery [0.0]
Self-supervised learning has been applied in the remote sensing domain to exploit readily-available unlabeled data.
In this paper, we study self-supervised visual representation learning through the lens of label efficiency.
arXiv Detail & Related papers (2022-10-13T06:54:13Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Hybridization of Capsule and LSTM Networks for unsupervised anomaly detection on multivariate data [0.0]
This paper introduces a novel NN architecture which hybridises the Long-Short-Term-Memory (LSTM) and Capsule Networks into a single network.
The proposed method uses an unsupervised learning technique to overcome the issues with finding large volumes of labelled training data.
arXiv Detail & Related papers (2022-02-11T10:33:53Z)
- Learning a Domain-Agnostic Visual Representation for Autonomous Driving via Contrastive Loss [25.798361683744684]
Domain-Agnostic Contrastive Learning (DACL) is a two-stage unsupervised domain adaptation framework with cyclic adversarial training and contrastive loss.
Our proposed approach achieves better performance in the monocular depth estimation task compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-10T07:06:03Z)
- Self-supervised driven consistency training for annotation efficient histopathology image analysis [13.005873872821066]
Training a neural network with a large labeled dataset is still a dominant paradigm in computational histopathology.
We propose a self-supervised pretext task that harnesses the underlying multi-resolution contextual cues in histology whole-slide images to learn a powerful supervisory signal for unsupervised representation learning.
We also propose a new teacher-student semi-supervised consistency paradigm that learns to effectively transfer the pretrained representations to downstream tasks based on prediction consistency with the task-specific un-labeled data.
arXiv Detail & Related papers (2021-02-07T19:46:21Z)
- Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z)
- A Trainable Optimal Transport Embedding for Feature Aggregation and its Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
- Learning What Makes a Difference from Counterfactual Examples and Gradient Supervision [57.14468881854616]
We propose an auxiliary training objective that improves the generalization capabilities of neural networks.
We use pairs of minimally-different examples with different labels, a.k.a. counterfactual or contrasting examples, which provide a signal indicative of the underlying causal structure of the task.
Models trained with this technique demonstrate improved performance on out-of-distribution test sets.
arXiv Detail & Related papers (2020-04-20T02:47:49Z)