Quadruplet Deep Metric Learning Model for Imbalanced Time-series Fault
Diagnosis
- URL: http://arxiv.org/abs/2107.03786v1
- Date: Thu, 8 Jul 2021 11:56:41 GMT
- Title: Quadruplet Deep Metric Learning Model for Imbalanced Time-series Fault
Diagnosis
- Authors: Xingtai Gui, Jiyang Zhang
- Abstract summary: This paper analyzes how to improve the performance of imbalanced classification by adjusting the inter-class distance and the intra-class distribution.
A novel quadruplet data-pair design that accounts for class imbalance is proposed, with reference to traditional deep metric learning.
A suitable combination of the quadruplet loss and the softmax loss function reduces the impact of imbalance.
- Score: 0.2538209532048866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent diagnosis based on data-driven deep learning has become an
attractive and meaningful field in recent years. In practical application
scenarios, however, the imbalance of time-series fault data is an urgent
problem to be solved. From the perspective of Bayesian probability, this paper
analyzes how to improve the performance of imbalanced classification by
adjusting the inter-class distance and the intra-class distribution, and
proposes a time-series fault diagnosis model based on deep metric learning. As
the core of deep metric learning, a novel quadruplet data-pair design that
accounts for class imbalance is proposed with reference to traditional deep
metric learning. Based on such data pairs, this paper proposes a quadruplet
loss function that takes both the inter-class distance and the intra-class
data distribution into account and pays special attention to imbalanced sample
pairs. A suitable combination of the quadruplet loss and the softmax loss
function reduces the impact of imbalance. Experiments on two open datasets
verify the effectiveness and robustness of the model. The results show that
the proposed method effectively improves the performance of imbalanced
classification.
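The abstract gives the structure of the objective (a quadruplet metric term combined with a softmax term) but not the exact pair construction. As a rough illustration only, here is a minimal PyTorch sketch of a classical quadruplet margin loss in the style of Chen et al. (2017); the function names, margins, and weighting coefficient `lam` are assumptions, and the paper's imbalance-aware sampling of the four elements is not reproduced.

```python
import torch
import torch.nn.functional as F

def quadruplet_loss(anchor, positive, negative1, negative2,
                    margin1=1.0, margin2=0.5):
    """Classical quadruplet margin loss (Chen et al., 2017 style).

    The first term enforces an inter-class margin between the
    anchor-positive and anchor-negative distances; the second term
    tightens the intra-class distribution by pushing the anchor-positive
    distance below the distance between samples of two other classes.
    """
    d_ap = F.pairwise_distance(anchor, positive)      # intra-class distance
    d_an = F.pairwise_distance(anchor, negative1)     # inter-class distance
    d_nn = F.pairwise_distance(negative1, negative2)  # between two other classes

    inter_term = F.relu(d_ap.pow(2) - d_an.pow(2) + margin1)
    intra_term = F.relu(d_ap.pow(2) - d_nn.pow(2) + margin2)
    return (inter_term + intra_term).mean()

def combined_loss(logits, labels, anchor, positive, negative1, negative2,
                  lam=0.5):
    # Weighted sum of the softmax (cross-entropy) loss and the metric
    # loss, mirroring the combination described in the abstract; the
    # weight lam is an assumption for illustration.
    return F.cross_entropy(logits, labels) + lam * quadruplet_loss(
        anchor, positive, negative1, negative2)
```

In the paper's design, the quadruplet would additionally be sampled with special attention to minority-class pairs; that sampling policy depends on details not given in the abstract.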
Related papers
- Investigating Group Distributionally Robust Optimization for Deep
Imbalanced Learning: A Case Study of Binary Tabular Data Classification [0.44040106718326594]
Group distributionally robust optimization (gDRO) is investigated in this study for imbalanced learning.
Experiments comparing gDRO with empirical risk minimization (ERM) and classical imbalance methods reveal its impressive performance.
arXiv Detail & Related papers (2023-03-04T21:20:58Z)
- Prototype-Anchored Learning for Learning with Imperfect Annotations [83.7763875464011]
It is challenging to learn unbiased classification models from imperfectly annotated datasets.
We propose a prototype-anchored learning (PAL) method, which can be easily incorporated into various learning-based classification schemes.
We verify the effectiveness of PAL on class-imbalanced learning and noise-tolerant learning by extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2022-06-23T10:25:37Z)
- Phased Progressive Learning with Coupling-Regulation-Imbalance Loss for
Imbalanced Classification [11.673344551762822]
Deep neural networks generally perform poorly with datasets that suffer from quantity imbalance and classification difficulty imbalance between different classes.
A phased progressive learning schedule is proposed for smoothly transferring the training emphasis from representation learning to upper classifier training.
Our code will be open source soon.
arXiv Detail & Related papers (2022-05-24T14:46:39Z)
- Analyzing the Effects of Handling Data Imbalance on Learned Features
from Medical Images by Looking Into the Models [50.537859423741644]
Training a model on an imbalanced dataset can introduce unique challenges to the learning problem.
We look deeper into the internal units of neural networks to observe how handling data imbalance affects the learned features.
arXiv Detail & Related papers (2022-04-04T09:38:38Z)
- Double Robust Representation Learning for Counterfactual Prediction [68.78210173955001]
We propose a novel scalable method to learn double-robust representations for counterfactual predictions.
We make robust and efficient counterfactual predictions for both individual and average treatment effects.
The algorithm shows competitive performance with the state-of-the-art on real world and synthetic data.
arXiv Detail & Related papers (2020-10-15T16:39:26Z)
- Imbalanced Image Classification with Complement Cross Entropy [10.35173901214638]
This work studies cross entropy, which mostly ignores output scores on incorrect classes.
It finds that neutralizing predicted probabilities on incorrect classes improves prediction accuracy for imbalanced image classification.
The proposed loss makes the ground-truth class overwhelm the other classes in terms of softmax probability (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-09-04T13:46:24Z)
- Mitigating Dataset Imbalance via Joint Generation and Classification [17.57577266707809]
Supervised deep learning methods are enjoying enormous success in many practical applications of computer vision.
However, their marked performance degradation under biases and imbalanced data calls the reliability of these methods into question.
We introduce a joint dataset repairment strategy by combining a neural network classifier with Generative Adversarial Networks (GANs).
We show that the combined training helps to improve the robustness of both the classifier and the GAN against severe class imbalance.
arXiv Detail & Related papers (2020-08-12T18:40:38Z)
- Learning while Respecting Privacy and Robustness to Distributional
Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Provable tradeoffs in adversarially robust classification [96.48180210364893]
We develop and leverage new tools, including recent breakthroughs from probability theory on robust isoperimetry.
Our results reveal fundamental tradeoffs between standard and robust accuracy that grow when data is imbalanced.
arXiv Detail & Related papers (2020-06-09T09:58:19Z)
- Long-Tailed Recognition Using Class-Balanced Experts [128.73438243408393]
We propose an ensemble of class-balanced experts that combines the strength of diverse classifiers.
Our ensemble of class-balanced experts reaches results close to the state of the art, and an extended ensemble establishes a new state of the art on two benchmarks for long-tailed recognition.
arXiv Detail & Related papers (2020-04-07T20:57:44Z)
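The complement cross entropy entry above describes its mechanism concretely, so a brief sketch may help. It follows the complement-objective idea that the entry builds on: renormalize the softmax mass over the incorrect classes and maximize its entropy so those classes are flattened. The coefficient gamma, the clamping constants, and the 1/(K-1) normalization are assumptions for illustration, not that paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def complement_cross_entropy(logits, labels, gamma=1.0):
    """Cross entropy minus a scaled complement-entropy term; maximizing
    entropy over the incorrect classes flattens their probabilities, so
    the ground-truth class overwhelms the rest in softmax probability."""
    probs = F.softmax(logits, dim=1)
    n_classes = probs.size(1)
    p_true = probs.gather(1, labels.unsqueeze(1))  # p(ground-truth class)
    # Renormalize the leftover probability mass over the incorrect classes.
    p_comp = probs / (1.0 - p_true).clamp(min=1e-7)
    # Zero out the ground-truth column so only incorrect classes count.
    mask = torch.ones_like(probs).scatter_(1, labels.unsqueeze(1), 0.0)
    comp_entropy = -(p_comp * p_comp.clamp(min=1e-7).log() * mask).sum(dim=1)
    # Minimizing the negated term maximizes the complement entropy; the
    # (K - 1) factor keeps it on a scale comparable to the cross entropy.
    return F.cross_entropy(logits, labels) - gamma * comp_entropy.mean() / (n_classes - 1)

# Usage: loss = complement_cross_entropy(model(x), y)
```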