Learn to Predict Vertical Track Irregularity with Extremely Imbalanced Data
- URL: http://arxiv.org/abs/2012.03062v2
- Date: Sun, 9 May 2021 02:21:57 GMT
- Title: Learn to Predict Vertical Track Irregularity with Extremely Imbalanced Data
- Authors: Yutao Chen, Yu Zhang, Fei Yang
- Abstract summary: We showcase an application framework for predicting vertical track irregularity, based on a real-world, large-scale dataset produced by several operating railways in China.
We also propose a novel approach for handling imbalanced data in time series prediction tasks, using adaptive data sampling and a penalized loss.
- Score: 6.448383767373112
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Railway systems require regular manual maintenance, a large part of which is dedicated to inspecting track deformation. Such deformation can severely impact trains' operational safety, yet inspections remain costly in both money and labor. A more precise and efficient approach to detecting railway track deformation is therefore urgently needed. In this paper, we showcase an application framework for predicting vertical track irregularity, based on a real-world, large-scale dataset produced by several operating railways in China. We conducted extensive experiments with various machine learning and ensemble learning algorithms to maximize the models' ability to capture irregularities. We also propose a novel approach for handling imbalanced data in multivariate time series prediction tasks, combining adaptive data sampling with a penalized loss. This approach has been shown to reduce models' sensitivity to the imbalanced target domain, improving their performance in predicting rare extreme values.
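The abstract names two concrete mechanisms, adaptive data sampling and a penalized loss, without giving code. Below is a minimal PyTorch sketch of how such a pair might look for an imbalanced regression target; the function names and the threshold, penalty, and boost values are illustrative assumptions, not the paper's implementation.

```python
import torch
from torch.utils.data import WeightedRandomSampler

def penalized_mse(pred, target, threshold=1.0, penalty=5.0):
    # Up-weight the squared error wherever the ground truth is a rare
    # extreme irregularity value (|y| above `threshold`).
    weights = torch.where(target.abs() > threshold,
                          torch.full_like(target, penalty),
                          torch.ones_like(target))
    return (weights * (pred - target) ** 2).mean()

def adaptive_sampler(targets, threshold=1.0, boost=10.0):
    # Oversample training windows whose label is a rare extreme value.
    weights = torch.where(targets.abs() > threshold,
                          torch.full_like(targets, boost),
                          torch.ones_like(targets))
    return WeightedRandomSampler(weights.double(),
                                 num_samples=len(weights),
                                 replacement=True)
```

The sampler would typically be passed to a `DataLoader` so that rare-extreme windows are drawn more often, while the penalized loss keeps their errors expensive even when they do appear.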
Related papers
- TAB: Text-Align Anomaly Backbone Model for Industrial Inspection Tasks [12.660226544498023]
We propose a novel framework for training a backbone model tailored to the manufacturing domain.
Our approach concurrently considers visual and text-aligned embedding spaces for normal and abnormal conditions.
The resulting pre-trained backbone markedly enhances performance in industrial downstream tasks.
arXiv Detail & Related papers (2023-12-15T01:37:29Z)
- FaultFormer: Pretraining Transformers for Adaptable Bearing Fault Classification [7.136205674624813]
We present a novel self-supervised pretraining and fine-tuning framework based on transformer models.
In particular, we investigate different tokenization and data augmentation strategies to reach state-of-the-art accuracies.
This introduces a new paradigm where models can be pretrained on unlabeled data from different bearings, faults, and machinery and quickly deployed to new, data-scarce applications.
arXiv Detail & Related papers (2023-12-04T22:51:02Z)
- DTC: Deep Tracking Control [16.2850135844455]
We propose a hybrid control architecture that combines the advantages of both worlds to achieve greater robustness, foot-placement accuracy, and terrain generalization.
A deep neural network policy is trained in simulation, aiming to track the optimized footholds.
We demonstrate superior robustness in the presence of slippery or deformable ground when compared to model-based counterparts.
arXiv Detail & Related papers (2023-09-27T07:57:37Z)
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatially quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Towards Accelerated Model Training via Bayesian Data Selection [45.62338106716745]
Recent work has proposed a more reasonable data selection principle by examining the data's impact on the model's generalization loss.
This work makes that principle practical by leveraging a lightweight Bayesian treatment and incorporating off-the-shelf zero-shot predictors built on large-scale pre-trained models.
arXiv Detail & Related papers (2023-08-21T07:58:15Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error in both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
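As a rough illustration of the idea in this summary (the paper's exact formulation is not reproduced here), a train-time calibration term can penalize the gap between a batch's mean confidence and its accuracy; the `beta` weight and the function names below are assumptions.

```python
import torch
import torch.nn.functional as F

def calibration_aux_loss(logits, labels):
    # Penalize the gap between the batch's mean confidence and its
    # accuracy -- a simple train-time calibration surrogate.
    probs = F.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    acc = (pred == labels).float().mean()
    return (conf.mean() - acc).abs()

def total_loss(logits, labels, beta=0.3):
    # Task cross-entropy plus the auxiliary calibration term.
    return F.cross_entropy(logits, labels) + beta * calibration_aux_loss(logits, labels)
```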
- Adapting to Continuous Covariate Shift via Online Density Ratio Estimation [64.8027122329609]
Dealing with distribution shifts is one of the central challenges for modern machine learning.
We propose an online method that can appropriately reuse historical information.
Our density ratio estimation method is proven effective, enjoying a dynamic regret bound.
arXiv Detail & Related papers (2023-02-06T04:03:33Z)
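The summary does not spell out the estimator, but a standard way to estimate density ratios, shown here as a generic sketch rather than the paper's online method, is to train a probabilistic classifier to separate source samples from target samples:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio(source_x, target_x):
    # Estimate w(x) = p_target(x) / p_source(x) via a classifier:
    # w(x) = (n_s / n_t) * P(target | x) / P(source | x).
    X = np.vstack([source_x, target_x])
    y = np.concatenate([np.zeros(len(source_x)), np.ones(len(target_x))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    p = clf.predict_proba(source_x)[:, 1]       # P(target | x)
    p = np.clip(p, 1e-6, 1 - 1e-6)              # guard the ratio
    return (len(source_x) / len(target_x)) * p / (1.0 - p)
```

The resulting weights can then be used for importance-weighted training as the distribution drifts.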
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts than other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
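As a simplified sketch of the general recipe (not the paper's three specific ideas), a small network can parameterize likelihood ratios as normalized example weights, with the adversary and the model updated in alternation; all names, shapes, and hyperparameters here are illustrative.

```python
import torch
import torch.nn as nn

model = nn.Linear(16, 2)        # toy task model
adversary = nn.Linear(16, 1)    # parameterizes (log) likelihood ratios
opt_m = torch.optim.SGD(model.parameters(), lr=1e-2)
opt_a = torch.optim.SGD(adversary.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss(reduction="none")

def dro_step(x, y):
    # Softmax over adversary scores gives normalized example weights,
    # i.e. a parametric likelihood ratio over the batch.
    losses = ce(model(x), y)
    w = torch.softmax(adversary(x).squeeze(-1), dim=0)
    # Adversary ascends the weighted loss (weights drift to hard examples).
    opt_a.zero_grad()
    (-(w * losses.detach()).sum()).backward()
    opt_a.step()
    # Model descends the loss under the updated, frozen weights.
    w = torch.softmax(adversary(x).squeeze(-1), dim=0).detach()
    opt_m.zero_grad()
    (w * losses).sum().backward()
    opt_m.step()
```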
- CMW-Net: Learning a Class-Aware Sample Weighting Mapping for Robust Deep Learning [55.733193075728096]
Modern deep neural networks can easily overfit to biased training data containing corrupted labels or class imbalance.
Sample re-weighting methods are widely used to alleviate this data bias.
We propose a meta-model capable of adaptively learning an explicit weighting scheme directly from data.
arXiv Detail & Related papers (2022-02-11T13:49:51Z)
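As a minimal sketch of the explicit-weighting idea (CMW-Net's actual class-aware architecture and its meta-learned training loop are more involved), a tiny net can map each sample's loss value to a weight; the meta step on a small clean validation set is omitted here.

```python
import torch
import torch.nn as nn

# A tiny weighting net: maps a sample's loss value to a weight in (0, 1).
# In CMW-Net-style methods this net is itself meta-trained on clean
# validation data; that outer loop is omitted for brevity.
weight_net = nn.Sequential(nn.Linear(1, 32), nn.ReLU(),
                           nn.Linear(32, 1), nn.Sigmoid())

def weighted_loss(per_sample_loss):
    # Down-weight samples with suspicious (e.g. very large) losses.
    w = weight_net(per_sample_loss.detach().unsqueeze(-1)).squeeze(-1)
    return (w * per_sample_loss).sum() / w.sum().clamp_min(1e-8)
```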
- Output-weighted and relative entropy loss functions for deep learning precursors of extreme events [0.0]
We propose a novel loss function, the adjusted output-weighted loss, and extend the applicability of relative-entropy-based loss functions to systems with low-dimensional output.
The proposed functions are tested on several dynamical systems exhibiting extreme events and are shown to significantly improve the accuracy of extreme event predictions.
arXiv Detail & Related papers (2021-12-01T21:05:54Z)
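This connects directly to the main paper's goal of predicting rare extreme values. A generic instance of output weighting (an assumption-level sketch, not the paper's adjusted loss) weights each error by the inverse estimated density of its target value:

```python
import torch

def output_weighted_mse(pred, target, target_pdf):
    # MSE weighted by the inverse probability density of the observed
    # output, so rare (extreme) outputs dominate the loss.
    # `target_pdf(y)` is any density estimate of the output variable,
    # e.g. a kernel density fit on the training targets.
    w = 1.0 / target_pdf(target).clamp_min(1e-6)
    w = w / w.mean()          # normalize for stable step sizes
    return (w * (pred - target) ** 2).mean()
```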
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.