Bi-cLSTM: Residual-Corrected Bidirectional LSTM for Aero-Engine RUL Estimation
- URL: http://arxiv.org/abs/2603.00745v1
- Date: Sat, 28 Feb 2026 17:44:06 GMT
- Title: Bi-cLSTM: Residual-Corrected Bidirectional LSTM for Aero-Engine RUL Estimation
- Authors: Rafi Hassan Chowdhury, Nabil Daiyan, Faria Ahmed, Md Redwan Iqbal, Morsalin Sheikh
- Abstract summary: We propose a Bidirectional Residual Corrected LSTM (Bi-cLSTM) model for robust RUL estimation. The proposed architecture combines bidirectional temporal modeling with an adaptive residual correction mechanism to iteratively refine sequence representations. Extensive experiments on all four subsets of the NASA C-MAPSS dataset demonstrate that the proposed Bi-cLSTM consistently outperforms LSTM-based baselines.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate Remaining Useful Life (RUL) prediction is a key requirement for effective Prognostics and Health Management (PHM) in safety-critical systems such as aero-engines. Existing deep learning approaches, particularly LSTM-based models, often struggle to generalize across varying operating conditions and are sensitive to noise in multivariate sensor data. To address these challenges, we propose a novel Bidirectional Residual Corrected LSTM (Bi-cLSTM) model for robust RUL estimation. The proposed architecture combines bidirectional temporal modeling with an adaptive residual correction mechanism to iteratively refine sequence representations. In addition, we introduce a condition-aware preprocessing pipeline incorporating regime-based normalization, feature selection, and exponential smoothing to improve robustness under complex operating environments. Extensive experiments on all four subsets of the NASA C-MAPSS dataset demonstrate that the proposed Bi-cLSTM consistently outperforms LSTM-based baselines and achieves competitive state-of-the-art performance, particularly in challenging multi-condition scenarios. These results highlight the effectiveness of combining bidirectional temporal learning with residual correction for reliable RUL prediction.
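The abstract's condition-aware preprocessing pipeline (regime-based normalization followed by exponential smoothing) can be sketched as below. The paper does not give implementation details, so the regime labels, the smoothing factor `alpha`, and the toy sensor values are illustrative assumptions, not the authors' actual pipeline.

```python
# Hedged sketch of a condition-aware preprocessing pipeline in the
# spirit of the abstract: each sensor reading is z-score normalized
# using the statistics of its operating regime, then exponentially
# smoothed to suppress noise.
from collections import defaultdict

def regime_normalize(readings, regimes):
    """Z-score each reading with the mean/std of its operating regime."""
    groups = defaultdict(list)
    for value, regime in zip(readings, regimes):
        groups[regime].append(value)
    stats = {}
    for regime, values in groups.items():
        mean = sum(values) / len(values)
        std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
        stats[regime] = (mean, std if std > 0 else 1.0)  # guard flat sensors
    return [(v - stats[r][0]) / stats[r][1]
            for v, r in zip(readings, regimes)]

def exponential_smoothing(series, alpha=0.3):
    """Simple exponential smoothing: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    smoothed = [series[0]]
    for x in series[1:]:
        smoothed.append(alpha * x + (1 - alpha) * smoothed[-1])
    return smoothed

# Toy usage: one sensor observed under two operating regimes whose
# baselines differ by an order of magnitude.
readings = [10.0, 10.5, 9.8, 50.2, 49.7, 50.5]
regimes = [0, 0, 0, 1, 1, 1]
normalized = regime_normalize(readings, regimes)
smoothed = exponential_smoothing(normalized, alpha=0.5)
```

Normalizing per regime rather than globally keeps regime shifts from dominating the signal, which is why such pipelines tend to help in the multi-condition C-MAPSS subsets (FD002/FD004).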
Related papers
- Contextual and Seasonal LSTMs for Time Series Anomaly Detection [49.50689313712684]
We propose a novel prediction-based framework named Contextual and Seasonal LSTMs (CS-LSTMs). CS-LSTMs are built upon a noise decomposition strategy and jointly leverage contextual dependencies and seasonal patterns. They consistently outperform state-of-the-art methods, highlighting their effectiveness and practical value in robust time series anomaly detection.
arXiv Detail & Related papers (2026-02-10T11:46:15Z) - Beyond Wave Variables: A Data-Driven Ensemble Approach for Enhanced Teleoperation Transparency and Stability [2.9802157303754844]
This article presents a data-driven hybrid framework that replaces the conventional wave-variable transform with an ensemble of three advanced sequence models. The results show that our ensemble achieves a transparency comparable to the baseline wave-variable system under varying delays and noise.
arXiv Detail & Related papers (2025-12-09T10:06:05Z) - Steering Vision-Language-Action Models as Anti-Exploration: A Test-Time Scaling Approach [78.4812458793128]
We propose TACO, a test-time-scaling framework that applies a lightweight pseudo-count estimator as a high-fidelity verifier of action chunks. Our method resembles the classical anti-exploration principle in offline reinforcement learning (RL), and, being gradient-free, it offers significant computational savings.
arXiv Detail & Related papers (2025-12-02T14:42:54Z) - Leveraging Duration Pseudo-Embeddings in Multilevel LSTM and GCN Hypermodels for Outcome-Oriented PPM [4.120576565537633]
Existing deep learning models for Predictive Process Monitoring (PPM) struggle with temporal irregularities. We propose a dual input neural network strategy that separates event and sequence attributes, using a duration-aware pseudo-embedding matrix. Our results demonstrate the benefits of explicit temporal encoding and provide a flexible design for robust, real-world PPM applications.
arXiv Detail & Related papers (2025-11-24T07:06:08Z) - Grounded Test-Time Adaptation for LLM Agents [75.62784644919803]
Large language model (LLM)-based agents struggle to generalize to novel and complex environments. We propose two strategies for adapting LLM agents by leveraging environment-specific information available during deployment.
arXiv Detail & Related papers (2025-11-06T22:24:35Z) - Flow Matching for Robust Simulation-Based Inference under Model Misspecification [11.172752919335394]
Flow Matching Corrected Posterior Estimation is a framework that refines simulation-trained posterior estimators using a small set of real calibration samples. We show that our proposal consistently mitigates the effects of misspecification, delivering improved inference accuracy and uncertainty calibration compared to standard SBI baselines.
arXiv Detail & Related papers (2025-09-27T16:10:53Z) - Inductive Domain Transfer In Misspecified Simulation-Based Inference [29.26298096319145]
We propose a fully inductive and amortized SBI framework that integrates calibration and distributional alignment into a single, end-to-end trainable model. Our approach matches or surpasses the performance of RoPE, as well as other standard SBI and non-SBI estimators.
arXiv Detail & Related papers (2025-08-21T14:06:42Z) - Taming Polysemanticity in LLMs: Provable Feature Recovery via Sparse Autoencoders [50.52694757593443]
Existing SAE training algorithms often lack rigorous mathematical guarantees and suffer from practical limitations. We first propose a novel statistical framework for the feature recovery problem, which includes a new notion of feature identifiability. We then introduce a new SAE training algorithm based on "bias adaptation", a technique that adaptively adjusts neural network bias parameters to ensure appropriate activation sparsity.
arXiv Detail & Related papers (2025-06-16T20:58:05Z) - Stochastic Primal-Dual Double Block-Coordinate for Two-way Partial AUC Maximization [45.99743804547533]
Two-way partial AUC (TPAUC) is a critical performance metric for binary classification with imbalanced data. Existing algorithms for TPAUC optimization remain under-explored. We introduce two innovative double block-coordinate algorithms for TPAUC optimization.
arXiv Detail & Related papers (2025-05-28T03:55:05Z) - BiT-MamSleep: Bidirectional Temporal Mamba for EEG Sleep Staging [9.917709200378217]
BiT-MamSleep is a novel architecture that integrates the Triple-Resolution CNN (TRCNN) for efficient multi-scale feature extraction.
BiT-MamSleep incorporates an Adaptive Feature Recalibration (AFR) module and a temporal enhancement block to dynamically refine feature importance.
Experiments on four public datasets demonstrate that BiT-MamSleep significantly outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-11-03T14:49:11Z) - Maximum Likelihood Learning of Unnormalized Models for Simulation-Based Inference [44.281860162298564]
We introduce two synthetic likelihood methods for Simulation-Based Inference.
We learn a conditional energy-based model (EBM) of the likelihood using synthetic data generated by the simulator.
We demonstrate the properties of both methods on a range of synthetic datasets, and apply them to a neuroscience model of a neural circuit in the crab.
arXiv Detail & Related papers (2022-10-26T14:38:24Z) - Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs) represented by long short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z) - Learning representations with end-to-end models for improved remaining useful life prognostics [64.80885001058572]
The Remaining Useful Life (RUL) of equipment is defined as the duration between the current time and its failure.
We propose an end-to-end deep learning model based on multi-layer perceptron and long short-term memory (LSTM) layers to predict the RUL. We discuss how the proposed end-to-end model achieves such good results and compare it to other deep learning and state-of-the-art methods.
arXiv Detail & Related papers (2021-04-11T16:45:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.