HybridoNet-Adapt: A Domain-Adapted Framework for Accurate Lithium-Ion Battery RUL Prediction
- URL: http://arxiv.org/abs/2503.21392v2
- Date: Fri, 18 Apr 2025 13:22:18 GMT
- Title: HybridoNet-Adapt: A Domain-Adapted Framework for Accurate Lithium-Ion Battery RUL Prediction
- Authors: Khoa Tran, Bao Huynh, Tri Le, Lam Pham, Vy-Rin Nguyen, Hung-Cuong Trinh, Duong Tran Anh
- Abstract summary: We propose a novel RUL prediction framework that incorporates a domain adaptation (DA) technique. Our framework integrates a signal preprocessing pipeline including noise reduction, feature extraction, and normalization with a robust deep learning model. Experimental results show that HybridoNet Adapt significantly outperforms traditional models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate prediction of the Remaining Useful Life (RUL) in Lithium ion battery (LIB) health management systems is essential for ensuring operational reliability and safety. However, many existing methods assume that training and testing data follow the same distribution, limiting their ability to generalize to unseen target domains. To address this, we propose a novel RUL prediction framework that incorporates a domain adaptation (DA) technique. Our framework integrates a signal preprocessing pipeline including noise reduction, feature extraction, and normalization with a robust deep learning model called HybridoNet Adapt. The model features a combination of LSTM, Multihead Attention, and Neural ODE layers for feature extraction, followed by two predictor modules with trainable trade-off parameters. To improve generalization, we adopt a DA strategy inspired by Domain Adversarial Neural Networks (DANN), replacing adversarial loss with Maximum Mean Discrepancy (MMD) to learn domain-invariant features. Experimental results show that HybridoNet Adapt significantly outperforms traditional models such as XGBoost and Elastic Net, as well as deep learning baselines like Dual input DNN, demonstrating its potential for scalable and reliable battery health management (BHM).
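The following is a minimal PyTorch sketch, not the authors' implementation, of the two ingredients the abstract describes: a hybrid feature extractor stacking LSTM, multi-head attention, and an ODE-style block; two predictor modules mixed by a trainable trade-off parameter; and an RBF-kernel MMD penalty used in place of DANN's adversarial loss to encourage domain-invariant features. Layer sizes, kernel bandwidths, the fixed-step Euler integrator, and the loss weighting are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

def mmd_rbf(x, y, bandwidths=(1.0, 2.0, 4.0)):
    # Multi-kernel RBF estimate of MMD^2 between source features x and target
    # features y (biased estimator: diagonal terms are included for brevity).
    xx = torch.cdist(x, x).pow(2)
    yy = torch.cdist(y, y).pow(2)
    xy = torch.cdist(x, y).pow(2)
    k = lambda d: sum(torch.exp(-d / (2.0 * b * b)) for b in bandwidths)
    return k(xx).mean() + k(yy).mean() - 2.0 * k(xy).mean()

class EulerODEBlock(nn.Module):
    # Fixed-step Euler integrator standing in for a full Neural ODE solver.
    def __init__(self, dim, steps=4):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))
        self.steps = steps
    def forward(self, h):
        dt = 1.0 / self.steps
        for _ in range(self.steps):
            h = h + dt * self.f(h)
        return h

class HybridRULModel(nn.Module):
    def __init__(self, in_dim=8, hidden=64, heads=4):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.ode = EulerODEBlock(hidden)
        self.head_a = nn.Linear(hidden, 1)              # predictor module 1
        self.head_b = nn.Linear(hidden, 1)              # predictor module 2
        self.alpha = nn.Parameter(torch.tensor(0.5))    # trainable trade-off between heads
    def features(self, x):
        h, _ = self.lstm(x)              # (batch, time, hidden)
        h, _ = self.attn(h, h, h)        # self-attention across cycles/time steps
        return self.ode(h[:, -1, :])     # last-step representation refined by the ODE block
    def forward(self, x):
        z = self.features(x)
        return self.alpha * self.head_a(z) + (1.0 - self.alpha) * self.head_b(z)

# One illustrative training step: supervised RUL regression on labeled source
# batteries plus an MMD penalty aligning source and unlabeled target features.
model = HybridRULModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xs, ys = torch.randn(16, 50, 8), torch.randn(16, 1)   # dummy labeled source batch
xt = torch.randn(16, 50, 8)                           # dummy unlabeled target batch
loss = nn.functional.mse_loss(model(xs), ys) + 0.1 * mmd_rbf(model.features(xs), model.features(xt))
opt.zero_grad(); loss.backward(); opt.step()

In the full framework, the MMD weight and the preprocessing steps (noise reduction, feature extraction, normalization) would be tuned to the battery datasets; this sketch only shows how a domain-invariance term can plug into the regression objective.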
Related papers
- BHViT: Binarized Hybrid Vision Transformer [53.38894971164072]
Model binarization has made significant progress in enabling real-time and energy-efficient computation for convolutional neural networks (CNN).
We propose BHViT, a binarization-friendly hybrid ViT architecture and its full binarization model with the guidance of three important observations.
Our proposed algorithm achieves SOTA performance among binary ViT methods.
arXiv Detail & Related papers (2025-03-04T08:35:01Z) - Generative Distribution Prediction: A Unified Approach to Multimodal Learning [4.3108820946281945]
We introduce Generative Distribution Prediction (GDP) to enhance predictive performance across structured and unstructured modalities. GDP is model-agnostic, compatible with any high-fidelity generative model, and supports transfer learning for domain adaptation. We empirically validate GDP on four supervised learning tasks: tabular data prediction, question answering, image captioning, and adaptive quantile regression, demonstrating its versatility and effectiveness across diverse domains.
arXiv Detail & Related papers (2025-02-10T22:30:35Z) - Neural Conformal Control for Time Series Forecasting [54.96087475179419]
We introduce a neural network conformal prediction method for time series that enhances adaptivity in non-stationary environments. Our approach acts as a neural controller designed to achieve desired target coverage, leveraging auxiliary multi-view data with neural network encoders. We empirically demonstrate significant improvements in coverage and probabilistic accuracy, and find that our method is the only one that combines good calibration with consistency in prediction intervals.
arXiv Detail & Related papers (2024-12-24T03:56:25Z) - What Has Been Overlooked in Contrastive Source-Free Domain Adaptation: Leveraging Source-Informed Latent Augmentation within Neighborhood Context [28.634315143647385]
Source-free domain adaptation (SFDA) involves adapting a model originally trained using a labeled dataset to perform effectively on an unlabeled dataset. This adaptation is especially crucial when significant disparities in data distributions exist between the two domains. We introduce a straightforward yet highly effective latent augmentation method tailored for contrastive SFDA.
arXiv Detail & Related papers (2024-12-18T20:09:46Z) - Unveiling the Superior Paradigm: A Comparative Study of Source-Free Domain Adaptation and Unsupervised Domain Adaptation [52.36436121884317]
We show that Source-Free Domain Adaptation (SFDA) generally outperforms Unsupervised Domain Adaptation (UDA) in real-world scenarios.
SFDA offers advantages in time efficiency, storage requirements, targeted learning objectives, reduced risk of negative transfer, and increased robustness against overfitting.
We propose a novel weight estimation method that effectively integrates available source data into multi-SFDA approaches.
arXiv Detail & Related papers (2024-11-24T13:49:29Z) - Distributional Refinement Network: Distributional Forecasting via Deep Learning [0.8142555609235358]
A key task in actuarial modelling involves modelling the distributional properties of losses.
We propose a Distributional Refinement Network (DRN), which combines an inherently interpretable baseline model with a flexible neural network.
DRN captures varying effects of features across all quantiles, improving predictive performance while maintaining adequate interpretability.
arXiv Detail & Related papers (2024-06-03T05:14:32Z) - Flexible Parallel Neural Network Architecture Model for Early Prediction of Lithium Battery Life [0.8530934084017966]
The early prediction of battery life (EPBL) is vital for enhancing the efficiency and extending the lifespan of lithium batteries.
Traditional models with fixed architectures often encounter underfitting or overfitting issues due to the diverse data distributions in different EPBL tasks.
An interpretable deep learning model of flexible parallel neural network (FPNN) is proposed, which includes an InceptionBlock, a 3D convolutional neural network (CNN), a 2D CNN, and a dual-stream network.
The proposed model effectively extracts electrochemical features from video-like formatted data using the 3D CNN and achieves advanced multi-scale feature abstraction.
arXiv Detail & Related papers (2024-01-29T12:20:17Z) - Linear Combination of Exponential Moving Averages for Wireless Channel Prediction [2.34863357088666]
In this work, prediction models based on the exponential moving average (EMA) are investigated in depth.
A new model that we called EMA linear combination (ELC) is introduced, explained, and evaluated experimentally.
arXiv Detail & Related papers (2023-12-13T07:44:05Z) - Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z) - Non-Generative Energy Based Models [3.1447898427012473]
Energy-based models (EBM) have become increasingly popular within computer vision.
We propose a non-generative training approach, Non-Generative EBM (NG-EBM).
We show that our NG-EBM training strategy retains many of the benefits of EBM in calibration, out-of-distribution detection, and adversarial resistance.
arXiv Detail & Related papers (2023-04-03T18:47:37Z) - Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs) represented by long short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z) - RAIN: RegulArization on Input and Network for Black-Box Domain Adaptation [80.03883315743715]
Source-free domain adaptation adapts the source-trained model to the target domain without exposing the source data.
This paradigm is still at risk of data leakage due to adversarial attacks on the source model.
We propose a novel approach named RAIN (RegulArization on Input and Network) for Black-Box domain adaptation from both input-level and network-level regularization.
arXiv Detail & Related papers (2022-08-22T18:18:47Z) - LAMA-Net: Unsupervised Domain Adaptation via Latent Alignment and Manifold Learning for RUL Prediction [0.0]
We propose LAMA-Net, an encoder-decoder based model (Transformer) with an induced bottleneck, Latent Alignment using Maximum Mean Discrepancy (MMD), and manifold learning.
The proposed method offers a promising approach to perform domain adaptation in RUL prediction.
arXiv Detail & Related papers (2022-08-17T16:28:20Z) - Enhanced physics-constrained deep neural networks for modeling vanadium redox flow battery [62.997667081978825]
We propose an enhanced version of the physics-constrained deep neural network (PCDNN) approach to provide high-accuracy voltage predictions.
The ePCDNN can accurately capture the voltage response throughout the charge-discharge cycle, including the tail region of the voltage discharge curve.
arXiv Detail & Related papers (2022-03-03T19:56:24Z) - Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in principle for adaptive integration of different modalities and produces a trustworthy regression result.
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
arXiv Detail & Related papers (2021-11-11T14:28:12Z) - A Variational Bayesian Approach to Learning Latent Variables for Acoustic Knowledge Transfer [55.20627066525205]
We propose a variational Bayesian (VB) approach to learning distributions of latent variables in deep neural network (DNN) models.
Our proposed VB approach can obtain good improvements on target devices, and consistently outperforms 13 state-of-the-art knowledge transfer algorithms.
arXiv Detail & Related papers (2021-10-16T15:54:01Z) - Generalized multiscale feature extraction for remaining useful life prediction of bearings with generative adversarial networks [4.988898367111902]
Bearing is a key component in industrial machinery and its failure may lead to unwanted downtime and economic loss.
It is necessary to predict the remaining useful life (RUL) of bearings.
We propose a novel generalized multiscale feature extraction method with generative adversarial networks.
arXiv Detail & Related papers (2021-09-26T07:11:55Z) - Exploring Gaussian mixture model framework for speaker adaptation of deep neural network acoustic models [3.867363075280544]
We investigate the GMM-derived (GMMD) features for adaptation of deep neural network (DNN) acoustic models.
We explore fusion of the adapted GMMD features with conventional features, such as bottleneck and MFCC features, in two different neural network architectures.
arXiv Detail & Related papers (2020-03-15T18:56:19Z)