Toward Decision-Oriented Prognostics: An Integrated Estimate-Optimize Framework for Predictive Maintenance
- URL: http://arxiv.org/abs/2506.19698v1
- Date: Tue, 24 Jun 2025 15:10:15 GMT
- Title: Toward Decision-Oriented Prognostics: An Integrated Estimate-Optimize Framework for Predictive Maintenance
- Authors: Zhuojun Xie, Adam Abdin, Yiping Fang
- Abstract summary: This paper proposes a PdM framework in which sensor-driven prognostics inform decision-making under economic trade-offs. We first demonstrate that in the traditional estimate-then-optimize (ETO) framework, errors in probabilistic prediction can result in inconsistent and suboptimal maintenance decisions. We propose an integrated estimate-optimize (IEO) framework that jointly tunes predictive models while directly optimizing for maintenance outcomes.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent research increasingly integrates machine learning (ML) into predictive maintenance (PdM) to reduce operational and maintenance costs in data-rich operational settings. However, uncertainty due to model misspecification continues to limit widespread industrial adoption. This paper proposes a PdM framework in which sensor-driven prognostics inform decision-making under economic trade-offs within a finite decision space. We investigate two key questions: (1) Does higher predictive accuracy necessarily lead to better maintenance decisions? (2) If not, how can the impact of prediction errors on downstream maintenance decisions be mitigated? We first demonstrate that in the traditional estimate-then-optimize (ETO) framework, errors in probabilistic prediction can result in inconsistent and suboptimal maintenance decisions. To address this, we propose an integrated estimate-optimize (IEO) framework that jointly tunes predictive models while directly optimizing for maintenance outcomes. We establish theoretical finite-sample guarantees on decision consistency under standard assumptions. Specifically, we develop a stochastic perturbation gradient descent algorithm suitable for small run-to-failure datasets. Empirical evaluations on a turbofan maintenance case study show that the IEO framework reduces average maintenance regret by up to 22% compared to ETO. This study provides a principled approach to managing prediction errors in data-driven PdM. By aligning prognostic model training with maintenance objectives, the IEO framework improves both robustness under model misspecification and decision quality. The improvement is particularly pronounced when the decision-making policy is misaligned with the decision-maker's target. These findings support more reliable maintenance planning in uncertain operational environments.
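To make the ETO/IEO contrast concrete, below is a minimal, self-contained sketch of the integrated idea under toy assumptions: a logistic failure-probability model, a two-level preventive/corrective cost structure, a three-point replacement-age decision space, and an SPSA-style two-point perturbation gradient. None of these choices are taken from the paper; the authors' prognostic model, cost structure, and exact stochastic perturbation algorithm may differ. The sketch only illustrates the core move of updating model parameters against realized maintenance regret rather than prediction error.

```python
# Illustrative sketch (not the authors' code) of integrated estimate-optimize (IEO):
# tune prognostic parameters against downstream maintenance regret using a
# two-point stochastic perturbation (SPSA-style) gradient estimate.
import numpy as np

rng = np.random.default_rng(42)

C_PREV, C_CORR = 1.0, 5.0                # assumed preventive vs corrective costs
ACTIONS = np.array([10.0, 20.0, 30.0])   # finite decision space: planned replacement ages

def failure_prob(theta, x, t):
    """Toy prognostic: P(failure before age t | sensor summary x)."""
    z = theta[0] * x + theta[1] * (t / 10.0) - 3.0
    return 1.0 / (1.0 + np.exp(-z))

def expected_cost_rate(theta, x, t):
    """Model-based expected cost per unit time of replacing at age t."""
    p = failure_prob(theta, x, t)
    return (p * C_CORR + (1.0 - p) * C_PREV) / t

def realized_cost_rate(t_replace, t_fail):
    """Realized cost per unit time: corrective if the unit fails first."""
    return C_CORR / t_fail if t_fail <= t_replace else C_PREV / t_replace

def avg_regret(theta, data):
    """Average realized cost of the policy induced by theta minus the oracle cost."""
    total = 0.0
    for x, t_fail in data:
        a = ACTIONS[np.argmin([expected_cost_rate(theta, x, t) for t in ACTIONS])]
        oracle = min(realized_cost_rate(a_, t_fail) for a_ in ACTIONS)
        total += realized_cost_rate(a, t_fail) - oracle
    return total / len(data)

# Small synthetic run-to-failure dataset: (degradation indicator, failure age)
data = [(x, 12.0 + 18.0 * (1.0 - x) + rng.normal(scale=2.0))
        for x in rng.uniform(0.0, 1.0, size=30)]

theta = np.array([1.0, 1.0])
step, c = 0.05, 0.1
for _ in range(300):
    delta = rng.choice([-1.0, 1.0], size=theta.shape)          # Rademacher perturbation
    g = ((avg_regret(theta + c * delta, data)
          - avg_regret(theta - c * delta, data)) / (2.0 * c)) * delta
    theta -= step * g                                           # descend on regret, not MSE

print("tuned theta:", theta, "average regret:", avg_regret(theta, data))
```

An ETO baseline, by contrast, would fit theta to the run-to-failure data by a purely statistical criterion (e.g., maximum likelihood) and only then plug it into the same decision rule, which is where prediction errors can translate into inconsistent maintenance decisions.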
Related papers
- Treatment Effect Estimation for Optimal Decision-Making [65.30942348196443]
We study optimal decision-making based on two-stage CATE estimators. We propose a novel two-stage learning objective that retargets the CATE to balance CATE estimation error and decision performance.
arXiv Detail & Related papers (2025-05-19T13:24:57Z)
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation. We propose methods tailored to the unique properties of perception and decision-making. We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Uncertainty-Penalized Direct Preference Optimization [52.387088396044206]
We develop a pessimistic framework for DPO by introducing preference uncertainty penalization schemes.
The penalization serves as a correction to the loss which attenuates the loss gradient for uncertain samples.
We show improved overall performance compared to vanilla DPO, as well as better completions on prompts from high-uncertainty chosen/rejected responses.
arXiv Detail & Related papers (2024-10-26T14:24:37Z)
- End-to-End Conformal Calibration for Optimization Under Uncertainty [32.844953018302874]
This paper develops an end-to-end framework to learn the uncertainty estimates for conditional optimization.
In addition, we propose to represent arbitrary convex uncertainty sets with partially convex neural networks.
Our approach consistently improves upon the two-stage estimate-then-optimize baseline.
arXiv Detail & Related papers (2024-09-30T17:38:27Z)
- Anatomy of Machines for Markowitz: Decision-Focused Learning for Mean-Variance Portfolio Optimization [27.791742749950203]
Decision-Focused Learning can integrate prediction and optimization to improve decision-making outcomes.
MSE treats the errors of all assets equally, but how does DFL reduce errors of different assets differently?
This study aims to investigate how DFL adjusts stock return prediction models to optimize decisions in MVO.
arXiv Detail & Related papers (2024-09-15T10:37:11Z)
- Training, Architecture, and Prior for Deterministic Uncertainty Methods [33.45069308137142]
This work investigates important design choices in Deterministic Uncertainty Methods (DUMs).
We show that training schemes decoupling the core architecture and the uncertainty head can significantly improve uncertainty performance.
Contrary to other Bayesian models, we show that the prior defined by DUMs does not have a strong effect on the final performance.
arXiv Detail & Related papers (2023-03-10T09:00:52Z)
- Online Learning and Optimization for Queues with Unknown Demand Curve and Service Distribution [26.720986177499338]
We investigate an optimization problem in a queueing system where the service provider selects the optimal service fee p and service capacity mu.
We develop an online learning framework that automatically incorporates the parameter estimation errors in the solution prescription process.
arXiv Detail & Related papers (2023-03-06T08:47:40Z)
- When Demonstrations Meet Generative World Models: A Maximum Likelihood Framework for Offline Inverse Reinforcement Learning [62.00672284480755]
This paper aims to recover the structure of rewards and environment dynamics that underlie observed actions in a fixed, finite set of demonstrations from an expert agent.
Accurate models of expertise in executing a task have applications in safety-sensitive domains such as clinical decision making and autonomous driving.
arXiv Detail & Related papers (2023-02-15T04:14:20Z)
- Bridging POMDPs and Bayesian decision making for robust maintenance planning under model uncertainty: An application to railway systems [0.7046417074932257]
We present a framework to estimate POMDP transition and observation model parameters directly from available data.
We then form and solve the POMDP problem by exploiting the inferred distributions.
We successfully apply our approach on maintenance planning for railway track assets.
arXiv Detail & Related papers (2022-12-15T16:09:47Z)
- Uncertainty-aware Remaining Useful Life predictor [57.74855412811814]
Remaining Useful Life (RUL) estimation is the problem of inferring how long a certain industrial asset can be expected to operate.
In this work, we consider Deep Gaussian Processes (DGPs) as possible solutions to the aforementioned limitations; a minimal uncertainty-aware RUL sketch is shown after this list.
The performance of the algorithms is evaluated on the N-CMAPSS dataset from NASA for aircraft engines.
arXiv Detail & Related papers (2021-04-08T08:50:44Z)
- Right Decisions from Wrong Predictions: A Mechanism Design Alternative to Individual Calibration [107.15813002403905]
Decision makers often need to rely on imperfect probabilistic forecasts.
We propose a compensation mechanism ensuring that the forecasted utility matches the actually accrued utility.
We demonstrate an application showing how passengers could confidently optimize individual travel plans based on flight delay probabilities.
arXiv Detail & Related papers (2020-11-15T08:22:39Z)
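As a companion to the "Uncertainty-aware Remaining Useful Life predictor" entry above, the following is a minimal sketch of producing a predictive mean and standard deviation for RUL. It uses a single-layer Gaussian process from scikit-learn on synthetic features rather than the Deep Gaussian Processes and N-CMAPSS data used in that work; the feature construction, kernel choice, and toy RUL target are assumptions made purely for illustration.

```python
# Minimal uncertainty-aware RUL sketch: a shallow GP stand-in for the Deep
# Gaussian Processes used in the cited work. Data and features are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                                    # per-cycle sensor summaries (toy)
rul = 100.0 - 25.0 * X[:, 0] + rng.normal(scale=3.0, size=200)   # toy RUL target

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X, rul)

mean, std = gp.predict(X[:5], return_std=True)   # predictive mean and uncertainty
for m, s in zip(mean, std):
    print(f"RUL ~ {m:.1f} cycles (+/- {s:.1f})")  # the std is what downstream maintenance decisions can exploit
```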