Just Another Method to Compute MTTF from Continuous Time Markov Chain
- URL: http://arxiv.org/abs/2202.00674v2
- Date: Thu, 3 Feb 2022 02:38:09 GMT
- Title: Just Another Method to Compute MTTF from Continuous Time Markov Chain
- Authors: Eduardo M. Vasconcelos
- Abstract summary: The Meantime to Failure is a statistic used to determine how much time a system spends to enter one of its absorption states.
This work presents a method to obtain the Meantime to Failure from a Continuous Time Markov Chain models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Mean Time to Failure (MTTF) is a statistic that measures how
long a system takes, on average, to enter one of its absorbing states. This
statistic can be used in most areas of knowledge. In engineering, for example,
it can serve as a measure of equipment reliability, and in business, as a
measure of process performance. This work presents a method to obtain the MTTF
from a Continuous Time Markov Chain model. The method is intuitive and simple
to implement, since it consists of solving a system of linear equations.
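To make the linear-system idea concrete, the sketch below computes the mean time to absorption of a small CTMC from its generator matrix. It is a generic illustration of the standard approach, not necessarily the exact formulation of the paper; the generator Q and the choice of transient states are made-up example values.

```python
import numpy as np

# Minimal sketch: mean time to absorption of a CTMC obtained by solving a
# linear system.  Illustrative only; not necessarily the paper's formulation.

# Hypothetical 3-state example: states 0 and 1 are operational (transient),
# state 2 is the absorbing failure state.
Q = np.array([
    [-0.3,  0.2,  0.1],   # rates out of state 0
    [ 0.4, -0.9,  0.5],   # rates out of state 1
    [ 0.0,  0.0,  0.0],   # absorbing failure state
])

transient = [0, 1]                      # indices of the non-absorbing states
Q_T = Q[np.ix_(transient, transient)]   # generator restricted to transient states

# The expected times to absorption m solve  Q_T m = -1.
m = np.linalg.solve(Q_T, -np.ones(len(transient)))

print(m)   # m[i] = MTTF when the system starts in transient state i
```

Restricting the generator to the transient states and solving Q_T m = -1 gives, for each operational starting state, the expected time until the failure state is reached.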
Related papers
- Distances for Markov chains from sample streams [16.443304244634767]
Bisimulation metrics are powerful tools for measuring similarities between processes, and specifically Markov chains.
Recent advances have uncovered that bisimulation metrics are, in fact, optimal-transport distances, which has enabled the development of fast algorithms for computing such metrics with provable accuracy and runtime guarantees.
These algorithms, however, assume access to an explicit transition model, which is often an impractical assumption in most real-world scenarios, where typically only sample trajectories are available.
We propose a new optimization method that addresses this limitation and estimates bisimulation metrics based on sample access, without requiring explicit transition models.
arXiv Detail & Related papers (2025-05-23T15:09:04Z) - Uncertainty quantification for Markov chains with application to temporal difference learning [63.49764856675643]
We develop novel high-dimensional concentration inequalities and Berry-Esseen bounds for vector- and matrix-valued functions of Markov chains.
We analyze the TD learning algorithm, a widely used method for policy evaluation in reinforcement learning.
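For context on the algorithm analysed in that paper, the snippet below is a minimal sketch of tabular TD(0) policy evaluation on a two-state Markov chain; the chain, rewards, and hyperparameters are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal sketch of tabular TD(0) policy evaluation on a Markov chain.
# The chain, rewards and step size below are made-up example values.

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],     # transition matrix of the Markov chain
              [0.2, 0.8]])
r = np.array([1.0, 0.0])      # reward received in each state
gamma = 0.95                  # discount factor
alpha = 0.01                  # step size

V = np.zeros(2)               # value-function estimate
s = 0
for _ in range(50_000):
    s_next = rng.choice(2, p=P[s])
    # TD(0) update: move V[s] toward the bootstrapped target.
    V[s] += alpha * (r[s] + gamma * V[s_next] - V[s])
    s = s_next

print(V)   # approximates the value function of the chain under this policy
```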
arXiv Detail & Related papers (2025-02-19T15:33:55Z) - Adversarial Schrödinger Bridge Matching [66.39774923893103]
The Iterative Markovian Fitting (IMF) procedure alternates between Markovian and reciprocal projections of continuous-time processes.
We propose a novel Discrete-time IMF (D-IMF) procedure in which learning of processes is replaced by learning just a few transition probabilities in discrete time.
We show that our D-IMF procedure can provide the same quality of unpaired domain translation as the IMF, using only a few generation steps instead of hundreds.
arXiv Detail & Related papers (2024-05-23T11:29:33Z) - Improving Probabilistic Bisimulation for MDPs Using Machine Learning [0.0]
We propose a new technique to partition the state space of a given model to its probabilistic bisimulation classes.
The approach can significantly decrease the running time compared to state-of-the-art tools.
arXiv Detail & Related papers (2023-07-30T12:58:12Z) - Neural Continuous-Time Markov Models [2.28438857884398]
We develop a method to learn a continuous-time Markov chain's transition rate functions from fully observed time series.
We show that our method learns these transition rates with considerably more accuracy than log-linear methods.
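As background for that summary, the sketch below shows the classical maximum-likelihood estimator of constant CTMC transition rates from a fully observed trajectory (the rate from i to j is the number of i-to-j jumps divided by the total time spent in i); the neural method above generalises this to non-constant rates, and the toy trajectory here is invented.

```python
import numpy as np

# Minimal sketch: classical MLE of constant CTMC transition rates from a
# fully observed trajectory, q_ij = (# of i -> j jumps) / (time spent in i).
# Purely illustrative baseline; the toy trajectory below is made up.

def estimate_generator(states, times, n_states):
    """states[k] is the state entered at times[k] (fully observed jump chain)."""
    holding = np.zeros(n_states)                 # total time spent in each state
    jumps = np.zeros((n_states, n_states))       # observed transition counts
    for k in range(len(states) - 1):
        i, j = states[k], states[k + 1]
        holding[i] += times[k + 1] - times[k]
        jumps[i, j] += 1
    Q = np.zeros((n_states, n_states))
    for i in range(n_states):
        if holding[i] > 0:
            Q[i] = jumps[i] / holding[i]
    np.fill_diagonal(Q, 0.0)
    Q[np.diag_indices(n_states)] = -Q.sum(axis=1)   # rows of a generator sum to zero
    return Q

Q_hat = estimate_generator(states=[0, 1, 0, 2, 0],
                           times=[0.0, 1.3, 2.0, 3.5, 4.1],
                           n_states=3)
print(Q_hat)
```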
arXiv Detail & Related papers (2022-12-11T00:07:41Z) - Formal Controller Synthesis for Markov Jump Linear Systems with
Uncertain Dynamics [64.72260320446158]
We propose a method for synthesising controllers for Markov jump linear systems.
Our method is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS.
We apply our method to multiple realistic benchmark problems, in particular, a temperature control and an aerial vehicle delivery problem.
arXiv Detail & Related papers (2022-12-01T17:36:30Z) - TimeREISE: Time-series Randomized Evolving Input Sample Explanation [5.557646286040063]
TimeREISE is a model attribution method specifically tailored to time series classification.
The method shows superior performance compared to existing approaches across several well-established metrics.
arXiv Detail & Related papers (2022-02-16T09:40:13Z) - Using sequential drift detection to test the API economy [4.056434158960926]
The API economy refers to the widespread integration of APIs (application programming interfaces).
It is desirable to monitor usage patterns and identify when the system is being used in a way it was never used before.
In this work we analyze both histograms and the call graph of API usage to determine whether the system's usage patterns have shifted.
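As a simplified illustration of the histogram side of that idea, the snippet below compares a recent histogram of calls per endpoint against a baseline window with a chi-square test; the sequential scheme in the paper is different, and the endpoint names and counts here are invented.

```python
import numpy as np
from scipy.stats import chisquare

# Minimal sketch: flag a shift in API usage by comparing a recent histogram
# of endpoint calls against a baseline window with a chi-square test.
# One-shot illustration only; the paper uses a sequential drift detector.

endpoints = ["/login", "/search", "/checkout", "/export"]   # hypothetical endpoints
baseline = np.array([500, 300, 150,  50])   # calls per endpoint, reference window
recent   = np.array([480, 310,  40, 170])   # calls per endpoint, current window

# Scale the baseline to the same total so it can serve as the expected counts.
expected = baseline * recent.sum() / baseline.sum()
stat, p_value = chisquare(f_obs=recent, f_exp=expected)

if p_value < 0.01:
    print(f"possible drift in API usage (chi2={stat:.1f}, p={p_value:.2g})")
```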
arXiv Detail & Related papers (2021-11-09T13:24:19Z) - Sampling-Based Robust Control of Autonomous Systems with Non-Gaussian
Noise [59.47042225257565]
We present a novel planning method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous system into a discrete-state model that captures noise by probabilistic transitions between states.
We capture these bounds in the transition probability intervals of a so-called interval Markov decision process (iMDP).
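To illustrate the kind of object an iMDP stores, the sketch below derives per-successor probability intervals from sampled transitions using a generic Hoeffding-style bound; this is not the specific bound used in the paper, and the sample counts are invented.

```python
import numpy as np

# Minimal sketch: sample-based intervals on transition probabilities, the
# kind of bounds an interval MDP (iMDP) stores per state-action pair.
# Generic Hoeffding-style half-width; not the paper's specific bound.

def transition_intervals(counts, confidence=0.99):
    """counts[j] = number of sampled successors that landed in state j."""
    counts = np.asarray(counts, dtype=float)
    n = counts.sum()
    p_hat = counts / n
    eps = np.sqrt(np.log(2.0 / (1.0 - confidence)) / (2.0 * n))   # Hoeffding half-width
    return np.clip(p_hat - eps, 0.0, 1.0), np.clip(p_hat + eps, 0.0, 1.0)

lo, hi = transition_intervals([62, 30, 8])   # hypothetical sampled successor counts
print(lo, hi)   # per-successor probability intervals for this state-action pair
```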
arXiv Detail & Related papers (2021-10-25T06:18:55Z) - Contrastive learning of strong-mixing continuous-time stochastic
processes [53.82893653745542]
Contrastive learning is a family of self-supervised methods where a model is trained to solve a classification task constructed from unlabeled data.
We show that a properly constructed contrastive learning task can be used to estimate the transition kernel for small-to-mid-range intervals in the diffusion case.
arXiv Detail & Related papers (2021-03-03T23:06:47Z) - Learned Factor Graphs for Inference from Stationary Time Sequences [107.63351413549992]
We propose a framework that combines model-based algorithms and data-driven ML tools for stationary time sequences.
Neural networks are developed to separately learn specific components of a factor graph describing the distribution of the time sequence.
We present an inference algorithm based on learned stationary factor graphs, which learns to implement the sum-product scheme from labeled data.
arXiv Detail & Related papers (2020-06-05T07:06:19Z) - Statistical stability indices for LIME: obtaining reliable explanations
for Machine Learning models [60.67142194009297]
The ever-increasing usage of Machine Learning techniques is the clearest example of this trend.
It is often very difficult to understand on what grounds an algorithm made its decision.
It is important for practitioners to be aware of this issue and to have a tool for spotting it.
arXiv Detail & Related papers (2020-01-31T10:39:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.