Bayesian Inference for Left-Truncated Log-Logistic Distributions for Time-to-event Data Analysis
- URL: http://arxiv.org/abs/2506.17852v1
- Date: Sat, 21 Jun 2025 23:10:07 GMT
- Title: Bayesian Inference for Left-Truncated Log-Logistic Distributions for Time-to-event Data Analysis
- Authors: Fahad Mostafa, Md Rejuan Haque, Md Mostafijur Rahman, Farzana Nasrin
- Abstract summary: We propose a Bayesian approach for estimating the parameters of the left-truncated log-logistic (LTLL) distribution. We show that it provides more stable and reliable parameter estimates, particularly when the likelihood surface is irregular due to left truncation.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Parameter estimation is a foundational step in statistical modeling, enabling us to extract knowledge from data and apply it effectively. Bayesian estimation of parameters incorporates prior beliefs with observed data to infer distribution parameters probabilistically and robustly. Moreover, it provides full posterior distributions, allowing uncertainty quantification and regularization, which is especially useful in small or truncated samples. The left-truncated log-logistic (LTLL) distribution is particularly well-suited for modeling time-to-event data in which observations are subject to a known lower bound, such as precipitation data and cancer survival times. In this paper, we propose a Bayesian approach for estimating the parameters of the LTLL distribution with a fixed truncation point \( x_L > 0 \). Given a random variable \( X \sim LL(\alpha, \beta; x_L) \), where \( \alpha > 0 \) is the scale parameter and \( \beta > 0 \) is the shape parameter, the likelihood function is derived based on a truncated sample \( X_1, X_2, \dots, X_N \) with \( X_i > x_L \). We assume independent prior distributions for the parameters, and posterior inference is conducted via Markov Chain Monte Carlo sampling, specifically using the Metropolis-Hastings algorithm, to obtain posterior estimates \( \hat{\alpha} \) and \( \hat{\beta} \). Through simulation studies and real-world applications, we demonstrate that Bayesian estimation provides more stable and reliable parameter estimates, particularly when the likelihood surface is irregular due to left truncation. The results highlight the advantage of Bayesian inference in quantifying parameter uncertainty for truncated distributions in time-to-event data analysis.
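The abstract's pipeline (truncated likelihood for \( X_i > x_L \), independent priors, random-walk Metropolis-Hastings on the parameters) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the paper does not specify its priors or proposal, so the sketch assumes independent normal priors on \( \log\alpha \) and \( \log\beta \) and a Gaussian random-walk proposal on that log scale.

```python
import numpy as np

def ltll_loglik(x, alpha, beta, x_L):
    """Log-likelihood of a left-truncated log-logistic sample (all x_i > x_L)."""
    r = x / alpha
    # log-logistic log-density: log(b/a) + (b-1) log(x/a) - 2 log(1 + (x/a)^b)
    log_f = np.log(beta / alpha) + (beta - 1.0) * np.log(r) - 2.0 * np.log1p(r ** beta)
    # survival at the truncation point: S(x_L) = 1 / (1 + (x_L/alpha)^beta)
    log_S = -np.log1p((x_L / alpha) ** beta)
    # truncated density is f(x) / S(x_L), so subtract N * log S(x_L)
    return log_f.sum() - x.size * log_S

def log_posterior(theta, x, x_L, prior_sd=10.0):
    """Unnormalized log-posterior in theta = (log alpha, log beta).

    The vague independent normal priors on the log-parameters are an
    assumption for illustration, not taken from the paper."""
    la, lb = theta
    log_prior = -(la ** 2 + lb ** 2) / (2.0 * prior_sd ** 2)
    return ltll_loglik(x, np.exp(la), np.exp(lb), x_L) + log_prior

def metropolis_hastings(x, x_L, n_iter=6000, step=0.05, seed=0):
    """Gaussian random-walk Metropolis-Hastings on (log alpha, log beta)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(2)                       # start at alpha = beta = 1
    lp = log_posterior(theta, x, x_L)
    chain = np.empty((n_iter, 2))
    for i in range(n_iter):
        proposal = theta + step * rng.standard_normal(2)
        lp_prop = log_posterior(proposal, x, x_L)
        if np.log(rng.uniform()) < lp_prop - lp:   # accept w.p. min(1, ratio)
            theta, lp = proposal, lp_prop
        chain[i] = np.exp(theta)                  # store on the (alpha, beta) scale
    return chain
```

Posterior means of the chain after burn-in serve as \( \hat{\alpha} \) and \( \hat{\beta} \); working on the log scale keeps both parameters positive without boundary handling in the proposal.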
Related papers
- Weighted Leave-One-Out Cross Validation
We present a weighted version of Leave-One-Out (LOO) cross-validation for estimating the Integrated Squared Error (ISE). The method relies on the construction of the best linear estimator of the squared prediction error at an arbitrary unsampled site. Overall, the estimation of ISE is significantly more precise than with classical, unweighted, LOO cross-validation.
arXiv Detail & Related papers (2025-05-26T09:20:34Z)
- CLT and Edgeworth Expansion for m-out-of-n Bootstrap Estimators of the Studentized Median
The m-out-of-n bootstrap approximates the distribution of a statistic by repeatedly drawing m subsamples without replacement from an original sample of size n. Despite its broad applicability across econometrics, biostatistics, and machine learning, rigorous parameter-free guarantees for the soundness of the bootstrap have remained elusive. This paper establishes such guarantees by analyzing the estimator of sample quantiles obtained from m-out-of-n resampling of a dataset of size n.
arXiv Detail & Related papers (2025-05-16T22:14:49Z)
- Learning Survival Distributions with the Asymmetric Laplace Distribution
We propose a parametric survival analysis method based on the Asymmetric Laplace Distribution (ALD). This distribution allows for closed-form calculation of popular event summaries such as mean, median, mode, variance, and quantiles. We show that the proposed method outperforms parametric and nonparametric approaches in terms of accuracy, discrimination, and calibration.
arXiv Detail & Related papers (2025-05-06T17:34:41Z)
- Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis
We study the theoretical aspects of score-based discrete diffusion models under the Continuous Time Markov Chain (CTMC) framework. We introduce a discrete-time sampling algorithm in the general state space $[S]^d$ that utilizes score estimators at predefined time points. Our convergence analysis employs a Girsanov-based method and establishes key properties of the discrete score function.
arXiv Detail & Related papers (2024-10-03T09:07:13Z)
- Transformer-based Parameter Estimation in Statistics
We propose a transformer-based approach to parameter estimation.
It does not even require knowing the probability density function, which is needed by numerical methods.
It is shown that our approach achieves similar or better accuracy as measured by mean-square-errors.
arXiv Detail & Related papers (2024-02-28T04:30:41Z)
- Score Matching-based Pseudolikelihood Estimation of Neural Marked Spatio-Temporal Point Process with Uncertainty Quantification
We introduce SMASH: a Score MAtching estimator for learning marked spatio-temporal point processes (STPPs) with uncertainty quantification. Specifically, our framework adopts a normalization-free objective by estimating the pseudolikelihood of marked STPPs through score matching.
The superior performance of our proposed framework is demonstrated through extensive experiments in both event prediction and uncertainty quantification.
arXiv Detail & Related papers (2023-10-25T02:37:51Z)
- Bayesian Analysis for Over-parameterized Linear Model via Effective Spectra
We introduce a data-adaptive Gaussian prior that targets the data's intrinsic complexity rather than its ambient dimension. We establish contraction rates of the corresponding posterior distribution, which reveal how the mass in the spectrum affects the prediction error bounds. Our findings demonstrate that Bayesian methods leveraging spectral information of the data are effective for estimation in non-sparse, high-dimensional settings.
arXiv Detail & Related papers (2023-05-25T06:07:47Z)
- Score-based Continuous-time Discrete Diffusion Models
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- Statistical Efficiency of Score Matching: The View from Isoperimetry
We show a tight connection between statistical efficiency of score matching and the isoperimetric properties of the distribution being estimated.
We formalize these results in both the infinite-sample and finite-sample regimes.
arXiv Detail & Related papers (2022-10-03T06:09:01Z)
- Learning Summary Statistics for Bayesian Inference with Autoencoders
We use the inner dimension of deep neural network based Autoencoders as summary statistics.
To create an incentive for the encoder to encode all the parameter-related information but not the noise, we give the decoder access to explicit or implicit information that has been used to generate the training data.
arXiv Detail & Related papers (2022-01-28T12:00:31Z)
- Heavy-tailed Streaming Statistical Estimation
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
arXiv Detail & Related papers (2021-08-25T21:30:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.