A Priori Determination of the Pretest Probability
- URL: http://arxiv.org/abs/2401.04086v1
- Date: Mon, 8 Jan 2024 18:44:43 GMT
- Title: A Priori Determination of the Pretest Probability
- Authors: Jacques Balayla
- Abstract summary: We introduce a novel method to estimate the pretest probability of disease, a priori, utilizing the Logit function from the logistic regression model.
In a patient presenting with signs or symptoms, the minimal bound of the pretest probability, $\phi$, can be approximated by: $\phi \approx \frac{1}{5}\ln\left[\prod_{\theta=1}^{i}\kappa_\theta\right]$, where $\ln$ is the natural logarithm and $\kappa_\theta$ is the likelihood ratio associated with the sign or symptom in question.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this manuscript, we present various proposed methods to estimate the
prevalence of disease, a critical prerequisite for the adequate interpretation
of screening tests. To address the limitations of these approaches, which
revolve primarily around their a posteriori nature, we introduce a novel method
to estimate the pretest probability of disease, a priori, utilizing the Logit
function from the logistic regression model. This approach is a modification of
McGee's heuristic, originally designed for estimating the posttest probability
of disease. In a patient presenting with $n_\theta$ signs or symptoms, the
minimal bound of the pretest probability, $\phi$, can be approximated by:
$\phi \approx
\frac{1}{5}\ln\left[\displaystyle\prod_{\theta=1}^{i}\kappa_\theta\right]$
where $\ln$ is the natural logarithm, and $\kappa_\theta$ is the likelihood
ratio associated with the sign or symptom in question.
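As a worked illustration: a patient with two findings whose likelihood ratios are 3 and 4 gives $\phi \approx \frac{1}{5}\ln(12) \approx 0.50$. The following minimal Python sketch implements this approximation; the function name and the clamping of $\phi$ to $[0, 1]$ are our own assumptions, not from the paper.

```python
import math

def pretest_probability_bound(likelihood_ratios):
    """Minimal bound of the pretest probability: phi ~= (1/5) * ln(prod kappa).

    The clamping of phi to [0, 1] is our own assumption (a probability must
    lie in that range); the paper may treat out-of-range values differently.
    """
    # ln(prod kappa) = sum ln(kappa): sum the log likelihood ratios.
    log_product = sum(math.log(kappa) for kappa in likelihood_ratios)
    phi = log_product / 5.0
    return min(max(phi, 0.0), 1.0)

# Worked example: two findings with likelihood ratios 3 and 4.
# ln(3 * 4) = ln(12) ~= 2.485, so phi ~= 0.497.
print(pretest_probability_bound([3.0, 4.0]))  # ~0.497
```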
Related papers
- The Good, the Bad, and the Sampled: a No-Regret Approach to Safe Online Classification [25.36548531839979]
We study the problem of sequentially testing individuals for a binary disease outcome whose true risk is governed by an unknown logistic model. Our goal is to minimize the total number of costly tests required while guaranteeing that the fraction of misclassifications does not exceed a prespecified error tolerance. This establishes the first no-regret guarantees for error-constrained logistic testing, with direct applications to cost-sensitive medical screening.
arXiv Detail & Related papers (2025-10-01T15:28:00Z)
- Pre-validation Revisited [79.92204034170092]
We show properties and benefits of pre-validation in prediction, inference and error estimation by simulations and applications. We propose not only an analytical distribution of the test statistic for the pre-validated predictor under certain models, but also a generic bootstrap procedure to conduct inference.
arXiv Detail & Related papers (2025-05-21T00:20:14Z)
- Class prior estimation for positive-unlabeled learning when label shift occurs [1.0514231683620516]
We introduce a novel direct estimator of the class prior which avoids estimation of posterior probabilities.
It is based on a distribution matching technique together with kernel embedding and is obtained as an explicit solution to an optimisation task.
We study finite sample behaviour for synthetic and real data and show that the proposal, together with a suitably modified version for large values of the source prior, works on par with or better than its competitors.
arXiv Detail & Related papers (2025-02-28T16:12:53Z)
- Gaussian credible intervals in Bayesian nonparametric estimation of the unseen [7.54430260415628]
The unseen-species problem assumes $n \geq 1$ samples from a population of individuals belonging to different species, possibly infinitely many.
We propose a novel methodology to derive large $m$ credible intervals for $K_{n,m}$, for any $n \geq 1$.
arXiv Detail & Related papers (2025-01-27T12:48:05Z)
- Counterfactual Uncertainty Quantification of Factual Estimand of Efficacy from Before-and-After Treatment Repeated Measures Randomized Controlled Trials [1.3461364647443341]
This article shows that counterfactual uncertainty quantification (CUQ), quantifying uncertainty for factual point estimates but in a counterfactual setting, is surprisingly achievable.
We urge caution when the estimate of the unobservable true condition of a patient before treatment has measurement error, because that violation of a standard regression assumption can cause attenuation in estimated treatment effects.
arXiv Detail & Related papers (2024-11-14T18:01:02Z)
- Testing the Feasibility of Linear Programs with Bandit Feedback [53.40256244941895]
We develop a test based on low-regret algorithms and a nonasymptotic law of the iterated logarithm.
We prove that this test is reliable and adapts to the 'signal level', $\Gamma$, of any instance.
We complement this with a minimax lower bound of $\Omega(d/\Gamma^2)$ on the sample cost of reliable tests.
arXiv Detail & Related papers (2024-06-21T20:56:35Z)
- Variational Prediction [95.00085314353436]
We present a technique for learning a variational approximation to the posterior predictive distribution using a variational bound.
This approach can provide good predictive distributions without test time marginalization costs.
arXiv Detail & Related papers (2023-07-14T18:19:31Z)
- A Novel Bayes' Theorem for Upper Probabilities [7.527234046228324]
In their seminal 1990 paper, Wasserman and Kadane establish an upper bound for Bayes' posterior probability of a measurable set $A$.
In this paper, we introduce a generalization of their result by additionally addressing uncertainty related to the likelihood.
arXiv Detail & Related papers (2023-07-13T15:50:49Z)
- Estimating Optimal Policy Value in General Linear Contextual Bandits [50.008542459050155]
In many bandit problems, the maximal reward achievable by a policy is often unknown in advance.
We consider the problem of estimating the optimal policy value in the sublinear data regime before the optimal policy is even learnable.
We present a more practical, computationally efficient algorithm that estimates a problem-dependent upper bound on $V^*$.
arXiv Detail & Related papers (2023-02-19T01:09:24Z)
- High Probability Bounds for a Class of Nonconvex Algorithms with AdaGrad Stepsize [55.0090961425708]
We propose a new, simplified high probability analysis of AdaGrad for smooth, nonconvex problems.
We present our analysis in a modular way and obtain a complementary $\mathcal{O}(1/T)$ convergence rate in the deterministic setting.
To the best of our knowledge, this is the first high probability result for AdaGrad with a truly adaptive scheme, i.e., completely oblivious to the knowledge of smoothness (the stepsize rule is sketched below).
arXiv Detail & Related papers (2022-04-06T13:50:33Z)
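For readers unfamiliar with the stepsize in question, here is a minimal Python sketch of a single diagonal AdaGrad update, assuming standard AdaGrad as described in the broader literature rather than the specific algorithm class analyzed in this paper; function name and defaults are illustrative.

```python
import numpy as np

def adagrad_step(param, grad, grad_sq_sum, lr=0.1, eps=1e-10):
    """One diagonal AdaGrad update (illustrative names and defaults)."""
    # Accumulate squared gradients coordinate-wise.
    grad_sq_sum = grad_sq_sum + grad ** 2
    # The effective stepsize lr / sqrt(grad_sq_sum) adapts automatically,
    # requiring no knowledge of the smoothness constant.
    param = param - lr * grad / (np.sqrt(grad_sq_sum) + eps)
    return param, grad_sq_sum

# Usage: minimize f(x) = ||x - target||^2 from a zero start.
target = np.array([1.0, -2.0])
x, s = np.zeros(2), np.zeros(2)
for _ in range(500):
    x, s = adagrad_step(x, 2 * (x - target), s)
print(x)  # approaches [1.0, -2.0]
```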
- On the Pitfalls of Heteroscedastic Uncertainty Estimation with Probabilistic Neural Networks [23.502721524477444]
We present a synthetic example illustrating how this approach can lead to very poor but stable estimates.
We identify the culprit to be the log-likelihood loss, along with certain conditions that exacerbate the issue.
We present an alternative formulation, termed $\beta$-NLL, in which each data point's contribution to the loss is weighted by the $\beta$-exponentiated variance estimate (sketched below).
arXiv Detail & Related papers (2022-03-17T08:46:17Z)
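A minimal sketch of the $\beta$-NLL idea as described above, assuming a Gaussian likelihood and a PyTorch-style stop-gradient; the function name and default $\beta$ are our assumptions, not taken from the paper.

```python
import torch

def beta_nll_loss(mean, var, target, beta=0.5):
    """Beta-NLL as described above (name and default beta are assumptions)."""
    # Per-point Gaussian negative log-likelihood, up to an additive constant.
    nll = 0.5 * (torch.log(var) + (target - mean) ** 2 / var)
    # Weight each point by its beta-exponentiated variance estimate,
    # detached so the weight itself receives no gradient.
    # beta = 0 recovers the standard NLL; beta = 1 approaches an
    # MSE-like per-point weighting.
    weight = var.detach() ** beta
    return torch.mean(weight * nll)
```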
- Uncertainty Quantification of the 4th kind; optimal posterior accuracy-uncertainty tradeoff with the minimum enclosing ball [1.6009195333398072]
We introduce a 4th kind of approach to Uncertainty Quantification (UQ).
It can be summarized as, after observing a sample $x$, defining a likelihood region through the relative likelihood and playing a minmax game in that region to define optimal estimators and their risk.
The proposed method addresses the brittleness of Bayesian inference by navigating the robustness-accuracy tradeoff associated with data assimilation.
arXiv Detail & Related papers (2021-08-24T04:02:45Z)
- SLOE: A Faster Method for Statistical Inference in High-Dimensional Logistic Regression [68.66245730450915]
We develop an improved method for debiasing predictions and estimating frequentist uncertainty for practical datasets.
Our main contribution is SLOE, an estimator of the signal strength with convergence guarantees that reduces the computation time of estimation and inference by orders of magnitude.
arXiv Detail & Related papers (2021-03-23T17:48:56Z)
- Optimal Testing of Discrete Distributions with High Probability [49.19942805582874]
We study the problem of testing discrete distributions with a focus on the high probability regime.
We provide the first algorithms for closeness and independence testing that are sample-optimal, within constant factors.
arXiv Detail & Related papers (2020-09-14T16:09:17Z)
- Tracking disease outbreaks from sparse data with Bayesian inference [55.82986443159948]
The COVID-19 pandemic provides new motivation for estimating the empirical rate of transmission during an outbreak.
Standard methods struggle to accommodate the partial observability and sparse data common at finer scales.
We propose a Bayesian framework which accommodates partial observability in a principled manner.
arXiv Detail & Related papers (2020-09-12T20:37:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.