Bayesian adaptive and interpretable functional regression for exposure
  profiles
        - URL: http://arxiv.org/abs/2203.00784v1
 - Date: Tue, 1 Mar 2022 22:52:50 GMT
 - Title: Bayesian adaptive and interpretable functional regression for exposure
  profiles
 - Authors: Yunan Gao, Daniel R. Kowal
 - Abstract summary: Pollutant exposures during gestation are a known and adverse factor for birth and health outcomes.
Using a large cohort of students in North Carolina, we study prenatal $\mbox{PM}_{2.5}$ exposures recorded at near-continuous resolutions.
 - Score: 0.0
 - License: http://creativecommons.org/licenses/by-nc-nd/4.0/
 - Abstract:   Pollutant exposures during gestation are a known and adverse factor for birth
and health outcomes. However, the links between prenatal air pollution
exposures and educational outcomes are less clear, in particular the critical
windows of susceptibility during pregnancy. Using a large cohort of students in
North Carolina, we study prenatal $\mbox{PM}_{2.5}$ exposures recorded at
near-continuous resolutions and linked to 4th end-of-grade reading scores. We
develop a locally-adaptive Bayesian regression model for scalar responses with
functional and scalar predictors. The proposed model pairs a B-spline basis
expansion with dynamic shrinkage priors to capture both smooth and
rapidly-changing features in the regression surface. The local adaptivity is
manifested in more accurate point estimates and more precise uncertainty
quantification than existing methods on simulated data. The model is
accompanied by a highly scalable Gibbs sampler for fully Bayesian inference on
large datasets. In addition, we describe broad limitations with the
interpretability of scalar-on-function regression models, and introduce new
decision analysis tools to guide the model interpretation. Using these methods,
we identify a period within the third trimester as the critical window of
susceptibility to $\mbox{PM}_{2.5}$ exposure.
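
As a rough illustration of the scalar-on-function structure described above, the sketch below reduces the functional term to a finite design matrix via a cubic B-spline expansion of the coefficient function and fits it with a simple ridge penalty. This is a minimal sketch under generic assumptions: the paper's dynamic shrinkage priors, scalar covariates, and Gibbs sampler are not reproduced, and all data, grid sizes, and variable names are synthetic placeholders.

```python
# Minimal sketch of a scalar-on-function regression via basis expansion:
#   y_i = alpha + int X_i(t) beta(t) dt + eps_i,  beta(t) = sum_k b_k B_k(t),
# so the functional term collapses to W[i, k] = int X_i(t) B_k(t) dt.
# A ridge penalty stands in for the paper's dynamic shrinkage priors;
# all data and settings below are illustrative.
import numpy as np
from scipy.integrate import trapezoid
from scipy.interpolate import BSpline

rng = np.random.default_rng(0)

# Synthetic exposure profiles on a dense grid (e.g., daily PM2.5 over gestation).
n, T = 200, 280                                   # subjects, grid points
t = np.linspace(0.0, 1.0, T)                      # rescaled gestational time
X = rng.normal(size=(n, T)).cumsum(axis=1) / np.sqrt(T)

# Cubic B-spline basis for the coefficient function beta(t).
K, degree = 20, 3
interior = np.linspace(0.0, 1.0, K - degree + 1)[1:-1]
knots = np.concatenate(([0.0] * (degree + 1), interior, [1.0] * (degree + 1)))
basis = np.stack([
    BSpline.basis_element(knots[k:k + degree + 2], extrapolate=False)(t)
    for k in range(K)
])
basis = np.nan_to_num(basis)          # each basis element is zero outside its support

# Reduce the functional predictor to a finite design matrix, shape (n, K).
W = trapezoid(X[:, None, :] * basis[None, :, :], t, axis=2)

# Synthetic response with a coefficient bump late in gestation.
beta_true = np.exp(-0.5 * ((t - 0.85) / 0.05) ** 2)
y = trapezoid(X * beta_true, t, axis=1) + 0.1 * rng.normal(size=n)

# Ridge estimate of the basis coefficients (placeholder for the Bayesian fit).
lam = 1e-2
b_hat = np.linalg.solve(W.T @ W + lam * np.eye(K), W.T @ y)
beta_hat = basis.T @ b_hat                        # estimated beta(t) on the grid
print(np.argmax(np.abs(beta_hat)) / T)            # location of the largest effect
```

In the full model, the ridge step would be replaced by the dynamic shrinkage priors and scalable Gibbs sampler described in the abstract, with posterior draws of beta(t) feeding the decision-analysis tools used to identify the critical window.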
 
       
      
        Related papers
        - Robust Spatiotemporal Epidemic Modeling with Integrated Adaptive Outlier   Detection [7.5504472850103435]
In epidemic modeling, outliers can distort parameter estimation and lead to misguided public health decisions. We introduce a robust generalized additive model (RST-GAM) to mitigate this distortion. We demonstrate the practical utility of RST-GAM by analyzing county-level COVID-19 infection data in the United States.
arXiv  Detail & Related papers  (2025-07-12T19:23:25Z) - NeuralSurv: Deep Survival Analysis with Bayesian Uncertainty   Quantification [45.560812800359685]
We introduce NeuralSurv, the first deep survival model to incorporate Bayesian uncertainty quantification. For efficient posterior inference, we introduce a mean-field variational algorithm with coordinate-ascent updates that scale linearly in model size. In experiments, NeuralSurv delivers superior calibration compared to state-of-the-art deep survival models.
arXiv  Detail & Related papers  (2025-05-16T09:53:21Z) - Data Attribution for Diffusion Models: Timestep-induced Bias in   Influence Estimation [53.27596811146316]
Diffusion models operate over a sequence of timesteps rather than the instantaneous input-output relationships assumed in previous settings.
We present Diffusion-TracIn, which incorporates these temporal dynamics, and observe that samples' loss gradient norms are highly dependent on the timestep.
We introduce Diffusion-ReTrac as a re-normalized adaptation that enables the retrieval of training samples more targeted to the test sample of interest.
arXiv  Detail & Related papers  (2024-01-17T07:58:18Z) - Quantifying predictive uncertainty of aphasia severity in stroke   patients with sparse heteroscedastic Bayesian high-dimensional regression [47.1405366895538]
Sparse linear regression methods for high-dimensional data commonly assume that residuals have constant variance, which can be violated in practice.
This paper proposes estimating high-dimensional heteroscedastic linear regression models using a heteroscedastic partitioned empirical Bayes Expectation Conditional Maximization algorithm.
arXiv  Detail & Related papers  (2023-09-15T22:06:29Z) - Performative Prediction with Neural Networks [24.880495520422]
Performative prediction is a framework for learning models that influence the data they intend to predict.
Standard convergence results for finding a performatively stable classifier with the method of repeated risk minimization assume that the data distribution is Lipschitz continuous with respect to the model's parameters.
In this work, we instead assume that the data distribution is Lipschitz continuous with respect to the model's predictions, a more natural assumption for performative systems.
arXiv  Detail & Related papers  (2023-04-14T01:12:48Z) - Lazy Estimation of Variable Importance for Large Neural Networks [22.95405462638975]
We propose a fast and flexible method for approximating the reduced model with important inferential guarantees.
We demonstrate our method is fast and accurate under several data-generating regimes, and we demonstrate its real-world applicability on a seasonal climate forecasting example.
arXiv  Detail & Related papers  (2022-07-19T06:28:17Z) - Modeling High-Dimensional Data with Unknown Cut Points: A Fusion
  Penalized Logistic Threshold Regression [2.520538806201793]
In traditional logistic regression models, the link function is often assumed to be linear and continuous in predictors.
We consider a threshold model in which all continuous features are discretized into ordinal levels, which in turn determine the binary responses.
We find the lasso model is well suited to the problem of early detection and prediction for chronic diseases like diabetes.
arXiv  Detail & Related papers  (2022-02-17T04:16:40Z) - X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To take the power of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv  Detail & Related papers  (2021-10-09T13:56:48Z) - When in Doubt: Neural Non-Parametric Uncertainty Quantification for
  Epidemic Forecasting [70.54920804222031]
Most existing forecasting models disregard uncertainty quantification, resulting in mis-calibrated predictions.
Recent works in deep neural models for uncertainty-aware time-series forecasting also have several limitations.
We model the forecasting task as a probabilistic generative process and propose a functional neural process model called EPIFNP.
arXiv  Detail & Related papers  (2021-06-07T18:31:47Z) - Calibration of prediction rules for life-time outcomes using prognostic
  Cox regression survival models and multiple imputations to account for
  missing predictor data with cross-validatory assessment [0.0]
Methods are described to combine imputation with predictive calibration in survival modeling subject to censoring.
Prediction-averaging appears to have superior statistical properties, especially smaller predictive variation, compared to a direct application of Rubin's rules.
arXiv  Detail & Related papers  (2021-05-04T20:10:12Z) - Leveraging Global Parameters for Flow-based Neural Posterior Estimation [90.21090932619695]
Inferring the parameters of a model based on experimental observations is central to the scientific method.
A particularly challenging setting is when the model is strongly indeterminate, i.e., when distinct sets of parameters yield identical observations.
We present a method for cracking such indeterminacy by exploiting additional information conveyed by an auxiliary set of observations sharing global parameters.
arXiv  Detail & Related papers  (2021-02-12T12:23:13Z) - A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv  Detail & Related papers  (2020-10-27T17:56:14Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under
  Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv  Detail & Related papers  (2020-06-26T13:50:19Z) - Bayesian Sampling Bias Correction: Training with the Right Loss Function [0.0]
We derive a family of loss functions to train models in the presence of sampling bias.
Examples include when the prevalence of a pathology differs from its sampling rate in the training dataset, or when a machine learning practitioner rebalances their training dataset.
arXiv  Detail & Related papers  (2020-06-24T15:10:43Z) - Censored Quantile Regression Forest [81.9098291337097]
We develop a new estimating equation that adapts to censoring and leads to the quantile score whenever the data do not exhibit censoring.
The proposed procedure, named censored quantile regression forest, allows us to estimate quantiles of time-to-event outcomes without any parametric modeling assumption.
arXiv  Detail & Related papers  (2020-01-08T23:20:23Z) 
This list is automatically generated from the titles and abstracts of the papers on this site.
       
     