Improving Bayesian inference in PTA data analysis: importance nested sampling with Normalizing Flows
- URL: http://arxiv.org/abs/2511.01958v1
- Date: Mon, 03 Nov 2025 17:29:46 GMT
- Title: Improving Bayesian inference in PTA data analysis: importance nested sampling with Normalizing Flows
- Authors: Eleonora Villa, Golam Mohiuddin Shaifullah, Andrea Possenti, Carmelita Carbone,
- Abstract summary: We present a detailed study of Bayesian inference for pulsar timing array data with a focus on enhancing efficiency, robustness and speed. We integrate the i-nessai sampler and benchmark its performance on realistic, simulated datasets. Results highlight the potential of flow-based nested sampling to accelerate PTA analyses while preserving the quality of the inference.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a detailed study of Bayesian inference workflows for pulsar timing array data, with a focus on enhancing efficiency, robustness and speed through the use of normalizing flow-based nested sampling. Building on the Enterprise framework, we integrate the i-nessai sampler and benchmark its performance on realistic, simulated datasets. We analyze its computational scaling and stability, and show that it achieves accurate posteriors and reliable evidence estimates with runtimes reduced by up to three orders of magnitude, depending on the dataset configuration, relative to conventional single-core parallel-tempering MCMC analyses. These results highlight the potential of flow-based nested sampling to accelerate PTA analyses while preserving the quality of the inference.
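For readers unfamiliar with the evidence integral that nested sampling computes, the following is a minimal toy sketch of classical nested sampling on a 1D problem with a known answer (a unit Gaussian likelihood under a uniform prior on [-5, 5], so log Z ≈ log 0.1 ≈ -2.30). It uses plain rejection sampling to draw from the constrained prior; it is illustrative only and is not the flow-accelerated i-nessai sampler described in the paper, and all function names here are assumptions for the sketch.

```python
import math
import random

def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b))."""
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def log_likelihood(theta):
    """Standard normal log-likelihood, peaked at theta = 0."""
    return -0.5 * theta * theta - 0.5 * math.log(2.0 * math.pi)

def sample_prior():
    """Uniform prior on [-5, 5] (density 1/10)."""
    return random.uniform(-5.0, 5.0)

def nested_sampling(n_live=200, n_iter=1500, seed=1):
    """Estimate log Z = log ∫ L(θ) π(θ) dθ by nested sampling."""
    random.seed(seed)
    live = [sample_prior() for _ in range(n_live)]
    log_l = [log_likelihood(t) for t in live]
    log_z = -math.inf
    log_x_prev = 0.0  # log prior volume, starts at log(1)
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda j: log_l[j])
        log_l_min = log_l[worst]
        log_x = -i / n_live  # expected log-compression per iteration
        # shell weight w_i = X_{i-1} - X_i, computed in log space
        log_w = log_x_prev + math.log1p(-math.exp(log_x - log_x_prev))
        log_z = logaddexp(log_z, log_l_min + log_w)
        log_x_prev = log_x
        # replace the worst point: rejection-sample the constrained prior
        while True:
            t = sample_prior()
            if log_likelihood(t) > log_l_min:
                live[worst], log_l[worst] = t, log_likelihood(t)
                break
    # fold in the remaining live points
    log_w = log_x_prev - math.log(n_live)
    for ll in log_l:
        log_z = logaddexp(log_z, ll + log_w)
    return log_z

print(nested_sampling())  # should land near log(0.1) ≈ -2.30
```

The rejection step is exactly where this toy version becomes prohibitively slow as the constrained prior shrinks; replacing it with draws from a normalizing flow trained on the live points is the idea behind nessai-style samplers.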
Related papers
- Flow-Based Density Ratio Estimation for Intractable Distributions with Applications in Genomics [80.05951561886123]
We leverage condition-aware flow matching to derive a single dynamical formulation for tracking density ratios along generative trajectories. We demonstrate competitive performance on simulated benchmarks for closed-form ratio estimation, and show that our method supports versatile tasks in single-cell genomics data analysis.
arXiv Detail & Related papers (2026-02-27T17:27:55Z) - Minimum Distance Summaries for Robust Neural Posterior Estimation [7.4716500353679685]
Simulation-based inference (SBI) enables amortized Bayesian inference by first training a neural posterior estimator (NPE) on prior-simulator pairs. We introduce minimum-distance summaries, a plug-in robust NPE method that adapts queried test-time summaries independently of the pretrained NPE.
arXiv Detail & Related papers (2026-02-09T20:06:15Z) - Echo State Networks for Time Series Forecasting: Hyperparameter Sweep and Benchmarking [51.56484100374058]
We evaluate whether a fully automatic, purely feedback-driven ESN can serve as a competitive alternative to widely used statistical forecasting methods. Forecast accuracy is measured using MASE and sMAPE and benchmarked against simple baselines, such as drift and seasonal-naive forecasts, as well as statistical models.
arXiv Detail & Related papers (2026-02-03T16:01:22Z) - Low-Dimensional Adaptation of Rectified Flow: A New Perspective through the Lens of Diffusion and Stochastic Localization [59.04314685837778]
Rectified flow (RF) has gained considerable popularity due to its generation efficiency and state-of-the-art performance. In this paper, we investigate the degree to which RF automatically adapts to the intrinsic low dimensionality of the support of the target distribution to accelerate sampling. We show that, using a carefully designed time-discretization scheme and sufficiently accurate drift estimates, the RF sampler enjoys a complexity of order $O(k/\varepsilon)$.
arXiv Detail & Related papers (2026-01-21T22:09:27Z) - Estimating Time Series Foundation Model Transferability via In-Context Learning [74.65355820906355]
Time series foundation models (TSFMs) offer strong zero-shot forecasting via large-scale pre-training. Fine-tuning remains critical for boosting performance in domains with limited public data. We introduce TimeTic, a transferability estimation framework that recasts model selection as an in-context-learning problem.
arXiv Detail & Related papers (2025-09-28T07:07:13Z) - Flow Matching for Robust Simulation-Based Inference under Model Misspecification [11.172752919335394]
Flow Matching Corrected Posterior Estimation is a framework that refines simulation-trained posterior estimators using a small set of real calibration samples. We show that our proposal consistently mitigates the effects of misspecification, delivering improved inference accuracy and uncertainty calibration compared to standard SBI baselines.
arXiv Detail & Related papers (2025-09-27T16:10:53Z) - PCA-Guided Quantile Sampling: Preserving Data Structure in Large-Scale Subsampling [0.0]
We introduce Principal Component Analysis-guided Quantile Sampling (PCA-QS), a novel sampling framework designed to preserve both the statistical and geometric structure of large-scale datasets. We show that PCA-QS consistently outperforms simple random sampling, yielding better structure preservation and improved downstream model performance.
arXiv Detail & Related papers (2025-06-23T02:37:05Z) - Bounds in Wasserstein Distance for Locally Stationary Processes [0.29771206318712146]
We introduce a novel conditional probability distribution estimator specifically tailored for locally stationary process (LSP) data. We rigorously establish convergence rates for the Nadaraya-Watson (NW)-based conditional probability estimator under the Wasserstein metric. We conduct extensive numerical simulations on synthetic datasets and provide empirical validations using real-world data.
arXiv Detail & Related papers (2024-12-04T15:51:22Z) - Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z) - Rethinking Clustered Federated Learning in NOMA Enhanced Wireless Networks [60.09912912343705]
This study explores the benefits of integrating the novel clustered federated learning (CFL) approach with non-independent and identically distributed (non-IID) datasets.
A detailed theoretical analysis of the generalization gap that measures the degree of non-IID in the data distribution is presented.
Solutions to address the challenges posed by non-IID conditions are proposed with the analysis of the properties.
arXiv Detail & Related papers (2024-03-05T17:49:09Z) - Data Attribution for Diffusion Models: Timestep-induced Bias in Influence Estimation [53.27596811146316]
Diffusion models operate over a sequence of timesteps instead of instantaneous input-output relationships in previous contexts.
We present Diffusion-TracIn that incorporates this temporal dynamics and observe that samples' loss gradient norms are highly dependent on timestep.
We introduce Diffusion-ReTrac as a re-normalized adaptation that enables the retrieval of training samples more targeted to the test sample of interest.
arXiv Detail & Related papers (2024-01-17T07:58:18Z) - Optimal Sampling Designs for Multi-dimensional Streaming Time Series with Application to Power Grid Sensor Data [4.891140022708977]
We study the data-dependent sample selection and online inference problem for a multi-dimensional streaming time series.
Inspired by D-optimality criterion in design of experiments, we propose a class of online data reduction methods.
We show that the optimal solution amounts to a strategy that is a mixture of Bernoulli sampling and leverage score sampling.
arXiv Detail & Related papers (2023-03-14T21:26:30Z) - SynBench: Task-Agnostic Benchmarking of Pretrained Representations using Synthetic Data [78.21197488065177]
The recent success of fine-tuning large models, pretrained on broad data at scale, on downstream tasks has led to a significant paradigm shift in deep learning.
This paper proposes a new task-agnostic framework, SynBench, to measure the quality of pretrained representations using synthetic data.
arXiv Detail & Related papers (2022-10-06T15:25:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.