Anchors no more: Using peculiar velocities to constrain $H_0$ and the primordial Universe without calibrators
- URL: http://arxiv.org/abs/2504.10453v1
- Date: Mon, 14 Apr 2025 17:40:18 GMT
- Title: Anchors no more: Using peculiar velocities to constrain $H_0$ and the primordial Universe without calibrators
- Authors: Davide Piras, Francesco Sorrenti, Ruth Durrer, Martin Kunz,
- Abstract summary: We develop a novel approach to constrain the Hubble parameter $H_0$ and the primordial power spectrum amplitude $A_\mathrm{s}$ using type Ia supernovae (SNIa) data. This yields a new independent probe of the large-scale structure based on SNIa data without distance anchors.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a novel approach to constrain the Hubble parameter $H_0$ and the primordial power spectrum amplitude $A_\mathrm{s}$ using type Ia supernovae (SNIa) data. By considering SNIa as tracers of the peculiar velocity field, we can model their distance and their covariance as a function of cosmological parameters without the need for calibrators like Cepheids; this yields a new independent probe of the large-scale structure based on SNIa data without distance anchors. Crucially, we implement a differentiable pipeline in JAX, including efficient emulators and affine sampling, reducing inference time from years to hours on a single GPU. We first validate our method on mock datasets, demonstrating that we can constrain $H_0$ and $\log 10^{10}A_\mathrm{s}$ within $\sim10\%$ using $\sim10^3$ SNIa. We then test our pipeline with SNIa from an $N$-body simulation, obtaining $7\%$-level unbiased constraints on $H_0$ with a moderate noise level. We finally apply our method to Pantheon+ data, constraining $H_0$ at the $10\%$ level without Cepheids when fixing $A_\mathrm{s}$ to its \textit{Planck} value. On the other hand, we obtain $15\%$-level constraints on $\log 10^{10}A_\mathrm{s}$ in agreement with \textit{Planck} when including Cepheids in the analysis. In light of upcoming observations of low-redshift SNIa from the Zwicky Transient Facility and the Vera C. Rubin Observatory Legacy Survey of Space and Time, surveys for which our method will develop its full potential, we make our code publicly available.
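The abstract's "affine sampling" presumably refers to an affine-invariant ensemble sampler in the Goodman & Weare family. Below is a minimal pure-Python sketch of the stretch move under that assumption; it is not the paper's JAX pipeline, and the walker layout, the step-scale parameter `a`, and the sequential per-walker update are illustrative choices (production samplers such as emcee update the ensemble in two halves to preserve detailed balance):

```python
import math
import random

def stretch_move(walkers, log_prob, a=2.0, rng=random):
    """One affine-invariant 'stretch move' sweep over an ensemble of
    walkers (lists of parameters); log_prob maps a parameter list to a
    log-posterior value. Returns the updated ensemble."""
    n, dim = len(walkers), len(walkers[0])
    updated = []
    for i, x in enumerate(walkers):
        # pick a complementary walker (any walker other than walker i)
        j = rng.randrange(n - 1)
        if j >= i:
            j += 1
        y = walkers[j]
        # draw the stretch factor z with density g(z) ~ 1/sqrt(z) on [1/a, a]
        z = ((a - 1.0) * rng.random() + 1.0) ** 2 / a
        # propose along the line joining the two walkers (affine invariance)
        proposal = [y[k] + z * (x[k] - y[k]) for k in range(dim)]
        # accept with probability min(1, z^(dim-1) * p(proposal) / p(x))
        log_ratio = (dim - 1) * math.log(z) + log_prob(proposal) - log_prob(x)
        updated.append(proposal if math.log(rng.random()) < log_ratio else list(x))
    return updated
```

Because proposals are built from pairs of walkers, the move is invariant under affine reparametrizations of the posterior, which is what makes this family attractive for correlated parameters like $H_0$ and $A_\mathrm{s}$.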
Related papers
- Spike-and-Slab Posterior Sampling in High Dimensions [11.458504242206862]
Posterior sampling with the spike-and-slab prior [MB88] is considered the theoretical gold standard method for Bayesian sparse linear regression. We give the first provable algorithms for spike-and-slab posterior sampling that apply for any SNR, and use a measurement count sublinear in the problem dimension. We extend our result to spike-and-slab posterior sampling with Laplace diffuse densities, achieving similar guarantees when $\sigma = O(\frac{1}{k})$ is bounded.
arXiv Detail & Related papers (2025-03-04T17:16:07Z) - Learning Networks from Wide-Sense Stationary Stochastic Processes [7.59499154221528]
A key inference problem here is to learn edge connectivity from node outputs (potentials). We use Whittle's maximum likelihood estimator (MLE) to learn the support of $L^\ast$ from temporally correlated samples. We show that the MLE problem is strictly convex, admitting a unique solution.
arXiv Detail & Related papers (2024-12-04T23:14:00Z) - Scaling Up Differentially Private LASSO Regularized Logistic Regression
via Faster Frank-Wolfe Iterations [51.14495595270775]
We adapt the Frank-Wolfe algorithm for $L_1$ penalized linear regression to be aware of sparse inputs and to use them effectively.
Our results demonstrate that this procedure can reduce runtime by a factor of up to $2{,}200\times$, depending on the value of the privacy parameter $\epsilon$ and the sparsity of the dataset.
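For context, the Frank-Wolfe method this entry adapts replaces projection with a linear minimization oracle, which over the $\ell_1$ ball touches a single coordinate per step. The sketch below is the textbook non-private, dense-input variant for $\ell_1$-constrained least squares; `radius` and the $2/(k+2)$ step schedule are standard choices, not details taken from the paper:

```python
def frank_wolfe_l1(X, y, radius=1.0, iters=200):
    """Frank-Wolfe for min ||Xw - y||^2 subject to ||w||_1 <= radius.
    X is a list of rows, y a list of targets; returns the weight list."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for k in range(iters):
        # gradient of the squared loss: 2 * X^T (Xw - y)
        resid = [sum(X[i][j] * w[j] for j in range(d)) - y[i] for i in range(n)]
        grad = [2.0 * sum(X[i][j] * resid[i] for i in range(n)) for j in range(d)]
        # linear minimization oracle over the l1 ball: a signed vertex
        j_star = max(range(d), key=lambda j: abs(grad[j]))
        vertex = [0.0] * d
        vertex[j_star] = -radius if grad[j_star] > 0 else radius
        # standard open-loop step size keeps w a convex combination of vertices
        gamma = 2.0 / (k + 2.0)
        w = [(1.0 - gamma) * w[j] + gamma * vertex[j] for j in range(d)]
    return w
```

Since every iterate is a convex combination of $\ell_1$-ball vertices, the constraint holds exactly at all times, and each update is sparse, which is the property the paper exploits for speed.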
arXiv Detail & Related papers (2023-10-30T19:52:43Z) - Optimal Approximation Rates for Deep ReLU Neural Networks on Sobolev and Besov Spaces [2.7195102129095003]
Deep neural networks with the ReLU activation function can approximate functions in the Sobolev spaces $W^s(L_q(\Omega))$ and Besov spaces $B^s_r(L_q(\Omega))$.
This problem is important when studying the application of neural networks in a variety of fields.
arXiv Detail & Related papers (2022-11-25T23:32:26Z) - Hierarchical Inference of the Lensing Convergence from Photometric
Catalogs with Bayesian Graph Neural Networks [0.0]
We introduce fluctuations on galaxy-galaxy lensing scales of $\sim 1''$ and extract random sightlines to train our BGNN.
For each test set of 1,000 sightlines, the BGNN infers the individual $kappa$ posteriors, which we combine in a hierarchical Bayesian model.
For a test field well sampled by the training set, the BGNN recovers the population mean of $kappa$ precisely and without bias.
arXiv Detail & Related papers (2022-11-15T00:29:20Z) - Learning Stochastic Shortest Path with Linear Function Approximation [74.08819218747341]
We study the shortest path (SSP) problem in reinforcement learning with linear function approximation, where the transition kernel is represented as a linear mixture of unknown models.
We propose a novel algorithm for learning the linear mixture SSP, which can attain a $\tilde{O}(d B_\star^{1.5}\sqrt{K}/c_{\min})$ regret.
arXiv Detail & Related papers (2021-10-25T08:34:00Z) - Reward-Free Model-Based Reinforcement Learning with Linear Function
Approximation [92.99933928528797]
We study the model-based reward-free reinforcement learning with linear function approximation for episodic Markov decision processes (MDPs)
In the planning phase, the agent is given a specific reward function and uses samples collected from the exploration phase to learn a good policy.
We show that to obtain an $\epsilon$-optimal policy for an arbitrary reward function, UCRL-RFE needs to sample at most $\tilde{O}(H^4 d(H + d)\epsilon^{-2})$ episodes.
arXiv Detail & Related papers (2021-10-12T23:03:58Z) - Threshold Phenomena in Learning Halfspaces with Massart Noise [56.01192577666607]
We study the problem of PAC learning halfspaces on $\mathbb{R}^d$ with Massart noise under Gaussian marginals.
Our results qualitatively characterize the complexity of learning halfspaces in the Massart model.
arXiv Detail & Related papers (2021-08-19T16:16:48Z) - Denoising modulo samples: k-NN regression and tightness of SDP
relaxation [5.025654873456756]
We derive a two-stage algorithm that recovers estimates of the samples $f(x_i)$ with a uniform error rate $O\left(\left(\frac{\log n}{n}\right)^{\frac{1}{d+2}}\right)$ holding with high probability.
The estimates of the samples $f(x_i)$ can be subsequently utilized to construct an estimate of the function $f$.
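The k-NN regression primitive named in this entry's title can be sketched generically as below; this is a hypothetical 1-D `knn_regress` helper for illustration only, and the paper's modulo-unwrapping first stage is not reproduced:

```python
def knn_regress(x_train, y_train, x_query, k=3):
    """Predict at x_query by averaging the responses of the k nearest
    training inputs (1-D inputs, Euclidean distance)."""
    order = sorted(range(len(x_train)), key=lambda i: abs(x_train[i] - x_query))
    return sum(y_train[i] for i in order[:k]) / k
```

Averaging over neighbors is what smooths out the per-sample noise, which is why the denoised samples can then be turned into a function estimate.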
arXiv Detail & Related papers (2020-09-10T13:32:46Z) - A Randomized Algorithm to Reduce the Support of Discrete Measures [79.55586575988292]
Given a discrete probability measure supported on $N$ atoms and a set of $n$ real-valued functions, there exists a probability measure that is supported on a subset of $n+1$ of the original $N$ atoms.
We give a simple geometric characterization of barycenters via negative cones and derive a randomized algorithm that computes this new measure by "greedy geometric sampling".
We then study its properties, and benchmark it on synthetic and real-world data to show that it can be very beneficial in the $N \gg n$ regime.
arXiv Detail & Related papers (2020-06-02T16:38:36Z) - Agnostic Learning of a Single Neuron with Gradient Descent [92.7662890047311]
We consider the problem of learning the best-fitting single neuron as measured by the expected square loss.
For the ReLU activation, our population risk guarantee is $O(\mathsf{OPT}^{1/2}) + \epsilon$.
arXiv Detail & Related papers (2020-05-29T07:20:35Z)
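The last entry's setting, gradient descent on a single neuron under the square loss, can be sketched in the realizable 1-D ReLU case. This toy illustrates only the update rule; the paper's agnostic $O(\mathsf{OPT}^{1/2}) + \epsilon$ guarantee concerns a far more general regime:

```python
def fit_relu_neuron(xs, ys, w0=0.5, lr=0.1, epochs=300):
    """Plain gradient descent on f(w) = mean((relu(w*x) - y)^2) for a
    single 1-D ReLU neuron. Returns the learned weight."""
    w = w0
    n = len(xs)
    for _ in range(epochs):
        g = 0.0
        for x, y in zip(xs, ys):
            pred = max(0.0, w * x)
            if w * x > 0:  # ReLU passes gradient only where it is active
                g += 2.0 * (pred - y) * x
        w -= lr * g / n  # gradient of the mean squared loss
    return w
```

With all-positive inputs and labels generated by a ReLU neuron, the loss restricted to the active region is a simple quadratic in $w$, so gradient descent contracts to the true weight geometrically.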
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.