Model-agnostic basis functions for the 2-point correlation function of dark matter in linear theory
- URL: http://arxiv.org/abs/2410.21374v1
- Date: Mon, 28 Oct 2024 18:00:01 GMT
- Title: Model-agnostic basis functions for the 2-point correlation function of dark matter in linear theory
- Authors: Aseem Paranjape, Ravi K. Sheth
- Abstract summary: We find a basis $\mathcal{B}$ that describes $\xi_{\rm lin}(r)$ near the baryon acoustic oscillation feature in a wide class of cosmological models.
Using our basis functions in model-agnostic BAO analyses can potentially lead to significant statistical gains.
- Abstract: We consider approximating the linearly evolved 2-point correlation function (2pcf) of dark matter $\xi_{\rm lin}(r;\boldsymbol{\theta})$ in a cosmological model with parameters $\boldsymbol{\theta}$ as the linear combination $\xi_{\rm lin}(r;\boldsymbol{\theta})\approx\sum_i\,b_i(r)\,w_i(\boldsymbol{\theta})$, where the functions $\mathcal{B}=\{b_i(r)\}$ form a $\textit{model-agnostic basis}$ for the linear 2pcf. This decomposition is important for model-agnostic analyses of the baryon acoustic oscillation (BAO) feature in the nonlinear 2pcf of galaxies that fix $\mathcal{B}$ and leave the coefficients $\{w_i\}$ free. To date, such analyses have made simple but sub-optimal choices for $\mathcal{B}$, such as monomials. We develop a machine learning framework for systematically discovering a $\textit{minimal}$ basis $\mathcal{B}$ that describes $\xi_{\rm lin}(r)$ near the BAO feature in a wide class of cosmological models. We use a custom architecture, denoted $\texttt{BiSequential}$, for a neural network (NN) that explicitly realizes the separation between $r$ and $\boldsymbol{\theta}$ above. The optimal NN trained on data in which only $\{\Omega_{\rm m},h\}$ are varied in a $\textit{flat}$ $\Lambda$CDM model produces a basis $\mathcal{B}$ comprising $9$ functions capable of describing $\xi_{\rm lin}(r)$ to $\sim0.6\%$ accuracy in $\textit{curved}$ $w$CDM models varying 7 parameters within $\sim5\%$ of their fiducial, flat $\Lambda$CDM values. Scales such as the peak, linear point and zero-crossing of $\xi_{\rm lin}(r)$ are also recovered with very high accuracy. We compare our approach to other compression schemes in the literature, and speculate that $\mathcal{B}$ may also encompass $\xi_{\rm lin}(r)$ in modified gravity models near our fiducial $\Lambda$CDM model. Using our basis functions in model-agnostic BAO analyses can potentially lead to significant statistical gains.
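The separable structure $\xi_{\rm lin}(r;\boldsymbol{\theta})\approx\sum_i b_i(r)\,w_i(\boldsymbol{\theta})$ lends itself to a two-branch network. Below is a minimal PyTorch sketch of such an architecture (an illustration with generic layer sizes, not the authors' $\texttt{BiSequential}$ implementation): one branch maps $r$ to the basis values $b_i(r)$, the other maps $\boldsymbol{\theta}$ to the coefficients $w_i(\boldsymbol{\theta})$, and the output is their inner product; the parameter ranges in the usage lines are placeholders.

```python
# Hypothetical sketch of a separable two-branch network:
# xi(r, theta) ~ sum_i b_i(r) * w_i(theta), with n_basis = 9 as in the abstract.
import torch
import torch.nn as nn

class SeparableXi(nn.Module):
    def __init__(self, n_params=2, n_basis=9, hidden=64):
        super().__init__()
        # Branch 1: r -> (b_1(r), ..., b_n(r)), the model-agnostic basis.
        self.basis = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(),
            nn.Linear(hidden, n_basis),
        )
        # Branch 2: theta -> (w_1(theta), ..., w_n(theta)), the coefficients.
        self.coeffs = nn.Sequential(
            nn.Linear(n_params, hidden), nn.Tanh(),
            nn.Linear(hidden, n_basis),
        )

    def forward(self, r, theta):
        # r: (N, 1) separations; theta: (N, n_params) cosmological parameters.
        b = self.basis(r)           # (N, n_basis)
        w = self.coeffs(theta)      # (N, n_basis)
        return (b * w).sum(dim=-1)  # (N,) approximation to xi_lin(r; theta)

model = SeparableXi()
r = torch.rand(128, 1) * 60 + 60   # placeholder range around the BAO scale
theta = torch.rand(128, 2)         # e.g. (Omega_m, h), rescaled to [0, 1]
xi_hat = model(r, theta)
```

After training, the $r$-branch would be frozen to supply the fixed basis $\mathcal{B}$, while the coefficients $\{w_i\}$ remain free parameters in a model-agnostic BAO fit.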
Related papers
- Conditional regression for the Nonlinear Single-Variable Model [4.565636963872865]
We consider a model $F(X):=f(\Pi_\gamma(X))$, where $\Pi_\gamma:\mathbb{R}^d\to[0,{\rm len}_\gamma]$ and $f:[0,{\rm len}_\gamma]\to\mathbb{R}^1$.
We propose a nonparametric estimator, based on conditional regression, and show that it can achieve the one-dimensional optimal minimax rate.
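As a toy illustration of why reducing to the scalar $t=\Pi_\gamma(X)$ helps, the numpy sketch below (the projection, link function, and choice of kernel regression are all illustrative assumptions, not the paper's estimator) regresses $y$ on the projected coordinate with a Nadaraya-Watson smoother:

```python
# Toy illustration: once X is reduced to the scalar t = Pi_gamma(X),
# regressing y on t is a one-dimensional problem, which is why
# one-dimensional minimax rates become attainable.
import numpy as np

rng = np.random.default_rng(0)

def pi_gamma(X):
    # Illustrative stand-in for the projection: here the "curve" is the
    # first coordinate axis, so the projected coordinate is just X[:, 0].
    return X[:, 0]

def f(t):
    return np.sin(3 * t)  # unknown link function to recover

# Generate data F(X) = f(Pi_gamma(X)) + noise in d = 5 dimensions.
X = rng.uniform(0, 1, size=(2000, 5))
y = f(pi_gamma(X)) + 0.1 * rng.standard_normal(2000)

def nadaraya_watson(t_train, y_train, t_query, h=0.05):
    # Kernel-weighted average of y over training points near each query t.
    w = np.exp(-0.5 * ((t_query[:, None] - t_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

t_query = np.linspace(0.05, 0.95, 10)
est = nadaraya_watson(pi_gamma(X), y, t_query)
print(np.max(np.abs(est - f(t_query))))  # close to f on the interior
```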
arXiv Detail & Related papers (2024-11-14T18:53:51Z)
- Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p}n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
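The primitive underlying such schemes can be shown in a few lines (a generic Gaussian-sketch demonstration, not the paper's construction): a linear sketch approximately preserves Euclidean norms, so residual errors can be estimated from the sketched vectors alone.

```python
# Toy primitive (not the paper's construction): a Gaussian linear sketch
# S in R^{m x n} approximately preserves Euclidean norms, so the residual
# ||x - y||_2 can be estimated from Sx and Sy alone.
import numpy as np

rng = np.random.default_rng(1)
n, m = 10_000, 400
S = rng.standard_normal((m, n)) / np.sqrt(m)  # linear sketch matrix

x = rng.standard_normal(n)
y = x + 0.1 * rng.standard_normal(n)          # candidate approximation of x

sx, sy = S @ x, S @ y                          # stored sketches, size m << n
est = np.linalg.norm(sx - sy)                  # estimate of ||x - y||_2
true = np.linalg.norm(x - y)
print(abs(est - true) / true)                  # small relative error
```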
arXiv Detail & Related papers (2024-08-16T02:33:07Z)
- Transformer In-Context Learning for Categorical Data [51.23121284812406]
We extend research on understanding Transformers through the lens of in-context learning with functional data by considering categorical outcomes, nonlinear underlying models, and nonlinear attention.
We present what is believed to be the first real-world demonstration of this few-shot-learning methodology, using the ImageNet dataset.
arXiv Detail & Related papers (2024-05-27T15:03:21Z)
- A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
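The 1-bit observation model named above is easy to simulate; the numpy toy below (an illustration of the measurement model, not the paper's uniform-recovery framework) shows that for Gaussian measurements the normalized backprojection $A^\top y/m$ already aligns with the signal direction.

```python
# Toy 1-bit compressed-sensing model (one of the canonical examples named
# in the abstract, not the paper's general framework): observe y = sign(A x*).
# For Gaussian A, the backprojection A^T y / m aligns with x* in expectation.
import numpy as np

rng = np.random.default_rng(2)
n, m = 50, 20_000
x_star = rng.standard_normal(n)
x_star /= np.linalg.norm(x_star)      # 1-bit data only identifies direction

A = rng.standard_normal((m, n))
y = np.sign(A @ x_star)               # 1-bit quantized observations

x_hat = A.T @ y / m
x_hat /= np.linalg.norm(x_hat)
print(x_hat @ x_star)                 # close to 1: direction recovered
```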
arXiv Detail & Related papers (2023-09-25T17:54:19Z)
- A spectral least-squares-type method for heavy-tailed corrupted regression with unknown covariance & heterogeneous noise [2.019622939313173]
We revisit heavy-tailed corrupted least-squares linear regression, assuming a corrupted label-feature sample of size $n$ containing at most $\epsilon n$ arbitrary outliers.
We propose a near-optimal, computationally tractable estimator, based on the power method, assuming no knowledge of $(\Sigma,\Xi)$ nor of the operator norm of $\Xi$.
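The spectral primitive behind such estimators is the power method; below is a minimal numpy sketch of that primitive alone (not the paper's full corrupted-regression procedure).

```python
# Minimal power method (the spectral building block, not the paper's full
# estimator): iterate v <- M v / ||M v|| to approximate the eigenvector of
# largest-magnitude eigenvalue of a symmetric matrix M.
import numpy as np

rng = np.random.default_rng(3)

def power_method(M, iters=200):
    v = rng.standard_normal(M.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v, v @ M @ v  # (eigenvector, Rayleigh-quotient eigenvalue)

M = rng.standard_normal((100, 100))
M = (M + M.T) / 2                          # symmetrize
v, lam = power_method(M)
w, _ = np.linalg.eigh(M)
print(abs(lam - w[np.argmax(np.abs(w))]))  # matches the top |eigenvalue|
```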
arXiv Detail & Related papers (2022-09-06T23:37:31Z)
- Random Graph Matching in Geometric Models: the Case of Complete Graphs [21.689343447798677]
This paper studies the problem of matching two complete graphs with edge weights correlated through latent geometries.
We derive an approximate maximum likelihood estimator, which provably achieves, with high probability, perfect recovery of the latent matching $\pi^*$.
As a side discovery, we show that the celebrated spectral algorithm of [Ume88] emerges as a further approximation to the maximum likelihood in the geometric model.
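The spectral algorithm of [Ume88] admits a compact sketch (a generic noise-free toy using scipy's linear_sum_assignment for the rounding step, not the paper's analysis): align the absolute eigenvector matrices of the two symmetric weight matrices, then round to a permutation.

```python
# Sketch of Umeyama-style spectral matching ([Ume88]): align the absolute
# eigenvector matrices of two symmetric weight matrices, then round to a
# permutation with linear assignment. Noise-free toy case for illustration.
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(4)
n = 30
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # weighted complete graph
pi = rng.permutation(n)
B = A[np.ix_(pi, pi)]                                # B = A relabeled by pi

_, UA = np.linalg.eigh(A)
_, UB = np.linalg.eigh(B)
score = np.abs(UB) @ np.abs(UA).T                    # vertex similarity matrix
row, col = linear_sum_assignment(-score)             # maximize total score
pi_hat = col[np.argsort(row)]
print((pi_hat == pi).mean())                         # 1.0 in the noiseless case
```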
arXiv Detail & Related papers (2022-02-22T04:14:45Z)
- Self-training Converts Weak Learners to Strong Learners in Mixture Models [86.7137362125503]
We show that a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ can achieve classification error at most $C_{\mathrm{err}}$.
We additionally show that by running gradient descent on the logistic loss one can obtain a pseudolabeler $\boldsymbol{\beta}_{\mathrm{pl}}$ with classification error $C_{\mathrm{err}}$ using only $O(d)$ labeled examples.
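A toy numpy version of this pipeline (an illustration under an assumed Gaussian-mixture model, not the paper's construction): fit a pseudolabeler on few labels by gradient descent on the logistic loss, then retrain on its own pseudolabels for a large unlabeled pool.

```python
# Toy self-training loop: a weak pseudolabeler trained on O(d) labels is
# refined by retraining on its own pseudolabels (illustrative only).
import numpy as np

rng = np.random.default_rng(5)
d, n_lab, n_unlab = 10, 50, 5000
beta_true = np.ones(d) / np.sqrt(d)

def sample(n):  # Gaussian mixture: y in {-1,+1}, x = y*beta_true + noise
    y = rng.choice([-1.0, 1.0], size=n)
    return y[:, None] * beta_true + 0.7 * rng.standard_normal((n, d)), y

def fit_logistic(X, y, steps=500, lr=0.5):
    beta = np.zeros(d)
    for _ in range(steps):  # gradient descent on the logistic loss
        m = np.clip(y * (X @ beta), -30, 30)
        beta -= lr * (-(y / (1 + np.exp(m))) @ X) / len(y)
    return beta

X_lab, y_lab = sample(n_lab)
X_un, y_un = sample(n_unlab)

beta_pl = fit_logistic(X_lab, y_lab)                    # weak pseudolabeler
beta_st = fit_logistic(X_un, np.sign(X_un @ beta_pl))   # self-training step

for name, b in [("pseudolabeler", beta_pl), ("self-trained", beta_st)]:
    print(name, (np.sign(X_un @ b) != y_un).mean())  # error typically drops
```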
arXiv Detail & Related papers (2021-06-25T17:59:16Z)
- An Algorithm for Learning Smaller Representations of Models With Scarce Data [0.0]
We present a greedy algorithm for solving binary classification problems in situations where the dataset is too small or not fully representative.
It relies on a trained model with loose accuracy constraints, an iterative hyperparameter pruning procedure, and a function used to generate new data.
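Read literally, this suggests a loop like the hypothetical skeleton below; every helper name and the stopping rule are assumptions for illustration, not the paper's algorithm.

```python
# Hypothetical skeleton of the three ingredients named above; every helper
# is a placeholder assumption, not the paper's actual procedure.
from typing import Any, Callable

def learn_smaller_representation(
    train: Callable[[dict], Any],          # trains a model for a config
    accuracy: Callable[[Any], float],      # evaluates on held-out data
    prune: Callable[[dict], dict],         # shrinks the hyperparameter config
    generate_data: Callable[[Any], None],  # augments the scarce dataset
    config: dict,
    min_accuracy: float = 0.85,            # "loose" accuracy constraint
) -> Any:
    model = train(config)
    while True:
        candidate_cfg = prune(config)           # iterative pruning step
        generate_data(model)                    # synthesize new samples
        candidate = train(candidate_cfg)
        if accuracy(candidate) < min_accuracy:  # stop when constraint breaks
            return model
        model, config = candidate, candidate_cfg
```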
arXiv Detail & Related papers (2020-10-15T19:17:51Z)
- Optimal Combination of Linear and Spectral Estimators for Generalized Linear Models [59.015960528781115]
We show how to optimally combine $\hat{\boldsymbol{x}}^{\rm L}$ and $\hat{\boldsymbol{x}}^{\rm s}$.
In order to establish the limiting distribution of $(\boldsymbol{x}, \hat{\boldsymbol{x}}^{\rm L}, \hat{\boldsymbol{x}}^{\rm s})$, we design and analyze an Approximate Message Passing (AMP) algorithm.
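The combination step itself reduces to a small least-squares problem once the limiting joint distribution is known; the numpy toy below (synthetic estimates standing in for the linear and spectral ones, not the paper's AMP construction) makes that concrete.

```python
# Toy combination step (not the paper's AMP machinery): given two noisy
# estimates of x, the MSE-optimal linear combination is the least-squares
# regression of x on (x_L, x_s), computable from second moments.
import numpy as np

rng = np.random.default_rng(6)
n = 100_000
x = rng.standard_normal(n)
x_L = 0.8 * x + 0.6 * rng.standard_normal(n)   # stand-in "linear" estimate
x_s = 0.6 * x + 0.8 * rng.standard_normal(n)   # stand-in "spectral" estimate

E = np.stack([x_L, x_s], axis=1)
coef, *_ = np.linalg.lstsq(E, x, rcond=None)   # optimal weights (a, b)
x_comb = E @ coef

for name, est in [("linear", x_L), ("spectral", x_s), ("combined", x_comb)]:
    print(name, np.mean((est - x) ** 2))       # combined has the lowest MSE
```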
arXiv Detail & Related papers (2020-08-07T18:20:05Z)
- Efficient Statistics for Sparse Graphical Models from Truncated Samples [19.205541380535397]
We focus on two fundamental and classical problems: (i) inference of sparse Gaussian graphical models and (ii) support recovery of sparse linear models.
For sparse linear regression, suppose samples $({\bf x},y)$ are generated where $y = {\bf x}^\top\Omega^* + \mathcal{N}(0,1)$ and $({\bf x}, y)$ is seen only if $y$ belongs to a truncation set $S \subseteq \mathbb{R}$.
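Simulating this truncation model is straightforward and shows why specialized estimators are needed: naive least squares on truncated samples is biased. The numpy toy below illustrates the data model only, not the paper's method; the coefficients and truncation set are arbitrary choices.

```python
# Toy truncated-regression data model: (x, y) is observed only when y falls
# in a truncation set S. Naive least squares on the truncated sample is biased.
import numpy as np

rng = np.random.default_rng(7)
d, n = 5, 200_000
omega = np.zeros(d); omega[:2] = [2.0, -1.0]    # sparse coefficients

X = rng.standard_normal((n, d))
y = X @ omega + rng.standard_normal(n)
keep = y > 0.5                                  # truncation set S = (0.5, inf)
X_t, y_t = X[keep], y[keep]

ols_full, *_ = np.linalg.lstsq(X, y, rcond=None)
ols_trunc, *_ = np.linalg.lstsq(X_t, y_t, rcond=None)
print(np.round(ols_full, 2))    # ~ omega
print(np.round(ols_trunc, 2))   # attenuated coefficients: truncation bias
```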
arXiv Detail & Related papers (2020-06-17T09:21:00Z)
- Agnostic Q-learning with Function Approximation in Deterministic Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\mathrm{dim}_E}\right)$, then one can find the optimal policy using $O\left(\mathrm{dim}_E\right)$ trajectories.
arXiv Detail & Related papers (2020-02-17T18:41:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.