Quantifying Weighted Morphological Content of Large-Scale Structures via Simulation-Based Inference
- URL: http://arxiv.org/abs/2511.03636v1
- Date: Wed, 05 Nov 2025 16:54:17 GMT
- Title: Quantifying Weighted Morphological Content of Large-Scale Structures via Simulation-Based Inference
- Authors: M. H. Jalali Kanafi, S. M. S. Movahed
- Abstract summary: We compare the constraining power of two higher-order summary statistics of the large-scale structure (LSS). We find that the Conditional Moments of Derivative (CMD) yields tighter forecasts for $(\Omega_m,\,\sigma_8)$ than the zeroth- to third-order MFs components.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we perform a simulation-based forecasting analysis to compare the constraining power of two higher-order summary statistics of the large-scale structure (LSS), the Minkowski Functionals (MFs) and the Conditional Moments of Derivative (CMD), with a particular focus on their sensitivity to nonlinear and anisotropic features in redshift-space. Our analysis relies on halo catalogs from the Big Sobol Sequence (BSQ) simulations at redshift $z=0.5$, employing a likelihood-free inference framework implemented via neural posterior estimation. At the fiducial cosmology of the Quijote simulations $(\Omega_{m}=0.3175,\,\sigma_{8}=0.834)$, and for the smoothing scale $R=15\,h^{-1}$Mpc, we find that the CMD yields tighter forecasts for $(\Omega_{m},\,\sigma_{8})$ than the zeroth- to third-order MFs components, improving the constraint precision by ${\sim}(44\%,\,52\%)$, ${\sim}(30\%,\,45\%)$, ${\sim}(27\%,\,17\%)$, and ${\sim}(26\%,\,17\%)$, respectively. A joint configuration combining the MFs and CMD further enhances the precision by ${\sim}27\%$ compared to the standard MFs alone, highlighting the complementary anisotropy-sensitive information captured by the CMD in contrast to the scalar morphological content encapsulated by the MFs. We further extend the forecasting analysis to a continuous range of cosmological parameter values and multiple smoothing scales. Our results show that, although the absolute forecast uncertainty for each component of summary statistics depends on the underlying parameter values and the adopted smoothing scale, the relative constraining power among the summary statistics remains nearly constant throughout.
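For context, the four Minkowski Functionals of a 3D excursion set $Q_\nu = \{\mathbf{x} : \delta(\mathbf{x}) \geq \nu\sigma_0\}$ referenced above have the standard integral-geometric form (normalization conventions vary across the literature; the CMD is defined in the authors' earlier work and is not reproduced here):

```latex
V_0(\nu) = \frac{1}{V}\int_{Q_\nu} \mathrm{d}V, \qquad
V_1(\nu) = \frac{1}{6V}\int_{\partial Q_\nu} \mathrm{d}A, \qquad
V_2(\nu) = \frac{1}{6\pi V}\int_{\partial Q_\nu} \frac{\kappa_1+\kappa_2}{2}\,\mathrm{d}A, \qquad
V_3(\nu) = \frac{1}{4\pi V}\int_{\partial Q_\nu} \kappa_1\kappa_2\,\mathrm{d}A,
```

where $\kappa_1,\kappa_2$ are the principal curvatures of the isodensity surface $\partial Q_\nu$.

The neural-posterior-estimation workflow the abstract describes can be sketched with the public `sbi` package. This is a minimal illustration under stated assumptions, not the authors' code: the summary-statistic measurement (MFs/CMD from BSQ halo catalogs at $R=15\,h^{-1}$Mpc) is replaced by a hypothetical `measure_summaries` stub, and the prior ranges are illustrative assumptions.

```python
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# Flat prior over (Omega_m, sigma_8); the ranges below are assumptions.
prior = BoxUniform(low=torch.tensor([0.1, 0.6]), high=torch.tensor([0.5, 1.0]))

def measure_summaries(theta: torch.Tensor) -> torch.Tensor:
    """Hypothetical stand-in for measuring the MF/CMD summary vector on a
    halo catalog simulated at cosmology `theta`; here a toy nonlinear map
    plus noise, so the sketch runs end to end."""
    t = torch.atleast_2d(theta)
    s = torch.cat([t, t[:, :1] * t[:, 1:], t**2], dim=1)
    return (s + 0.01 * torch.randn_like(s)).squeeze(0)

# Simulate training pairs (theta_i, x_i) across the prior.
theta = prior.sample((3000,))
x = torch.stack([measure_summaries(t) for t in theta])

# Train a neural posterior estimator p(theta | x) and build the posterior.
inference = SNPE(prior=prior)
inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()

# Forecast at the Quijote fiducial point: condition on the summaries
# measured there and read off the (Omega_m, sigma_8) credible contours.
x_fid = measure_summaries(torch.tensor([0.3175, 0.834]))
samples = posterior.sample((50_000,), x=x_fid)
print(samples.mean(dim=0), samples.std(dim=0))
```

Swapping which summary vector `x` is fed to the estimator (MFs only, CMD only, or their concatenation) reproduces the paper's comparison between the individual and joint configurations.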
Related papers
- Stability and Generalization of Push-Sum Based Decentralized Optimization over Directed Graphs [55.77845440440496]
Push-based decentralized communication enables optimization over communication networks where information exchange may be asymmetric. We develop a unified uniform-stability framework for the Stochastic Gradient Push (SGP) algorithm. A key technical ingredient is an imbalance-aware generalization bound expressed through two quantities.
arXiv Detail & Related papers (2026-02-24T05:32:03Z) - Latent Object Permanence: Topological Phase Transitions, Free-Energy Principles, and Renormalization Group Flows in Deep Transformer Manifolds [0.5729426778193398]
We study the emergence of multi-step reasoning in deep Transformer language models through a geometric and statistical-physics lens. We formalize the forward pass as a discrete coarse-graining map and relate the appearance of stable "concept basins" to fixed points of this renormalization-like dynamics. The resulting low-entropy regime is characterized by a spectral tail collapse and by the formation of transient, reusable object-like structures in representation space.
arXiv Detail & Related papers (2026-01-16T23:11:02Z) - Machine Learning-Driven Analysis of kSZ Maps to Predict CMB Optical Depth $\tau$ [0.0]
Upcoming measurements of the kinetic Sunyaev-Zel'dovich (kSZ) effect offer a powerful probe of the Epoch of Reionization (EoR). The weak kSZ signal is difficult to extract from CMB observations due to significant contamination from astrophysical foregrounds. We present a machine learning approach to extract $\tau$ from simulated kSZ maps.
arXiv Detail & Related papers (2025-11-06T19:41:16Z) - URDF-Anything: Constructing Articulated Objects with 3D Multimodal Language Model [76.08429266631823]
We propose an end-to-end automatic reconstruction framework based on a 3D multimodal large language model (MLLM). URDF-Anything utilizes an autoregressive prediction framework based on point-cloud and text multimodal input to jointly optimize geometric segmentation and kinematic parameter prediction. Experiments on both simulated and real-world datasets demonstrate that our method significantly outperforms existing approaches.
arXiv Detail & Related papers (2025-11-02T13:45:51Z) - Global Convergence of Gradient EM for Over-Parameterized Gaussian Mixtures [53.51230405648361]
We study the dynamics of gradient EM and employ tensor decomposition to characterize the geometric landscape of the likelihood loss. This is the first global convergence and recovery result for EM or gradient EM beyond the special case of $m=2$.
arXiv Detail & Related papers (2025-06-06T23:32:38Z) - Cosmological Analysis with Calibrated Neural Quantile Estimation and Approximate Simulators [0.0]
We introduce a new Simulation-Based Inference (SBI) method that leverages a large number of approximate simulations for training and a small number of high-fidelity simulations for calibration.
As a proof of concept, we demonstrate that cosmological parameters can be inferred at field level from projected 2-dim dark matter density maps up to $k_{\rm max}\sim1.5\,h$/Mpc at $z=0$.
The calibrated posteriors closely match those obtained by directly training on ${\sim}10^4$ expensive Particle-Particle (PP) simulations, but at a fraction of the computational cost.
arXiv Detail & Related papers (2024-11-22T05:53:46Z) - Beyond likelihood ratio bias: Nested multi-time-scale stochastic approximation for likelihood-free parameter estimation [49.78792404811239]
We study inference in simulation-based models where the analytical form of the likelihood is unknown. We use a ratio-free nested multi-time-scale stochastic approximation (SA) method that simultaneously tracks the score and drives the parameter update. We show that our algorithm can eliminate the original bias $O\big(\sqrt{1/N}\big)$ and accelerate the convergence rate from $O\big(\beta_k+\sqrt{\alpha_k/N}\big)$.
arXiv Detail & Related papers (2024-11-20T02:46:15Z) - Learning large softmax mixtures with warm start EM [17.081578976570437]
Softmax mixture models (SMMs) are discrete $K$-mixtures introduced to model the probability of choosing an $x_j \in \mathbb{R}^L$ from $p$ candidates. This paper provides a comprehensive analysis of the EM algorithm for SMMs in high dimensions.
arXiv Detail & Related papers (2024-09-16T00:14:48Z) - Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
arXiv Detail & Related papers (2021-08-25T21:30:27Z) - Topological Echoes of Primordial Physics in the Universe at Large Scales [0.0]
We compute persistence diagrams and derived statistics for simulations of dark matter halos.
Our pipeline computes persistence in sub-boxes of the full simulations, which are subsampled to a uniform halo number.
arXiv Detail & Related papers (2020-12-07T12:08:55Z) - A Contour Stochastic Gradient Langevin Dynamics Algorithm for Simulations of Multi-modal Distributions [17.14287157979558]
We propose an adaptively weighted stochastic gradient Langevin dynamics (SGLD) algorithm for learning in big data statistics.
The proposed algorithm is tested on benchmark datasets including CIFAR100.
arXiv Detail & Related papers (2020-10-19T19:20:47Z) - A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression, in the realistic setting where the number of data samples $n$, their dimension $p$, and the number of random features $N$ are all large and comparable.
This analysis also provides accurate estimates of training and test regression errors for large $n,p,N$.
arXiv Detail & Related papers (2020-06-09T02:05:40Z) - The Heavy-Tail Phenomenon in SGD [7.366405857677226]
We show that depending on the structure of the Hessian of the loss at the minimum, the SGD iterates will converge to a heavy-tailed stationary distribution.
We translate our results into insights about the behavior of SGD in deep learning.
arXiv Detail & Related papers (2020-06-08T16:43:56Z) - Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP).
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^{5}\varepsilon^{2}} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.