Wavelet Flow For Extragalactic Foreground Simulations
- URL: http://arxiv.org/abs/2505.21220v1
- Date: Tue, 27 May 2025 14:08:28 GMT
- Title: Wavelet Flow For Extragalactic Foreground Simulations
- Authors: M. Mebratu, W. L. K. Wu
- Abstract summary: Extragalactic foregrounds in cosmic microwave background (CMB) observations are a source of cosmological and astrophysical information and a nuisance to the CMB. We explore the use of Wavelet Flow (WF) models to tackle the novel task of modeling the field-level probability distributions of CMB secondaries.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extragalactic foregrounds in cosmic microwave background (CMB) observations are both a source of cosmological and astrophysical information and a nuisance to the CMB. Effective field-level modeling that captures their non-Gaussian statistical distributions is increasingly important for optimal information extraction, particularly given the precise and low-noise observations from current and upcoming experiments. We explore the use of Wavelet Flow (WF) models to tackle the novel task of modeling the field-level probability distributions of multi-component CMB secondaries. Specifically, we jointly train correlated CMB lensing convergence ($\kappa$) and cosmic infrared background (CIB) maps with a WF model and obtain a network that statistically recovers the input to high accuracy -- the trained network generates samples of $\kappa$ and CIB fields whose average power spectra are within a few percent of the inputs across all scales, and whose Minkowski functionals are similarly accurate compared to the inputs. Leveraging the multiscale architecture of these models, we fine-tune both the model parameters and the priors at each scale independently, optimizing performance across different resolutions. These results demonstrate that WF models can accurately simulate correlated components of CMB secondaries, supporting improved analysis of cosmological data. Our code and trained models can be found here (https://github.com/matiwosm/HybridPriorWavletFlow.git).
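To make the multiscale factorization concrete, here is a minimal sketch of the Wavelet Flow idea: decompose a map into wavelet scales and attach an independent generative model to each scale, so that each resolution can be trained and tuned on its own. This is an illustration only, not the authors' implementation (see the linked repository for that); `ScaleFlow` and its unit-Gaussian draw are hypothetical placeholders for a trained per-scale flow and its prior.

```python
# Illustrative-only sketch of the Wavelet Flow factorization; the paper's
# actual models are at https://github.com/matiwosm/HybridPriorWavletFlow.git.
# Requires numpy and PyWavelets (pywt).
import numpy as np
import pywt


class ScaleFlow:
    """Hypothetical per-scale model: here just a unit-Gaussian 'prior' with
    no learned bijection. A real Wavelet Flow would learn a normalizing
    flow for the detail coefficients at each scale, conditioned on the
    coarser map, which is what allows per-scale fine-tuning of priors."""

    def __init__(self, shape):
        self.shape = shape

    def sample(self, rng):
        return rng.standard_normal(self.shape)


def build_flows(example_map, levels=3, wavelet="haar"):
    """One model for the coarsest approximation plus one per detail band."""
    coeffs = pywt.wavedec2(example_map, wavelet, level=levels)
    flows = [ScaleFlow(coeffs[0].shape)]
    for detail in coeffs[1:]:  # (cH, cV, cD) triplets, finest scale last
        flows.append([ScaleFlow(d.shape) for d in detail])
    return flows


def sample_map(flows, wavelet="haar", seed=0):
    """Draw every scale independently, then invert the wavelet transform."""
    rng = np.random.default_rng(seed)
    coeffs = [flows[0].sample(rng)]
    for triplet in flows[1:]:
        coeffs.append(tuple(f.sample(rng) for f in triplet))
    return pywt.waverec2(coeffs, wavelet)


example = np.random.default_rng(1).standard_normal((64, 64))  # stand-in map
flows = build_flows(example)
sample = sample_map(flows)
print(sample.shape)  # (64, 64)
```

Because each scale has its own model and prior, validation statistics such as the power spectra and Minkowski functionals quoted above can be traced back to, and improved at, individual resolutions.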
Related papers
- Efficient Flow Matching using Latent Variables [3.5817637191799605]
We present $\texttt{Latent-CFM}$, which provides simplified training/inference strategies to incorporate multi-modal data structures. We show that $\texttt{Latent-CFM}$ exhibits improved generation quality with significantly less training.
arXiv Detail & Related papers (2025-05-07T14:59:23Z)
- Compact Bayesian Neural Networks via pruned MCMC sampling [0.16777183511743468]
Bayesian Neural Networks (BNNs) offer robust uncertainty quantification in model predictions, but training them presents a significant computational challenge. In this study, we address some of these challenges by leveraging MCMC sampling with network pruning to obtain compact probabilistic models. Through post-pruning resampling, the compact BNN retains its ability to estimate uncertainty via the posterior distribution while preserving training and generalisation accuracy.
arXiv Detail & Related papers (2025-01-12T22:48:04Z)
- An Efficient Hierarchical Preconditioner-Learner Architecture for Reconstructing Multi-scale Basis Functions of High-dimensional Subsurface Fluid Flow [4.303037819686676]
We present an efficient hierarchical preconditioner-learner architecture that reconstructs multi-scale basis functions of high-dimensional subsurface fluid flow.
The proposed FP-HMsNet achieved an MSE of 0.0036, an MAE of 0.0375, and an $R^2$ of 0.9716 on the testing set (standard regression metrics, sketched below), significantly outperforming existing models.
This model offers a novel method for efficient and accurate subsurface fluid flow modeling, with promising potential for more complex real-world applications.
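For reference, the three figures quoted above follow the standard definitions; a minimal sketch, assuming the usual conventions (the paper's own evaluation code is not shown here):

```python
import numpy as np


def regression_metrics(y_true, y_pred):
    """Mean squared error, mean absolute error, and coefficient of
    determination R^2, per their standard definitions."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    residual = y_true - y_pred
    mse = float(np.mean(residual ** 2))
    mae = float(np.mean(np.abs(residual)))
    ss_res = float(np.sum(residual ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    return mse, mae, 1.0 - ss_res / ss_tot
```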
arXiv Detail & Related papers (2024-11-01T09:17:08Z)
- Understanding Reinforcement Learning-Based Fine-Tuning of Diffusion Models: A Tutorial and Review [63.31328039424469]
This tutorial provides a comprehensive survey of methods for fine-tuning diffusion models to optimize downstream reward functions.
We explain the application of various RL algorithms, including PPO, differentiable optimization, reward-weighted MLE, value-weighted sampling, and path consistency learning.
arXiv Detail & Related papers (2024-07-18T17:35:32Z)
- Straggler-resilient Federated Learning: Tackling Computation Heterogeneity with Layer-wise Partial Model Training in Mobile Edge Network [4.1813760301635705]
We propose Federated Partial Model Training (FedPMT), where devices with smaller computational capabilities work on partial models and contribute to the global model.
As such, all devices in FedPMT prioritize the most crucial parts of the global model.
Empirical results show that FedPMT significantly outperforms the existing benchmark FedDrop.
arXiv Detail & Related papers (2023-11-16T16:30:04Z)
- Attention based Dual-Branch Complex Feature Fusion Network for Hyperspectral Image Classification [1.3249509346606658]
The proposed model is evaluated on the Pavia University and Salinas datasets.
Results show that the proposed model outperforms state-of-the-art methods in terms of overall accuracy, average accuracy, and Kappa.
arXiv Detail & Related papers (2023-11-02T22:31:24Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm that leverages both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and federated learning (FL).
We propose a two-stage algorithm to solve the resulting intractable optimization problem, providing closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
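Those three factors map directly onto the quantities inside common GAN metrics. For example, the Fréchet distance between Gaussians fitted to feature sets (the core of FID) depends on which representation space produced the features and which samples were selected; a minimal sketch, assuming the features have already been extracted:

```python
import numpy as np
from scipy.linalg import sqrtm


def frechet_distance(feats_a, feats_b):
    """Frechet distance between Gaussians fitted to two feature sets
    (rows are samples, columns are feature dimensions)."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):  # discard tiny imaginary numerical residue
        covmean = covmean.real
    diff = mu_a - mu_b
    return float(diff @ diff + np.trace(cov_a + cov_b - 2.0 * covmean))
```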
arXiv Detail & Related papers (2023-04-04T17:54:32Z)
- Improving and generalizing flow-based generative models with minibatch optimal transport [90.01613198337833]
We introduce the generalized conditional flow matching (CFM) technique for continuous normalizing flows (CNFs).
CFM features a stable regression objective like that used to train the flow in diffusion models but enjoys the efficient inference of deterministic flow models.
A variant of our objective is optimal transport CFM (OT-CFM), which creates simpler flows that are more stable to train and lead to faster inference.
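As a hedged illustration of that regression objective, here is a minimal independent-coupling CFM training loss in PyTorch (a generic variant, not necessarily the paper's exact objective; `velocity_net` is a hypothetical model taking a noisy sample and a time):

```python
import torch


def cfm_loss(velocity_net, x1):
    """Independent-coupling conditional flow matching step: regress the
    network onto the constant velocity of the straight path from noise
    x0 to data x1."""
    x0 = torch.randn_like(x1)  # source sample: standard Gaussian noise
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)))  # per-sample time
    xt = (1 - t) * x0 + t * x1  # point on the straight-line path
    target = x1 - x0            # velocity of that path (constant in t)
    pred = velocity_net(xt, t.flatten())
    return torch.mean((pred - target) ** 2)
```

OT-CFM then replaces the independent pairing of x0 and x1 with a minibatch optimal transport coupling, which is what straightens the flows and speeds up inference.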
arXiv Detail & Related papers (2023-02-01T14:47:17Z)
- A new perspective on probabilistic image modeling [92.89846887298852]
We present a new probabilistic approach for image modeling capable of density estimation, sampling and tractable inference.
Deep Convolutional Gaussian Mixture Models (DCGMMs) can be trained end-to-end by SGD from random initial conditions, much like CNNs.
We show that DCGMMs compare favorably to several recent probabilistic circuit (PC) and sum-product network (SPN) models in terms of inference, classification and sampling.
arXiv Detail & Related papers (2022-03-21T14:53:57Z)
- Estimating permeability of 3D micro-CT images by physics-informed CNNs based on DNS [1.6274397329511197]
This paper presents a novel methodology for permeability prediction from micro-CT scans of geological rock samples.
The training data for CNNs dedicated to permeability prediction typically consists of permeability labels generated by classical lattice Boltzmann methods (LBM).
We instead perform direct numerical simulation (DNS) by solving the stationary Stokes equation in an efficient and distributed-parallel manner.
arXiv Detail & Related papers (2021-09-04T08:43:19Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv Detail & Related papers (2020-03-10T03:10:41Z)