ScoreNF: Score-based Normalizing Flows for Sampling Unnormalized Distributions
- URL: http://arxiv.org/abs/2510.21330v1
- Date: Fri, 24 Oct 2025 10:43:19 GMT
- Title: ScoreNF: Score-based Normalizing Flows for Sampling Unnormalized Distributions
- Authors: Vikas Kanaujia, Vipul Arora
- Abstract summary: We propose ScoreNF, a score-based learning framework built on the Normalizing Flow architecture. We show that ScoreNF maintains high performance even with small training ensembles. We also present a method for assessing mode-covering and mode-collapse behaviours.
- Score: 5.204468049641428
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unnormalized probability distributions are central to modeling complex physical systems across various scientific domains. Traditional sampling methods, such as Markov Chain Monte Carlo (MCMC), often suffer from slow convergence, critical slowing down, poor mode mixing, and high autocorrelation. In contrast, likelihood-based and adversarial machine learning models, though effective, are heavily data-driven, requiring large datasets and often encountering mode covering and mode collapse. In this work, we propose ScoreNF, a score-based learning framework built on the Normalizing Flow (NF) architecture, integrated with an Independent Metropolis-Hastings (IMH) module, enabling efficient and unbiased sampling from unnormalized target distributions. We show that ScoreNF maintains high performance even with small training ensembles, thereby reducing reliance on computationally expensive MCMC-generated training data. We also present a method for assessing mode-covering and mode-collapse behaviours. We validate our method on synthetic 2D distributions (MOG-4 and MOG-8) and the high-dimensional $\phi^4$ lattice field theory distribution, demonstrating its effectiveness for sampling tasks.
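To make the sampling loop described in the abstract concrete, below is a minimal sketch of one Independent Metropolis-Hastings step driven by a normalizing-flow proposal. The names `log_target`, `nf_sample`, and `nf_log_prob` are hypothetical placeholders, not ScoreNF's actual API, and the score-based training of the flow itself is not shown.

```python
import numpy as np

def imh_step(x, log_w_x, log_target, nf_sample, nf_log_prob, rng):
    """One Independent Metropolis-Hastings (IMH) step with a flow proposal.

    log_target  -- unnormalized log-density of the target (placeholder)
    nf_sample   -- draws one sample from the trained flow (placeholder)
    nf_log_prob -- exact log-density of the flow (placeholder)
    log_w_x     -- cached log importance weight of the current state,
                   i.e. log_target(x) - nf_log_prob(x)
    """
    y = nf_sample(rng)                       # proposal is independent of x
    log_w_y = log_target(y) - nf_log_prob(y)
    # For an independent proposal q, the MH acceptance ratio
    # p(y) q(x) / (p(x) q(y)) reduces to the weight ratio w(y) / w(x).
    if np.log(rng.uniform()) < log_w_y - log_w_x:
        return y, log_w_y, True              # accept the proposal
    return x, log_w_x, False                 # reject; the chain stays put
```

The pairing works because a normalizing flow exposes an exact proposal log-density through its change-of-variables formula, so the weight ratio above is computable; the MH correction then removes the bias left by an imperfect flow.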
Related papers
- Modular MeanFlow: Towards Stable and Scalable One-Step Generative Modeling [0.07646713951724012]
One-step generative modeling seeks to generate high-quality data samples in a single function evaluation. In this work, we introduce Modular MeanFlow, a flexible and theoretically grounded approach for learning time-averaged velocity fields.
arXiv Detail & Related papers (2025-08-24T16:00:08Z)
- Sampling by averaging: A multiscale approach to score estimation [2.012425476229879]
We introduce a novel framework for efficient sampling from complex, unnormalised target distributions by exploiting multiscale dynamics. Two algorithms are developed, MultALMC and MultCDiff, based on multiscale controlled diffusions for the reverse-time Ornstein-Uhlenbeck process. The framework is extended to handle heavy-tailed target distributions using Student's t-based noise models and tailored fast-process dynamics.
arXiv Detail & Related papers (2025-08-20T21:09:34Z)
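For context on the entry above, the reverse-time Ornstein-Uhlenbeck dynamics it refers to take the standard time-reversal form below; this is the generic textbook identity, not an equation quoted from the paper. The score $\nabla \log p_t$ (with $p_t$ the marginal law of the forward process) is the quantity the multiscale estimators approximate.

```latex
% Forward OU process on [0, T] and its time reversal (standard form)
dX_t = -X_t\,dt + \sqrt{2}\,dW_t,
\qquad
dY_t = \bigl[\,Y_t + 2\,\nabla \log p_{T-t}(Y_t)\,\bigr]\,dt + \sqrt{2}\,d\bar{W}_t .
```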
- Importance Weighted Score Matching for Diffusion Samplers with Enhanced Mode Coverage [16.94974733994214]
Prevailing methods often circumvent the lack of target data by optimizing reverse KL-based objectives. We propose a principled approach for training diffusion-based samplers by directly targeting an objective analogous to the forward KL divergence. Our approach consistently outperforms existing neural samplers across all distributional distance metrics.
arXiv Detail & Related papers (2025-05-26T02:48:26Z)
- Mitigating mode collapse in normalizing flows by annealing with an adaptive schedule: Application to parameter estimation [0.6258471240250307]
We show that an adaptive schedule based on the effective sample size (ESS) can mitigate mode collapse. We demonstrate that our approach can converge the marginal likelihood for a biochemical oscillator model fit to time-series data in ten-fold less time than a widely used ensemble Markov chain Monte Carlo method.
arXiv Detail & Related papers (2025-05-06T15:58:48Z)
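The ESS diagnostic in the entry above is standard and compact enough to state directly; here is a minimal sketch assuming log importance weights between target and sampler. How the paper converts ESS into an adaptive annealing schedule is its contribution and is not reproduced here.

```python
import numpy as np

def effective_sample_size(log_w):
    """Standard ESS of importance weights, computed stably in log space.

    log_w -- array of log weights, e.g. log_target(x_i) - log_model(x_i).
    Returns a value in (0, N]; an ESS far below N means a few samples
    dominate the weights, the usual symptom of mode collapse.
    """
    log_w = np.asarray(log_w) - np.max(log_w)  # shift to avoid overflow
    w = np.exp(log_w)
    return np.sum(w) ** 2 / np.sum(w ** 2)
```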
- AdvNF: Reducing Mode Collapse in Conditional Normalising Flows using Adversarial Learning [1.644043499620662]
Explicit generators, such as Normalising Flows (NFs), have been extensively applied to obtain unbiased samples from target distributions.
We study central problems in conditional NFs, such as high variance, mode collapse and data efficiency.
We propose adversarial training for NFs to ameliorate these problems.
arXiv Detail & Related papers (2024-01-29T08:13:51Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- PaDiM: a Patch Distribution Modeling Framework for Anomaly Detection and Localization [64.39761523935613]
We present a new framework for Patch Distribution Modeling, PaDiM, to concurrently detect and localize anomalies in images.
PaDiM makes use of a pretrained convolutional neural network (CNN) for patch embedding.
It also exploits correlations between the different semantic levels of CNN to better localize anomalies.
arXiv Detail & Related papers (2020-11-17T17:29:18Z)
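The summary above omits PaDiM's actual scoring rule: the full paper fits a Gaussian to the embeddings observed at each patch position and flags anomalies by Mahalanobis distance. The sketch below shows only that scoring step, under assumed array shapes; extracting the embeddings from the pretrained CNN is omitted.

```python
import numpy as np

def padim_scores(train_emb, test_emb, eps=1e-2):
    """Per-position Mahalanobis anomaly scores in the spirit of PaDiM.

    train_emb -- (N, H, W, D) patch embeddings from N normal images
    test_emb  -- (H, W, D) patch embeddings of one test image
    Fits a Gaussian at every (h, w) position and scores test patches by
    Mahalanobis distance; large scores mark likely anomalous regions.
    """
    N, H, W, D = train_emb.shape
    scores = np.empty((H, W))
    for h in range(H):
        for w in range(W):
            x = train_emb[:, h, w, :]                        # (N, D)
            mu = x.mean(axis=0)
            cov = np.cov(x, rowvar=False) + eps * np.eye(D)  # regularized
            diff = test_emb[h, w] - mu
            scores[h, w] = np.sqrt(diff @ np.linalg.solve(cov, diff))
    return scores
```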
- Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores). For AR-CSM models, the divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
arXiv Detail & Related papers (2020-10-24T07:01:24Z)
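To unpack the entry above: an autoregressive model factorizes the joint density over dimensions, and AR-CSM parameterizes the derivative of each univariate log-conditional rather than the conditionals themselves, so no normalizing constants are needed. In assumed notation (not quoted from the paper):

```latex
% Autoregressive factorization and the univariate conditional scores
\log p(x) \;=\; \sum_{d=1}^{D} \log p(x_d \mid x_{<d}),
\qquad
s_d(x_{\le d}) \;=\; \frac{\partial}{\partial x_d} \log p(x_d \mid x_{<d}) .
```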
- Robust Finite Mixture Regression for Heterogeneous Targets [70.19798470463378]
We propose an FMR model that finds sample clusters and jointly models multiple incomplete mixed-type targets.
We provide non-asymptotic oracle performance bounds for our model under a high-dimensional learning framework.
The results show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2020-10-12T03:27:07Z)
- MMCGAN: Generative Adversarial Network with Explicit Manifold Prior [78.58159882218378]
We propose to employ explicit manifold learning as a prior to alleviate mode collapse and stabilize the training of GANs.
Our experiments on both the toy data and real datasets show the effectiveness of MMCGAN in alleviating mode collapse, stabilizing training, and improving the quality of generated samples.
arXiv Detail & Related papers (2020-06-18T07:38:54Z)
- Learning Likelihoods with Conditional Normalizing Flows [54.60456010771409]
Conditional normalizing flows (CNFs) are efficient in sampling and inference.
We present a study of CNFs where the base density to output space mapping is conditioned on an input $x$, to model conditional densities $p(y|x)$.
arXiv Detail & Related papers (2019-11-29T19:17:58Z)
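For reference, the conditional change-of-variables identity behind the entry above, written for an invertible map $f_\theta(\cdot\,; x)$ from base noise $z \sim p_Z$ to outputs $y$; the notation is assumed, not quoted from the paper.

```latex
% Conditional density of a CNF via the change-of-variables formula
\log p(y \mid x) \;=\; \log p_Z\!\bigl(f_\theta^{-1}(y; x)\bigr)
  \;+\; \log \left| \det \frac{\partial f_\theta^{-1}(y; x)}{\partial y} \right| .
```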