E-QRGMM: Efficient Generative Metamodeling for Covariate-Dependent Uncertainty Quantification
- URL: http://arxiv.org/abs/2601.19256v1
- Date: Tue, 27 Jan 2026 06:39:24 GMT
- Title: E-QRGMM: Efficient Generative Metamodeling for Covariate-Dependent Uncertainty Quantification
- Authors: Zhiyang Liang, Qingkai Zhang,
- Abstract summary: Efficient Quantile-Regression-Based Generative Metamodeling (E-QRGMM) is a novel framework that accelerates the quantile-regression-based generative metamodeling (QRGMM) approach. We show that E-QRGMM preserves the convergence rate of the original QRGMM while reducing grid complexity from $O(n^{1/2})$ to $O(n^{1/5})$ for the majority of quantile levels. E-QRGMM achieves a superior trade-off between distributional accuracy and training speed compared to both QRGMM and other advanced deep generative models.
- Score: 0.5371337604556311
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Covariate-dependent uncertainty quantification in simulation-based inference is crucial for high-stakes decision-making but remains challenging due to the limitations of existing methods such as conformal prediction and classical bootstrap, which struggle with covariate-specific conditioning. We propose Efficient Quantile-Regression-Based Generative Metamodeling (E-QRGMM), a novel framework that accelerates the quantile-regression-based generative metamodeling (QRGMM) approach by integrating cubic Hermite interpolation with gradient estimation. Theoretically, we show that E-QRGMM preserves the convergence rate of the original QRGMM while reducing grid complexity from $O(n^{1/2})$ to $O(n^{1/5})$ for the majority of quantile levels, thereby substantially improving computational efficiency. Empirically, E-QRGMM achieves a superior trade-off between distributional accuracy and training speed compared to both QRGMM and other advanced deep generative models on synthetic and practical datasets. Moreover, by enabling bootstrap-based construction of confidence intervals for arbitrary estimands of interest, E-QRGMM provides a practical solution for covariate-dependent uncertainty quantification.
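The abstract describes the core mechanism: fit quantile regressions on a coarse grid of levels, interpolate the conditional quantile function with cubic Hermite interpolation using gradient estimates, then sample by the inverse-transform method. The sketch below is a minimal illustration of that pipeline on a toy Gaussian simulation model; it is not the authors' implementation, and the linear quantile-regression fit, grid size, and finite-difference gradient estimate are all simplifying assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulator": Y | X=x ~ Normal(2x, (1+x)^2), with covariate x in [0, 1].
n = 2000
x = rng.uniform(0.0, 1.0, n)
y = 2 * x + (1 + x) * rng.standard_normal(n)

def fit_quantile_reg(x, y, tau, iters=2000, lr=0.05):
    """Linear quantile regression b0 + b1*x via subgradient descent on the pinball loss."""
    X = np.column_stack([np.ones_like(x), x])
    b = np.zeros(2)
    for _ in range(iters):
        r = y - X @ b
        # Subgradient of the pinball loss: -(tau - 1[r < 0]) per observation.
        b -= lr * (-X.T @ (tau - (r < 0)) / len(y))
    return b

# Coarse quantile grid -- the point of E-QRGMM is that far fewer
# levels suffice than in the original QRGMM's dense grid.
taus = np.linspace(0.05, 0.95, 7)
betas = np.array([fit_quantile_reg(x, y, t) for t in taus])

def hermite_eval(tg, qg, dq, u):
    """Cubic Hermite interpolation of the quantile curve q(tau) at levels u."""
    i = np.clip(np.searchsorted(tg, u) - 1, 0, len(tg) - 2)
    h = tg[i + 1] - tg[i]
    t = (u - tg[i]) / h
    h00 = 2 * t**3 - 3 * t**2 + 1
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * qg[i] + h10 * h * dq[i] + h01 * qg[i + 1] + h11 * h * dq[i + 1]

def sample_conditional(x0, m=5000):
    """Generate m draws from the metamodel's conditional distribution at X = x0."""
    qg = betas[:, 0] + betas[:, 1] * x0   # estimated quantiles at grid levels
    dq = np.gradient(qg, taus)            # crude gradient estimate (finite differences)
    u = rng.uniform(taus[0], taus[-1], m) # inverse-transform sampling
    return hermite_eval(taus, qg, dq, u)

samples = sample_conditional(0.5)
```

At `x0 = 0.5` the true conditional law is Normal(1, 1.5^2), so the generated samples should center near 1; resampling `sample_conditional` over bootstrap refits of `betas` is how one would then build the covariate-dependent confidence intervals the abstract mentions.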
Related papers
- Quantization-Aware Collaborative Inference for Large Embodied AI Models [67.66340659245186]
Large artificial intelligence models (LAIMs) are increasingly regarded as a core intelligence engine for embodied AI applications. To address this issue, we investigate quantization-aware collaborative inference (co-inference) for embodied AI systems.
arXiv Detail & Related papers (2026-02-13T16:08:19Z) - Fast Model Selection and Stable Optimization for Softmax-Gated Multinomial-Logistic Mixture of Experts Models [40.216463162163976]
We develop a batch minorization-maximization algorithm for softmax-gated multinomial-logistic MoE. We also prove finite-sample rates for conditional density estimation and parameter recovery. Experiments on biological protein--protein interaction prediction validate the full pipeline.
arXiv Detail & Related papers (2026-02-08T14:45:41Z) - Quantum solver for single-impurity Anderson models with particle-hole symmetry [1.4222334190789556]
A central computational bottleneck in DMFT is solving the Anderson impurity model (AIM). We develop and benchmark a quantum-classical hybrid solver tailored for DMFT applications. We evaluate the performance of this approach across a few bath sizes and interaction strengths under noisy, shot-limited conditions.
arXiv Detail & Related papers (2026-01-15T17:02:34Z) - Model-Based Reinforcement Learning in Discrete-Action Non-Markovian Reward Decision Processes [46.91576262410701]
We present a novel model-based algorithm for discrete NMRDPs that factorizes Markovian transition learning from non-Markovian reward handling via reward machines. We experimentally compare our method with modern state-of-the-art model-based RL approaches on environments of increasing complexity.
arXiv Detail & Related papers (2025-12-16T17:26:24Z) - Robust Iterative Learning Hidden Quantum Markov Models [0.7493761475572844]
Hidden Quantum Markov Models (HQMMs) extend classical Hidden Markov Models to the quantum domain. Existing HQMM learning algorithms are sensitive to data corruption and lack mechanisms to ensure robustness under adversarial perturbations. We introduce the Adversarially Corrupted HQMM, which formalizes robustness analysis by allowing a controlled fraction of observation sequences to be adversarially corrupted.
arXiv Detail & Related papers (2025-10-27T11:48:44Z) - Neural Optimal Transport Meets Multivariate Conformal Prediction [58.43397908730771]
We propose a framework for conditional vector quantile regression (CVQR). CVQR combines neural optimal transport with quantile optimization and applies it to multivariate conformal prediction.
arXiv Detail & Related papers (2025-09-29T19:50:19Z) - WSM: Decay-Free Learning Rate Schedule via Checkpoint Merging for LLM Pre-training [64.0932926819307]
We present Warmup-Stable and Merge (WSM), a framework that establishes a formal connection between learning rate decay and model merging. WSM provides a unified theoretical foundation for emulating various decay strategies. Our framework consistently outperforms the widely-adopted Warmup-Stable-Decay (WSD) approach across multiple benchmarks.
arXiv Detail & Related papers (2025-07-23T16:02:06Z) - MPQ-DMv2: Flexible Residual Mixed Precision Quantization for Low-Bit Diffusion Models with Temporal Distillation [74.34220141721231]
We present MPQ-DMv2, an improved Mixed Precision Quantization framework for extremely low-bit Diffusion Models.
arXiv Detail & Related papers (2025-07-06T08:16:50Z) - Conditional Mean and Variance Estimation via k-NN Algorithm with Automated Variance Selection [9.943131787772323]
We introduce a novel k-nearest neighbor (k-NN) regression method for joint estimation of the conditional mean and variance. The proposed algorithm preserves the computational efficiency and manifold-learning capabilities of classical non-parametric k-NN models.
arXiv Detail & Related papers (2024-02-02T18:54:18Z) - Simulation-Based Inference with Quantile Regression [0.0]
We present Neural Quantile Estimation (NQE), a novel Simulation-Based Inference (SBI) method based on conditional quantile regression.
NQE autoregressively learns individual one-dimensional quantiles for each posterior dimension, conditioned on the data and previous posterior dimensions.
We demonstrate NQE achieves state-of-the-art performance on a variety of benchmark problems.
arXiv Detail & Related papers (2024-01-04T18:53:50Z) - Information Theoretic Structured Generative Modeling [13.117829542251188]
A novel generative model framework called the structured generative model (SGM) is proposed that makes straightforward optimization possible.
The implementation employs a single neural network driven by an orthonormal input to a single white noise source adapted to learn an infinite Gaussian mixture model.
Preliminary results show that SGM significantly improves MINE estimation in terms of data efficiency and variance, outperforms conventional and variational Gaussian mixture models, and is also effective for training adversarial networks.
arXiv Detail & Related papers (2021-10-12T07:44:18Z) - Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs.
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z) - Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning [66.18202188565922]
We propose a communication-efficient decentralized machine learning (ML) algorithm, coined quantized group ADMM (Q-GADMM). We develop a novel quantization method to adaptively adjust quantization levels and their probabilities, while proving the convergence of Q-GADMM for convex functions.
arXiv Detail & Related papers (2019-10-23T10:47:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.