Stochastic Interpolants in Hilbert Spaces
- URL: http://arxiv.org/abs/2602.01988v1
- Date: Mon, 02 Feb 2026 11:44:34 GMT
- Title: Stochastic Interpolants in Hilbert Spaces
- Authors: James Boran Yu, RuiKang OuYang, Julien Horwood, José Miguel Hernández-Lobato
- Abstract summary: Stochastic interpolants offer a flexible way to bridge arbitrary distributions. This paper establishes a rigorous framework for stochastic interpolants in infinite-dimensional Hilbert spaces. We demonstrate the effectiveness of the proposed framework for conditional generation.
- Score: 22.77471216660321
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although diffusion models have successfully extended to function-valued data, stochastic interpolants -- which offer a flexible way to bridge arbitrary distributions -- remain limited to finite-dimensional settings. This work bridges this gap by establishing a rigorous framework for stochastic interpolants in infinite-dimensional Hilbert spaces. We provide comprehensive theoretical foundations, including proofs of well-posedness and explicit error bounds. We demonstrate the effectiveness of the proposed framework for conditional generation, focusing particularly on complex PDE-based benchmarks. By enabling generative bridges between arbitrary functional distributions, our approach achieves state-of-the-art results, offering a powerful, general-purpose tool for scientific discovery.
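As a complement to the abstract, the following is a minimal finite-dimensional sketch of the stochastic interpolant construction the paper generalizes: a bridge between samples from two distributions with a noise term that vanishes at both endpoints. The schedule and the `gamma` coefficient are illustrative assumptions, not the paper's specific choices; in the Hilbert-space setting the Gaussian noise would be replaced by a function-valued process with trace-class covariance.

```python
import numpy as np

def stochastic_interpolant(x0, x1, t, gamma=0.5, rng=None):
    """One time slice of a simple stochastic interpolant.

    Bridges samples x0 ~ rho_0 and x1 ~ rho_1 via
        x_t = (1 - t) * x0 + t * x1 + gamma * sqrt(t * (1 - t)) * z,
    with z standard Gaussian noise. The sqrt(t * (1 - t)) factor
    pins the bridge to x0 at t = 0 and to x1 at t = 1.
    """
    rng = rng or np.random.default_rng(0)
    z = rng.standard_normal(np.shape(x0))
    return (1.0 - t) * x0 + t * x1 + gamma * np.sqrt(t * (1.0 - t)) * z

# The endpoints are recovered exactly at t = 0 and t = 1.
x0 = np.zeros(4)
x1 = np.ones(4)
print(stochastic_interpolant(x0, x1, 0.0))  # -> [0. 0. 0. 0.]
print(stochastic_interpolant(x0, x1, 1.0))  # -> [1. 1. 1. 1.]
```

In practice a generative model is trained to match the drift of this interpolant; the sketch above only illustrates the bridging path itself.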
Related papers
- SetPO: Set-Level Policy Optimization for Diversity-Preserving LLM Reasoning [50.93295951454092]
We introduce a set-level diversity objective defined over sampled trajectories using kernelized similarity. Our approach derives a leave-one-out marginal contribution for each sampled trajectory and integrates this objective as a plug-in advantage-shaping term for policy optimization. Experiments across a range of model scales demonstrate the effectiveness of our proposed algorithm, consistently outperforming strong baselines in both Pass@1 and Pass@K across various benchmarks.
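The leave-one-out marginal contribution described in this summary can be sketched concretely. The following is a hypothetical illustration, not the paper's implementation: it scores a set's diversity as the negative mean pairwise RBF similarity between trajectory embeddings, and assigns each trajectory the change in set diversity caused by including it.

```python
import numpy as np

def rbf_kernel(X, bandwidth=1.0):
    # Pairwise RBF similarities between trajectory embeddings (rows of X).
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

def leave_one_out_diversity(X):
    """Marginal diversity contribution of each sampled trajectory.

    Set diversity is the negative mean pairwise similarity; trajectory i's
    contribution is diversity(full set) - diversity(set without i).
    """
    n = X.shape[0]
    K = rbf_kernel(X)

    def diversity(idx):
        sub = K[np.ix_(idx, idx)]
        m = len(idx)
        off_diag_mean = (sub.sum() - np.trace(sub)) / (m * (m - 1))
        return -off_diag_mean

    full = diversity(list(range(n)))
    return np.array([full - diversity([j for j in range(n) if j != i])
                     for i in range(n)])

# Two near-duplicate trajectories and one outlier: the outlier
# contributes most to set diversity.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
contrib = leave_one_out_diversity(X)
print(np.argmax(contrib))  # -> 2
```

Such per-trajectory scores could then be added to the policy-gradient advantage, rewarding samples that make the set more diverse.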
arXiv Detail & Related papers (2026-02-01T07:13:20Z) - Functional Adjoint Sampler: Scalable Sampling on Infinite Dimensional Spaces [22.412483650808728]
We present an optimal control-based diffusion sampler for infinite-dimensional function spaces. We show that it achieves superior transition path sampling performance across synthetic potential and real molecular systems.
arXiv Detail & Related papers (2025-11-09T05:51:03Z) - Graph-based Clustering Revisited: A Relaxation of Kernel $k$-Means Perspective [73.18641268511318]
We propose a graph-based clustering algorithm that relaxes only the orthonormal constraint to derive clustering results. To make the doubly stochastic constraint amenable to gradient-based optimization, we transform the non-negative constraint into a class probability parameter.
arXiv Detail & Related papers (2025-09-23T09:14:39Z) - Discrete Markov Bridge [93.64996843697278]
We propose a novel framework specifically designed for discrete representation learning, called Discrete Markov Bridge. Our approach is built upon two key components: Matrix Learning and Score Learning.
arXiv Detail & Related papers (2025-05-26T09:32:12Z) - Guided Diffusion Sampling on Function Spaces with Applications to PDEs [112.09025802445329]
We propose a general framework for conditional sampling in PDE-based inverse problems. This is accomplished by a function-space diffusion model and plug-and-play guidance for conditioning. Our method achieves an average 32% accuracy improvement over state-of-the-art fixed-resolution diffusion baselines.
arXiv Detail & Related papers (2025-05-22T17:58:12Z) - Blessing of Dimensionality for Approximating Sobolev Classes on Manifolds [14.183849746284816]
We consider optimal uniform approximations with functions of finite statistical complexity. In particular, we demonstrate that the statistical complexity required to approximate a class of bounded Sobolev functions on a compact manifold is bounded from below.
arXiv Detail & Related papers (2024-08-13T15:56:42Z) - A Unified Theory of Stochastic Proximal Point Methods without Smoothness [52.30944052987393]
Proximal point methods have attracted considerable interest owing to their numerical stability and robustness against imperfect tuning.
This paper presents a comprehensive analysis of a broad range of variations of the stochastic proximal point method (SPPM).
arXiv Detail & Related papers (2024-05-24T21:09:19Z) - Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Reflected Schr\"odinger Bridge for Constrained Generative Modeling [16.72888494254555]
Reflected diffusion models have become the go-to method for large-scale generative models in real-world applications.
We introduce the Reflected Schrödinger Bridge algorithm: an entropy-regularized optimal transport approach tailored to generating data within diverse bounded domains.
Our algorithm yields robust generative modeling in diverse domains, and its scalability is demonstrated in real-world constrained generative modeling through standard image benchmarks.
arXiv Detail & Related papers (2024-01-06T14:39:58Z) - PAC-Chernoff Bounds: Understanding Generalization in the Interpolation Regime [6.645111950779666]
This paper introduces a distribution-dependent PAC-Chernoff bound that exhibits perfect tightness for interpolators. We present a unified theoretical framework revealing why certain interpolators exhibit exceptional generalization while others falter.
arXiv Detail & Related papers (2023-06-19T14:07:10Z) - Risk Bounds for Learning via Hilbert Coresets [1.0312968200748116]
We explicitly compute tight and meaningful bounds for complex hypothesis classes.
We develop a formalism for constructing upper bounds on the expected full sample risk for supervised classification tasks.
arXiv Detail & Related papers (2021-03-29T12:39:48Z) - Learning Inconsistent Preferences with Gaussian Processes [14.64963271587818]
We revisit the widely used preferential Gaussian processes of Chu et al. (2005) and challenge their modelling assumption that imposes rankability of data items via latent utility function values.
We propose a generalisation of preferential Gaussian processes which can capture more expressive latent preferential structures in the data.
Our experimental findings support the conjecture that violations of rankability are ubiquitous in real-world preferential data.
arXiv Detail & Related papers (2020-06-06T11:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.