Stochastic Process Learning via Operator Flow Matching
- URL: http://arxiv.org/abs/2501.04126v2
- Date: Thu, 09 Jan 2025 02:20:28 GMT
- Title: Stochastic Process Learning via Operator Flow Matching
- Authors: Yaozhong Shi, Zachary E. Ross, Domniki Asimaki, Kamyar Azizzadenesheli
- Abstract summary: We develop operator flow matching (OFM) for learning process priors on function spaces.
OFM provides the probability density of function values at any finite collection of points.
Our method outperforms state-of-the-art models in process learning, functional regression, and prior learning.
- Abstract: Expanding on neural operators, we propose a novel framework for stochastic process learning across arbitrary domains. In particular, we develop operator flow matching (OFM) for learning stochastic process priors on function spaces. OFM provides the probability density of the values of any collection of points and enables mathematically tractable functional regression at new points with mean and density estimation. Our method outperforms state-of-the-art models in stochastic process learning, functional regression, and prior learning.
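The flow-matching objective that OFM builds on can be illustrated with a minimal sketch. The snippet below is a hypothetical numpy toy, not the paper's method: OFM works with neural operators on function spaces, whereas here a "function" is reduced to its values at a few sampled points and the velocity model is a deliberately crude placeholder. The loss follows the standard conditional flow matching recipe: interpolate linearly between a noise sample and a data sample, and regress the model's velocity onto the path's constant velocity.

```python
import numpy as np

rng = np.random.default_rng(0)

def cfm_loss(v_model, x0, x1, t):
    """Conditional flow matching loss with linear (optimal-transport) paths.
    x0: reference noise samples, x1: data samples (function values at
    sampled domain points), t: times in [0, 1] with shape (batch, 1)."""
    xt = (1.0 - t) * x0 + t * x1   # point on the straight path x0 -> x1
    target = x1 - x0               # constant velocity of that path
    pred = v_model(xt, t)
    return float(np.mean((pred - target) ** 2))

# Toy setup: each "function" is its values at n_pts sampled points.
n_pts, batch = 8, 64
x1 = rng.normal(1.0, 0.1, size=(batch, n_pts))  # samples from a toy process
x0 = rng.normal(0.0, 1.0, size=(batch, n_pts))  # reference Gaussian noise
t = rng.uniform(size=(batch, 1))

# Placeholder velocity model: predicts a constant field everywhere.
const_v = np.ones(n_pts)
loss = cfm_loss(lambda xt, t: const_v, x0, x1, t)
```

In a real implementation the constant predictor would be replaced by a trainable (neural-operator) velocity field minimised over many such batches.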
Related papers
- Probabilities-Informed Machine Learning [0.0]
This study introduces an ML paradigm inspired by domain knowledge of the structure of the output function, akin to physics-informed ML.
The proposed approach integrates the probabilistic structure of the target variable into the training process.
It enhances model accuracy and mitigates risks of overfitting and underfitting.
arXiv Detail & Related papers (2024-12-16T08:01:22Z)
- Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets)
arXiv Detail & Related papers (2023-10-04T09:39:05Z)
- Likelihood-based inference and forecasting for trawl processes: a stochastic optimization approach [0.0]
We develop the first likelihood-based methodology for the inference of real-valued trawl processes.
We introduce novel deterministic and probabilistic forecasting methods.
We release a Python library which can be used to fit a large class of trawl processes.
arXiv Detail & Related papers (2023-08-30T15:37:48Z)
- Value-Distributional Model-Based Reinforcement Learning [59.758009422067]
Quantifying uncertainty about a policy's long-term performance is important to solve sequential decision-making tasks.
We study the problem from a model-based Bayesian reinforcement learning perspective.
We propose Epistemic Quantile-Regression (EQR), a model-based algorithm that learns a value distribution function.
arXiv Detail & Related papers (2023-08-12T14:59:19Z)
- Distributional GFlowNets with Quantile Flows [73.73721901056662]
Generative Flow Networks (GFlowNets) are a new family of probabilistic samplers where an agent learns a policy for generating complex structure through a series of decision-making steps.
In this work, we adopt a distributional paradigm for GFlowNets, turning each flow function into a distribution, thus providing more informative learning signals during training.
Our proposed quantile matching GFlowNet learning algorithm is able to learn a risk-sensitive policy, an essential component for handling scenarios with risk uncertainty.
arXiv Detail & Related papers (2023-02-11T22:06:17Z)
- Modeling Temporal Data as Continuous Functions with Stochastic Process Diffusion [2.2849153854336763]
Temporal data can be viewed as discretized measurements of an underlying function.
To build a generative model for such data, we have to model the stochastic process that governs it.
We propose a solution by defining the denoising diffusion model in the function space.
arXiv Detail & Related papers (2022-11-04T17:02:01Z)
- MARS: Meta-Learning as Score Matching in the Function Space [79.73213540203389]
We present a novel approach to extracting inductive biases from a set of related datasets.
We use functional Bayesian neural network inference, which views the prior as a process and performs inference in the function space.
Our approach can seamlessly acquire and represent complex prior knowledge by meta-learning the score function of the data-generating process.
arXiv Detail & Related papers (2022-10-24T15:14:26Z)
- An active learning approach for improving the performance of equilibrium-based chemical simulations [0.0]
In this paper, we propose a novel sequential data-driven method for equilibrium-based chemical simulations.
The proposed method sequentially chooses the most relevant input data at which the function to estimate has to be evaluated to build a surrogate model.
Our active learning method is validated through numerical experiments and applied to a complex chemical system commonly used in geoscience.
arXiv Detail & Related papers (2021-10-15T14:17:28Z)
- On Contrastive Representations of Stochastic Processes [53.21653429290478]
Learning representations of stochastic processes is an emerging problem in machine learning.
We show that our methods are effective for learning representations of periodic functions, 3D objects and dynamical processes.
arXiv Detail & Related papers (2021-06-18T11:00:24Z)
- UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions at each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
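The basis-function construction summarised above can be sketched in a few lines. This is a hypothetical toy, not the paper's model: in UNIPoint the coefficients would be produced by an RNN from the event history, whereas here they are fixed made-up values; only the general shape (a positive intensity built from a sum of basis functions of the time since the last event) is illustrated.

```python
import numpy as np

def softplus(x):
    """Smooth positive transform: log(1 + exp(x)) > 0 for all finite x."""
    return np.log1p(np.exp(x))

def intensity(tau, coeffs):
    """Toy intensity lambda(tau) = softplus(sum_k a_k * exp(-b_k * tau)),
    where tau is the time since the last event. The softplus guarantees a
    valid (strictly positive) intensity regardless of the coefficients."""
    s = sum(a * np.exp(-b * tau) for a, b in coeffs)
    return float(softplus(s))

# Hypothetical (a_k, b_k) pairs standing in for RNN outputs.
coeffs = [(0.5, 1.0), (-0.2, 0.3), (1.1, 2.0)]
lam = intensity(0.4, coeffs)
```

Because softplus maps any real sum to a positive value, the intensity stays valid even when individual basis coefficients are negative.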
This list is automatically generated from the titles and abstracts of the papers in this site.