Edit-Based Flow Matching for Temporal Point Processes
- URL: http://arxiv.org/abs/2510.06050v2
- Date: Wed, 08 Oct 2025 10:51:35 GMT
- Title: Edit-Based Flow Matching for Temporal Point Processes
- Authors: David Lüdke, Marten Lienen, Marcel Kollovieh, Stephan Günnemann
- Abstract summary: Temporal point processes (TPPs) are a fundamental tool for modeling event sequences in continuous time. Recent non-autoregressive, diffusion-style models mitigate the limitations of sequential autoregressive sampling by jointly interpolating between noise and data. We introduce an Edit Flow process for TPPs that transports noise to data via insert, delete, and substitute edit operations.
- Score: 51.33476564706644
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Temporal point processes (TPPs) are a fundamental tool for modeling event sequences in continuous time, but most existing approaches rely on autoregressive parameterizations that are limited by their sequential sampling. Recent non-autoregressive, diffusion-style models mitigate these issues by jointly interpolating between noise and data through event insertions and deletions in a discrete Markov chain. In this work, we generalize this perspective and introduce an Edit Flow process for TPPs that transports noise to data via insert, delete, and substitute edit operations. By learning the instantaneous edit rates within a continuous-time Markov chain framework, we attain a flexible and efficient model that effectively reduces the total number of necessary edit operations during generation. Empirical results demonstrate the generative flexibility of our unconditionally trained model in a wide range of unconditional and conditional generation tasks on benchmark TPPs.
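As an illustrative sketch only, not the paper's learned model: one Gillespie-style step of a continuous-time Markov chain that edits an event sequence via insert, delete, and substitute operations might look as follows. The rate functions are stand-in placeholders; in the paper they are learned, time-dependent quantities.

```python
import math
import random

def insert_rate(seq, pos):
    # Placeholder: uniform rate for inserting a new event at slot `pos`.
    return 1.0

def delete_rate(seq, idx):
    # Placeholder: uniform rate for deleting the event at index `idx`.
    return 0.5

def substitute_rate(seq, idx):
    # Placeholder: uniform rate for resampling the event at index `idx`.
    return 0.5

def ctmc_edit_step(seq, rng=random):
    """Sample one edit (insert/delete/substitute) and its holding time."""
    ops = []
    for pos in range(len(seq) + 1):            # insertion slots
        ops.append(("insert", pos, insert_rate(seq, pos)))
    for idx in range(len(seq)):                # per-event delete/substitute
        ops.append(("delete", idx, delete_rate(seq, idx)))
        ops.append(("substitute", idx, substitute_rate(seq, idx)))

    total = sum(r for _, _, r in ops)
    dt = rng.expovariate(total)                # exponential holding time
    # Choose an edit proportionally to its rate.
    u, acc = rng.random() * total, 0.0
    for op, loc, rate in ops:
        acc += rate
        if u <= acc:
            break

    seq = list(seq)
    if op == "insert":
        seq.insert(loc, rng.random())          # new event time in [0, 1)
    elif op == "delete":
        del seq[loc]
    else:                                      # substitute: resample the event
        seq[loc] = rng.random()
    return sorted(seq), dt
```

Iterating such steps transports one sequence toward another; the paper's contribution is learning the edit rates so that this transport moves noise to data with few total edits.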
Related papers
- Mixture of Distributions Matters: Dynamic Sparse Attention for Efficient Video Diffusion Transformers [13.366686736005699]
We present MOD-DiT, a sampling-free dynamic attention framework. It accurately models evolving attention patterns through a two-stage process. It overcomes the computational limitations of traditional sparse attention approaches.
arXiv Detail & Related papers (2026-01-14T16:25:39Z)
- UniDiff: A Unified Diffusion Framework for Multimodal Time Series Forecasting [90.47915032778366]
We propose UniDiff, a unified diffusion framework for multimodal time series forecasting. At its core lies a unified and parallel fusion module, where a single cross-attention mechanism integrates structural information from timestamps and semantic context from texts. Experiments on real-world benchmark datasets across eight domains demonstrate that the proposed UniDiff model achieves state-of-the-art performance.
arXiv Detail & Related papers (2025-12-08T05:36:14Z)
- Speculative Sampling for Parametric Temporal Point Processes [9.15731236208975]
Temporal point processes are powerful generative models for event sequences. They are commonly specified using autoregressive models that learn the distribution of the next event from the previous events. We propose a novel algorithm based on rejection sampling that enables exact sampling of multiple future values from existing TPP models.
arXiv Detail & Related papers (2025-10-22T21:20:26Z)
- Edit Flows: Flow Matching with Edit Operations [25.751427330260128]
Edit Flows is a non-autoregressive model that defines a discrete flow over sequences through edit operations: insertions, deletions, and substitutions. By modeling these operations within a continuous-time Markov chain over the sequence space, Edit Flows enable flexible, position-relative generation.
arXiv Detail & Related papers (2025-06-10T17:44:19Z)
- EventFlow: Forecasting Temporal Point Processes with Flow Matching [12.976042923229466]
Temporal point processes are commonly modeled in an autoregressive fashion using neural networks. We propose EventFlow, a non-autoregressive generative model for temporal point processes.
arXiv Detail & Related papers (2024-10-09T20:57:00Z)
- Add and Thin: Diffusion for Temporal Point Processes [24.4686728569167]
ADD-THIN is a principled probabilistic denoising diffusion model for temporal point processes (TPPs).
It operates on entire event sequences.
In experiments on synthetic and real-world datasets, it matches state-of-the-art TPP models in density estimation and strongly outperforms them in forecasting.
arXiv Detail & Related papers (2023-11-02T10:42:35Z)
- Amortizing intractable inference in large language models [56.92471123778389]
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
arXiv Detail & Related papers (2023-10-06T16:36:08Z)
- AdaMerging: Adaptive Model Merging for Multi-Task Learning [68.75885518081357]
This paper introduces a technique called Adaptive Model Merging (AdaMerging).
It autonomously learns the coefficients for model merging, either task-wise or layer-wise, without relying on the original training data.
Compared to the current state-of-the-art task arithmetic merging scheme, AdaMerging showcases a remarkable 11% improvement in performance.
arXiv Detail & Related papers (2023-10-04T04:26:33Z)
- Latent Autoregressive Source Separation [5.871054749661012]
This paper introduces vector-quantized Latent Autoregressive Source Separation, which de-mixes an input signal into its constituent sources without requiring additional gradient-based optimization or modifications of existing models.
Our separation method relies on the Bayesian formulation in which the autoregressive models are the priors, and a discrete (non-parametric) likelihood function is constructed by performing frequency counts over latent sums of addend tokens.
arXiv Detail & Related papers (2023-01-09T17:32:00Z)
- DiffusER: Discrete Diffusion via Edit-based Reconstruction [88.62707047517914]
DiffusER is an edit-based generative model for text based on denoising diffusion models.
It can rival autoregressive models on several tasks spanning machine translation, summarization, and style transfer.
It can also perform other varieties of generation that standard autoregressive models are not well-suited for.
arXiv Detail & Related papers (2022-10-30T16:55:23Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
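Several entries above (Speculative Sampling, ADD-THIN) build on classical point-process sampling by thinning. As a generic, hedged sketch, not any of these papers' specific algorithms, Ogata-style thinning draws the next event of a TPP with intensity `lam(t)` bounded above by `lam_max` via rejection sampling:

```python
import math
import random

def sample_next_event(lam, lam_max, t0, rng=random):
    """Exact next-event time after t0 via thinning (rejection sampling).

    Requires lam(t) <= lam_max for all t >= t0.
    """
    t = t0
    while True:
        t += rng.expovariate(lam_max)          # propose from a rate-lam_max Poisson process
        if rng.random() <= lam(t) / lam_max:   # accept with probability lam(t)/lam_max
            return t

# Example: an oscillating intensity bounded by lam_max = 2.5.
t = sample_next_event(lambda s: 1.5 + math.cos(s), lam_max=2.5, t0=0.0)
```

Accepted proposals are distributed exactly as events of the target process; the speculative-sampling paper's contribution is extending this idea to draw multiple future events from learned autoregressive TPP models.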
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.