SplitMeanFlow: Interval Splitting Consistency in Few-Step Generative Modeling
- URL: http://arxiv.org/abs/2507.16884v1
- Date: Tue, 22 Jul 2025 16:26:58 GMT
- Title: SplitMeanFlow: Interval Splitting Consistency in Few-Step Generative Modeling
- Authors: Yi Guo, Wei Wang, Zhihang Yuan, Rong Cao, Kuan Chen, Zhengyang Chen, Yuanyuan Huo, Yang Zhang, Yuping Wang, Shouda Liu, Yuxuan Wang
- Abstract summary: Generative models like Flow Matching have achieved state-of-the-art performance but are often hindered by a computationally expensive iterative sampling process. MeanFlow, a leading method in this area, learns the average velocity field by enforcing a differential identity that connects the average and instantaneous velocities. In this work, we argue that this differential formulation is a limiting special case of a more fundamental principle. We introduce SplitMeanFlow, a new training framework that enforces this algebraic consistency directly as a learning objective.
- Score: 23.539625950964876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models like Flow Matching have achieved state-of-the-art performance but are often hindered by a computationally expensive iterative sampling process. To address this, recent work has focused on few-step or one-step generation by learning the average velocity field, which directly maps noise to data. MeanFlow, a leading method in this area, learns this field by enforcing a differential identity that connects the average and instantaneous velocities. In this work, we argue that this differential formulation is a limiting special case of a more fundamental principle. We return to the first principles of average velocity and leverage the additivity property of definite integrals. This leads us to derive a novel, purely algebraic identity we term Interval Splitting Consistency. This identity establishes a self-referential relationship for the average velocity field across different time intervals without resorting to any differential operators. Based on this principle, we introduce SplitMeanFlow, a new training framework that enforces this algebraic consistency directly as a learning objective. We formally prove that the differential identity at the core of MeanFlow is recovered by taking the limit of our algebraic consistency as the interval split becomes infinitesimal. This establishes SplitMeanFlow as a direct and more general foundation for learning average velocity fields. From a practical standpoint, our algebraic approach is significantly more efficient, as it eliminates the need for JVP computations, resulting in simpler implementation, more stable training, and broader hardware compatibility. One-step and two-step SplitMeanFlow models have been successfully deployed in large-scale speech synthesis products (such as Doubao), achieving speedups of 20x.
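The Interval Splitting Consistency the abstract describes rests on additivity of definite integrals: the displacement over an interval equals the sum of displacements over its two halves. A minimal numerical sketch of that identity, using a toy 1-D ODE with a closed-form trajectory rather than the paper's trained model (the function names and the quadrature scheme are illustrative assumptions):

```python
import math

# Toy check of interval-splitting consistency for average velocities.
# For the 1-D ODE z' = v(z, t) = -z, the average velocity over [a, b] is
#   u(a, b) = (1 / (b - a)) * integral_a^b v(z(tau), tau) dtau,
# and additivity of the definite integral gives, for any s in (r, t):
#   (t - r) * u(r, t) = (s - r) * u(r, s) + (t - s) * u(s, t).

def avg_velocity(z_start, a, b, n=20_000):
    """Average velocity of z' = -z over [a, b], by trapezoidal quadrature."""
    h = (b - a) / (n - 1)
    total = 0.0
    for i in range(n):
        tau = a + i * h
        z = z_start * math.exp(-(tau - a))   # closed-form trajectory z(tau)
        v = -z                               # instantaneous velocity along it
        weight = 0.5 if i in (0, n - 1) else 1.0
        total += weight * v
    return total * h / (b - a)

r, s, t = 0.0, 0.3, 1.0
z_r = 1.0
z_s = z_r * math.exp(-(s - r))               # state at the split point s

lhs = (t - r) * avg_velocity(z_r, r, t)
rhs = (s - r) * avg_velocity(z_r, r, s) + (t - s) * avg_velocity(z_s, s, t)
assert abs(lhs - rhs) < 1e-6                 # displacements over the split agree
```

Because the identity is purely algebraic, enforcing it as a training loss needs only forward evaluations of the network at the three times, with no Jacobian-vector products; this is the efficiency argument the abstract makes.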
Related papers
- FlowConsist: Make Your Flow Consistent with Real Trajectory [99.22869983378062]
We argue that current fast-flow training paradigms suffer from two fundamental issues. Conditional velocities constructed from randomly paired noise-data samples introduce systematic trajectory drift. We propose FlowConsist, a training framework designed to enforce trajectory consistency in fast flows.
arXiv Detail & Related papers (2026-02-06T03:24:23Z) - Generative Modeling with Continuous Flows: Sample Complexity of Flow Matching [60.37045080890305]
We provide the first analysis of the sample complexity for flow-matching based generative models. We decompose the velocity field estimation error into neural-network approximation error, statistical error due to the finite sample size, and optimization error due to the finite number of optimization steps for estimating the velocity field.
arXiv Detail & Related papers (2025-12-01T05:14:25Z) - Flow Straighter and Faster: Efficient One-Step Generative Modeling via MeanFlow on Rectified Trajectories [14.36205662558203]
Rectified MeanFlow is a framework that models the mean velocity field along the rectified trajectory using only a single reflow step. Experiments on ImageNet at 64, 256, and 512 resolutions show that Re-MeanFlow consistently outperforms prior one-step flow distillation and Rectified Flow methods in both sample quality and training efficiency.
arXiv Detail & Related papers (2025-11-28T16:50:08Z) - Understanding, Accelerating, and Improving MeanFlow Training [64.84964628592418]
MeanFlow promises high-quality generative modeling in few steps by jointly learning instantaneous and average velocity fields. We analyze the interaction between the two velocities and find that a well-established instantaneous velocity is a prerequisite for learning average velocity. We design an effective training scheme that accelerates the formation of instantaneous velocity, then shifts emphasis from short- to long-interval average velocity.
arXiv Detail & Related papers (2025-11-24T12:59:27Z) - MeanFlowSE: one-step generative speech enhancement via conditional mean flow [13.437825847370442]
MeanFlowSE is a conditional generative model that learns the average velocity over finite intervals along a trajectory. On VoiceBank-DEMAND, the single-step model achieves strong intelligibility, fidelity, and perceptual quality with substantially lower computational cost than multistep baselines.
arXiv Detail & Related papers (2025-09-18T11:24:47Z) - Towards High-Order Mean Flow Generative Models: Feasibility, Expressivity, and Provably Efficient Criteria [14.37317011636007]
We introduce a theoretical study on Second-Order MeanFlow, a novel extension that incorporates average acceleration fields into the MeanFlow objective. We first establish the feasibility of our approach by proving that the average acceleration satisfies a generalized consistency condition analogous to first-order MeanFlow. We then characterize its expressivity via circuit complexity analysis, showing that the Second-Order MeanFlow sampling process can be implemented by uniform threshold circuits.
arXiv Detail & Related papers (2025-08-09T21:10:58Z) - Contrastive Flow Matching [61.60002028726023]
We introduce Contrastive Flow Matching, an extension to the flow matching objective that explicitly enforces uniqueness across all conditional flows. Our approach adds a contrastive objective that maximizes dissimilarities between predicted flows from arbitrary sample pairs. We find that training models with Contrastive Flow Matching (1) improves training speed by a factor of up to 9x, (2) requires up to 5x fewer denoising steps, and (3) lowers FID by up to 8.9 compared to training the same models with flow matching.
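A minimal sketch of how such a contrastive term could be combined with the standard flow-matching regression loss. The weighting `lam` and the negative-pair construction here are illustrative assumptions, not the paper's exact objective:

```python
def fm_contrastive_loss(v_pred, target_pos, target_neg, lam=0.05):
    """Flow-matching MSE toward the true conditional velocity, minus a
    weighted MSE toward the conditional velocity of an arbitrary other
    noise-data pair (a sketch; `lam` and the pairing scheme are
    illustrative assumptions, not the paper's specification)."""
    def mse(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    # Attract the prediction to its own pair's flow, repel it from another's.
    return mse(v_pred, target_pos) - lam * mse(v_pred, target_neg)

# Toy usage: predictions near the positive target score lower (better).
pos = [1.0, 0.0]
neg = [0.0, 1.0]
good = fm_contrastive_loss([0.9, 0.1], pos, neg)
bad = fm_contrastive_loss([0.1, 0.9], pos, neg)
assert good < bad
```

The subtracted term is what distinguishes this from plain flow matching: it penalizes predictions that collapse onto flows belonging to other sample pairs.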
arXiv Detail & Related papers (2025-06-05T17:59:58Z) - Mean Flows for One-step Generative Modeling [64.4997821467102]
We propose a principled and effective framework for one-step generative modeling. A well-defined identity between average and instantaneous velocities is derived and used to guide neural network training. Our method, termed the MeanFlow model, is self-contained and requires no pre-training, distillation, or curriculum learning.
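For reference, the identity this entry alludes to can be sketched from the definition of the average velocity $u(z_t, r, t) = \frac{1}{t-r}\int_r^t v(z_\tau, \tau)\,\mathrm{d}\tau$ (a reconstruction from the abstracts above, not a quotation of the paper): differentiating $(t-r)\,u(z_t, r, t) = \int_r^t v(z_\tau, \tau)\,\mathrm{d}\tau$ with respect to $t$ gives

```latex
% Product rule on the left, fundamental theorem of calculus on the right:
u(z_t, r, t) + (t - r)\,\frac{\mathrm{d}}{\mathrm{d}t}\, u(z_t, r, t) = v(z_t, t)
\quad\Longrightarrow\quad
u(z_t, r, t) = v(z_t, t) - (t - r)\,\frac{\mathrm{d}}{\mathrm{d}t}\, u(z_t, r, t).
```

The total derivative on the right is what requires JVP computations in practice; the SplitMeanFlow abstract above argues this identity is the infinitesimal limit of the algebraic splitting consistency.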
arXiv Detail & Related papers (2025-05-19T17:59:42Z) - ProReflow: Progressive Reflow with Decomposed Velocity [52.249464542399636]
Flow matching aims to reflow the diffusion process of diffusion models into a straight line for few-step and even one-step generation. We introduce progressive reflow, which progressively reflows the diffusion models in local timesteps until the entire diffusion process has been reflowed. We also introduce aligned v-prediction, which highlights the importance of direction matching in flow matching over magnitude matching.
arXiv Detail & Related papers (2025-03-05T04:50:53Z) - Towards Hierarchical Rectified Flow [18.976435451156824]
We formulate a hierarchical rectified flow to model data distributions. It empirically defines a time-differentiable process that generates a data distribution from a known source distribution. We verify this on synthetic 1D and 2D data as well as MNIST, CIFAR-10, and ImageNet-32 data.
arXiv Detail & Related papers (2025-02-24T18:59:55Z) - Phase-aware Training Schedule Simplifies Learning in Flow-Based Generative Models [0.1534667887016089]
We analyze the training of a two-layer autoencoder used to parameterize a flow-based generative model. We find that the autoencoder representing the velocity field learns to simplify by estimating only the parameters relevant to each phase.
arXiv Detail & Related papers (2024-12-10T23:21:04Z) - Consistency Flow Matching: Defining Straight Flows with Velocity Consistency [97.28511135503176]
We introduce Consistency Flow Matching (Consistency-FM), a novel FM method that explicitly enforces self-consistency in the velocity field.
Preliminary experiments demonstrate that our Consistency-FM significantly improves training efficiency by converging 4.4x faster than consistency models.
arXiv Detail & Related papers (2024-07-02T16:15:37Z) - Improving Consistency Models with Generator-Augmented Flows [16.049476783301724]
Consistency models imitate the multi-step sampling of score-based diffusion in a single forward pass of a neural network. They can be learned in two ways: consistency distillation and consistency training. We propose a novel flow that transports noisy data towards their corresponding outputs derived from a consistency model.
arXiv Detail & Related papers (2024-06-13T20:22:38Z) - Nesterov Accelerated ADMM for Fast Diffeomorphic Image Registration [63.15453821022452]
Recent developments in approaches based on deep learning have achieved sub-second runtimes for DiffIR.
We propose a simple iterative scheme that functionally composes intermediate non-stationary velocity fields.
We then propose a convex optimisation model that uses a regularisation term of arbitrary order to impose smoothness on these velocity fields.
arXiv Detail & Related papers (2021-09-26T19:56:45Z) - Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
arXiv Detail & Related papers (2020-11-14T09:51:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.