ArcFlow: Unleashing 2-Step Text-to-Image Generation via High-Precision Non-Linear Flow Distillation
- URL: http://arxiv.org/abs/2602.09014v1
- Date: Mon, 09 Feb 2026 18:56:14 GMT
- Title: ArcFlow: Unleashing 2-Step Text-to-Image Generation via High-Precision Non-Linear Flow Distillation
- Authors: Zihan Yang, Shuyuan Tu, Licheng Zhang, Qi Dai, Yu-Gang Jiang, Zuxuan Wu,
- Abstract summary: Diffusion models suffer from significant inference cost due to their reliance on sequential denoising steps. ArcFlow is a few-step distillation framework that explicitly employs non-linear flow trajectories to approximate pre-trained teacher trajectories. It achieves a 40x speedup with 2 NFEs over the original multi-step teachers without significant quality degradation.
- Score: 87.54456066636811
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models have achieved remarkable generation quality, but they suffer from significant inference cost due to their reliance on multiple sequential denoising steps, motivating recent efforts to distill this inference process into a few-step regime. However, existing distillation methods typically approximate the teacher trajectory by using linear shortcuts, which makes it difficult to match its constantly changing tangent directions as velocities evolve across timesteps, thereby leading to quality degradation. To address this limitation, we propose ArcFlow, a few-step distillation framework that explicitly employs non-linear flow trajectories to approximate pre-trained teacher trajectories. Concretely, ArcFlow parameterizes the velocity field underlying the inference trajectory as a mixture of continuous momentum processes. This enables ArcFlow to capture velocity evolution and extrapolate coherent velocities to form a continuous non-linear trajectory within each denoising step. Importantly, this parameterization admits an analytical integration of this non-linear trajectory, which circumvents numerical discretization errors and results in a high-precision approximation of the teacher trajectory. To train this parameterization into a few-step generator, we implement ArcFlow via trajectory distillation on pre-trained teacher models using lightweight adapters. This strategy ensures fast, stable convergence while preserving generative diversity and quality. Built on large-scale models (Qwen-Image-20B and FLUX.1-dev), ArcFlow fine-tunes less than 5% of the original parameters and achieves a 40x speedup with 2 NFEs over the original multi-step teachers without significant quality degradation. Experiments on benchmarks show the effectiveness of ArcFlow both qualitatively and quantitatively.
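The abstract's core claim is that a velocity field built from momentum-like processes can be integrated in closed form, avoiding the discretization error of linear shortcuts. The following is a toy 1-D sketch of that idea, not the paper's actual parameterization: each hypothetical mixture component has velocity a_k + b_k*exp(-g_k*t), whose time integral is analytic, and a coarse Euler rule (like a 2-NFE linear shortcut) is compared against it.

```python
import numpy as np

# Toy 1-D illustration (hypothetical values, not ArcFlow's real model):
# within one denoising step, velocity is a weighted mixture of momentum
# processes v_k(t) = a_k + b_k * exp(-g_k * t).  The displacement
# integral of each component has a closed form, so the curved trajectory
# can be integrated analytically instead of via Euler steps.

a = np.array([0.5, -0.2])   # asymptotic velocities
b = np.array([1.0, 0.8])    # initial velocity offsets
g = np.array([2.0, 5.0])    # momentum decay rates
w = np.array([0.6, 0.4])    # mixture weights

def velocity(t):
    return float(w @ (a + b * np.exp(-g * t)))

def analytic_displacement(T):
    # ∫_0^T v(t) dt = Σ_k w_k [a_k T + b_k (1 - e^{-g_k T}) / g_k]
    return float(w @ (a * T + b * (1.0 - np.exp(-g * T)) / g))

def euler_displacement(T, n_steps):
    # forward-Euler approximation of the same integral
    dt = T / n_steps
    return sum(velocity(i * dt) * dt for i in range(n_steps))

exact = analytic_displacement(1.0)
coarse = euler_displacement(1.0, 2)     # few-step linear shortcut
fine = euler_displacement(1.0, 1000)    # many-step reference
print(exact, coarse, fine)
```

The coarse 2-step Euler estimate carries a visible discretization error, while the closed-form integral is exact by construction; this is the gap the abstract's "analytical integration" argument targets.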
Related papers
- Look-Ahead and Look-Back Flows: Training-Free Image Generation with Trajectory Smoothing [3.77130368225397]
Various training-free flow matching approaches have been developed to improve image generation through flow velocity field adjustment. We propose two training-free trajectory smoothing schemes: Look-Ahead, which averages the current and next-step latents using a curvature-gated weight, and Look-Back, which smooths latents using an exponential moving average with decay.
arXiv Detail & Related papers (2026-02-10T06:34:47Z) - FlowConsist: Make Your Flow Consistent with Real Trajectory [99.22869983378062]
We argue that current fast-flow training paradigms suffer from two fundamental issues. Conditional velocities constructed from randomly paired noise-data samples introduce systematic trajectory drift. We propose FlowConsist, a training framework designed to enforce trajectory consistency in fast flows.
arXiv Detail & Related papers (2026-02-06T03:24:23Z) - Flow Straighter and Faster: Efficient One-Step Generative Modeling via MeanFlow on Rectified Trajectories [14.36205662558203]
Rectified MeanFlow is a framework that models the mean velocity field along the rectified trajectory using only a single reflow step. Experiments on ImageNet at 64, 256, and 512 resolutions show that Re-MeanFlow consistently outperforms prior one-step flow distillation and Rectified Flow methods in both sample quality and training efficiency.
arXiv Detail & Related papers (2025-11-28T16:50:08Z) - FlowSteer: Guiding Few-Step Image Synthesis with Authentic Trajectories [82.90132015584359]
ReFlow has theoretical consistency with flow matching but suboptimal performance in practical scenarios. We propose FlowSteer, a method that unlocks the potential of ReFlow-based distillation by guiding the student along the teacher's authentic generation trajectories.
arXiv Detail & Related papers (2025-11-24T07:13:23Z) - Few-step Flow for 3D Generation via Marginal-Data Transport Distillation [104.76254102015794]
We propose a novel framework, MDT-dist, for few-step 3D flow distillation. Our approach is built upon a primary objective: distilling the pretrained model to learn the Marginal-Data Transport. Our method reduces the sampling steps of each flow transformer from 25 to 1 or 2, achieving 0.68s (1 step x 2) and 0.94s (2 steps x 2) latency with 9.0x and 6.5x speedups on an A800.
arXiv Detail & Related papers (2025-09-04T17:24:31Z) - Nesterov Method for Asynchronous Pipeline Parallel Optimization [59.79227116582264]
We introduce a variant of Nesterov Accelerated Gradient (NAG) for asynchronous optimization in Pipeline Parallelism. Specifically, we modify the look-ahead step in NAG to effectively address staleness in gradients. We theoretically prove that our approach converges at a sublinear rate in the presence of fixed delay in gradients.
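For reference, standard (synchronous) NAG evaluates the gradient at a look-ahead point before the momentum update; the paper modifies this look-ahead step to handle stale gradients, which is not reproduced in this sketch. A minimal baseline on a quadratic loss, with hypothetical hyperparameters:

```python
import numpy as np

# Standard Nesterov Accelerated Gradient on a toy quadratic
# L(theta) = 0.5 * ||theta - target||^2.  This is the synchronous
# baseline only; the paper's asynchronous variant alters the
# look-ahead step to compensate for gradient staleness.

target = np.array([3.0, -1.0])

def grad(theta):
    return theta - target  # gradient of the quadratic loss

theta = np.zeros(2)
v = np.zeros(2)
mu, lr = 0.9, 0.1          # momentum and learning rate (illustrative)
for _ in range(200):
    lookahead = theta + mu * v          # NAG's look-ahead point
    v = mu * v - lr * grad(lookahead)   # momentum update at look-ahead
    theta = theta + v

print(theta)  # converges toward [3.0, -1.0]
```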
arXiv Detail & Related papers (2025-05-02T08:23:29Z) - HyperFlow: Gradient-Free Emulation of Few-Shot Fine-Tuning [20.308785668386424]
We propose an approach that emulates gradient descent without computing gradients, enabling efficient test-time adaptation. Specifically, we formulate gradient descent as an Euler discretization of an ordinary differential equation (ODE) and train an auxiliary network to predict the task-conditional drift. Adaptation then reduces to simple numerical integration, which requires only a few forward passes of the auxiliary network.
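The gradient-flow view in this summary is easy to make concrete: gradient descent on L(theta) is the Euler discretization of d(theta)/dt = -grad L(theta). In the sketch below the learned drift network is replaced by the true gradient of a toy quadratic loss (an assumption for illustration, not HyperFlow's auxiliary model):

```python
import numpy as np

# Gradient descent as Euler integration of an ODE.  The drift()
# function stands in for a learned task-conditional drift network;
# here it is just -grad of L(theta) = 0.5 * ||theta - target||^2.

target = np.array([1.0, -2.0])

def drift(theta):
    return -(theta - target)  # placeholder for the auxiliary network

def integrate(theta0, step, n_steps):
    theta = theta0.copy()
    for _ in range(n_steps):
        theta = theta + step * drift(theta)  # one Euler step == one GD step
    return theta

theta = integrate(np.zeros(2), step=0.1, n_steps=50)
print(theta)  # approaches the target [1.0, -2.0]
```

With a drift predictor in place of the analytic gradient, the same loop becomes a few forward passes rather than backpropagation, which is the efficiency argument the summary makes.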
arXiv Detail & Related papers (2025-04-21T03:04:38Z) - RayFlow: Instance-Aware Diffusion Acceleration via Adaptive Flow Trajectories [17.934379261227388]
Existing acceleration methods compromise sample quality or controllability, or introduce training complexities. We propose RayFlow, a novel diffusion framework that addresses these limitations. Extensive experiments demonstrate RayFlow's superiority in generating high-quality images with improved speed, control, and training efficiency.
arXiv Detail & Related papers (2025-03-10T17:20:52Z) - SCoT: Unifying Consistency Models and Rectified Flows via Straight-Consistent Trajectories [31.60548236936739]
We propose a Straight Consistent Trajectory (SCoT) model for pre-trained diffusion models. SCoT enjoys the benefits of both approaches for fast sampling, producing trajectories with consistent and straight properties simultaneously.
arXiv Detail & Related papers (2025-02-24T08:57:19Z) - Optimal Flow Matching: Learning Straight Trajectories in Just One Step [89.37027530300617]
We develop and theoretically justify the novel Optimal Flow Matching (OFM) approach.
It allows recovering the straight OT displacement for the quadratic transport cost in just one FM step.
The main idea of our approach is the employment of vector fields for FM that are parameterized by convex functions.
arXiv Detail & Related papers (2024-03-19T19:44:54Z) - Deep Equilibrium Optical Flow Estimation [80.80992684796566]
Recent state-of-the-art (SOTA) optical flow models use finite-step recurrent update operations to emulate traditional algorithms.
These RNNs impose large computation and memory overheads, and are not directly trained to model such stable estimation.
We propose deep equilibrium (DEQ) flow estimators, an approach that directly solves for the flow as the infinite-level fixed point of an implicit layer.
arXiv Detail & Related papers (2022-04-18T17:53:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.