RayFlow: Instance-Aware Diffusion Acceleration via Adaptive Flow Trajectories
- URL: http://arxiv.org/abs/2503.07699v2
- Date: Tue, 25 Mar 2025 06:11:23 GMT
- Title: RayFlow: Instance-Aware Diffusion Acceleration via Adaptive Flow Trajectories
- Authors: Huiyang Shao, Xin Xia, Yuhong Yang, Yuxi Ren, Xing Wang, Xuefeng Xiao,
- Abstract summary: Existing acceleration methods compromise sample quality or controllability, or introduce training complexity. We propose RayFlow, a novel diffusion framework that addresses these limitations. Extensive experiments demonstrate RayFlow's superiority in generating high-quality images with improved speed, control, and training efficiency.
- Score: 17.934379261227388
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models have achieved remarkable success across various domains. However, their slow generation speed remains a critical challenge. Existing acceleration methods, while aiming to reduce steps, often compromise sample quality or controllability, or introduce additional training complexity. We therefore propose RayFlow, a novel diffusion framework that addresses these limitations. Unlike previous methods, RayFlow guides each sample along a unique path towards an instance-specific target distribution. This minimizes the number of sampling steps while preserving generation diversity and stability. Furthermore, we introduce Time Sampler, an importance sampling technique that improves training efficiency by focusing on crucial timesteps. Extensive experiments demonstrate RayFlow's superiority over existing acceleration techniques in generating high-quality images with improved speed, control, and training efficiency.
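The abstract describes Time Sampler only at a high level. As a rough illustration of importance-sampling diffusion timesteps by their contribution to the training loss, the following minimal PyTorch sketch keeps a running per-timestep loss estimate, draws timesteps proportionally to it, and reweights the loss to stay unbiased; the class name, the EMA estimator, and the uniform-mixing floor are assumptions for illustration, not the authors' implementation.

```python
import torch

class TimestepImportanceSampler:
    """Illustrative importance sampler over diffusion timesteps.

    Keeps an exponential moving average of the per-timestep loss and draws
    timesteps proportionally to it, reweighting each example's loss by the
    inverse sampling probability so the objective stays unbiased.
    (Hypothetical sketch; not the RayFlow Time Sampler itself.)
    """

    def __init__(self, num_timesteps: int, ema: float = 0.99):
        self.ema = ema
        self.loss_hist = torch.ones(num_timesteps)  # optimistic initialization

    def probs(self) -> torch.Tensor:
        p = self.loss_hist / self.loss_hist.sum()
        # Mix with a uniform floor so no timestep is starved of updates.
        return 0.9 * p + 0.1 / len(self.loss_hist)

    def sample(self, batch_size: int) -> tuple[torch.Tensor, torch.Tensor]:
        p = self.probs()
        t = torch.multinomial(p, batch_size, replacement=True)
        # Importance weights 1 / (N * p_t) recover the uniform-timestep objective.
        w = 1.0 / (len(p) * p[t])
        return t, w

    def update(self, t: torch.Tensor, losses: torch.Tensor) -> None:
        # Track the observed per-timestep losses with an EMA.
        for ti, li in zip(t.tolist(), losses.detach().tolist()):
            self.loss_hist[ti] = self.ema * self.loss_hist[ti] + (1 - self.ema) * li
```

During training one would call `sample` to pick the batch timesteps, scale each example's denoising loss by the returned weight, and feed the unweighted per-timestep losses back through `update`.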
Related papers
- Analyzing and Improving Fast Sampling of Text-to-Image Diffusion Models [32.70019265781621]
Text-to-image diffusion models have achieved unprecedented success but still struggle to produce high-quality images under limited sampling budgets. We propose the constant total rotation schedule (TORS), a scheduling strategy that ensures uniform geometric variation along the sampling trajectory. TORS outperforms previous training-free acceleration methods and produces high-quality images with 10 sampling steps on Flux.1-Dev and Stable Diffusion 3.5.
arXiv Detail & Related papers (2026-02-28T18:09:44Z)
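The TORS summary does not spell out what "uniform geometric variation" means. One common geometric view of a diffusion or flow schedule is the angle phi_t = arctan(sigma_t / alpha_t) between the signal and noise coefficients, and a "constant rotation" schedule under that reading would space sampling timesteps uniformly in this angle. The sketch below shows only that reading; it is an assumption, not necessarily TORS as defined in the paper.

```python
import numpy as np

def uniform_rotation_timesteps(alphas: np.ndarray, sigmas: np.ndarray, num_steps: int) -> np.ndarray:
    """Pick timestep indices approximately equally spaced in the
    signal/noise angle phi_t = arctan(sigma_t / alpha_t).

    Illustrative reading of a 'constant rotation' schedule; the actual
    TORS criterion in the paper may differ.
    """
    phi = np.arctan2(sigmas, alphas)                   # monotone in t
    targets = np.linspace(phi[0], phi[-1], num_steps)  # equal angular spacing
    # For each target angle, take the closest available timestep index.
    idx = np.abs(phi[None, :] - targets[:, None]).argmin(axis=1)
    return np.unique(idx)
```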
- ArcFlow: Unleashing 2-Step Text-to-Image Generation via High-Precision Non-Linear Flow Distillation [87.54456066636811]
Diffusion models suffer from significant inference cost due to their reliance on sequential denoising steps. ArcFlow is a few-step distillation framework that explicitly employs non-linear flow trajectories to approximate pre-trained teacher trajectories. It achieves a 40x speedup with 2 NFEs over the original multi-step teachers without significant quality degradation.
arXiv Detail & Related papers (2026-02-09T18:56:14Z)
- FlowConsist: Make Your Flow Consistent with Real Trajectory [99.22869983378062]
We argue that current fast-flow training paradigms suffer from two fundamental issues. Conditional velocities constructed from randomly paired noise-data samples introduce systematic trajectory drift. We propose FlowConsist, a training framework designed to enforce trajectory consistency in fast flows.
arXiv Detail & Related papers (2026-02-06T03:24:23Z)
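The "conditional velocities constructed from randomly paired noise-data samples" in the FlowConsist summary refer to the standard conditional flow-matching training step, in which noise and data are paired at random and the network regresses the straight-line velocity between them. A minimal sketch of that baseline (not FlowConsist's proposed fix) is shown below; the `model(xt, t)` signature is an assumption.

```python
import torch

def flow_matching_step(model, x1: torch.Tensor) -> torch.Tensor:
    """One conditional flow-matching training step with random pairing.

    x1: a batch of data samples. Noise x0 is drawn independently, so each
    (x0, x1) pair is random -- the pairing FlowConsist argues causes
    trajectory drift in few-step regimes.
    """
    x0 = torch.randn_like(x1)                                        # independent noise
    t = torch.rand(x1.shape[0], *([1] * (x1.dim() - 1)), device=x1.device)
    xt = (1 - t) * x0 + t * x1                                       # linear interpolant
    v_target = x1 - x0                                               # conditional velocity
    v_pred = model(xt, t.flatten())                                  # assumed signature
    return torch.nn.functional.mse_loss(v_pred, v_target)
```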
- Flow Straighter and Faster: Efficient One-Step Generative Modeling via MeanFlow on Rectified Trajectories [14.36205662558203]
Rectified MeanFlow is a framework that models the mean velocity field along the rectified trajectory using only a single reflow step. Experiments on ImageNet at 64, 256, and 512 resolutions show that Re-MeanFlow consistently outperforms prior one-step flow distillation and Rectified Flow methods in both sample quality and training efficiency.
arXiv Detail & Related papers (2025-11-28T16:50:08Z)
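The Re-MeanFlow summary presumes familiarity with mean-velocity models, which predict the average velocity over a time interval rather than the instantaneous velocity, so one network evaluation can traverse the whole trajectory. A hedged sketch of one-step sampling with such a model follows; the `u_theta(x, r, t)` signature and the time convention (t = 1 noise, t = 0 data) are assumptions.

```python
import torch

@torch.no_grad()
def one_step_sample(u_theta, shape, device="cuda"):
    """One-step generation with a mean-velocity model.

    u_theta(x, r, t) is assumed to predict the average velocity
    (x_t - x_r) / (t - r) along the trajectory between times r and t,
    with t = 1 pure noise and t = 0 data (convention assumed here).
    """
    x1 = torch.randn(shape, device=device)     # start from noise at t = 1
    r = torch.zeros(shape[0], device=device)   # target time 0 (data)
    t = torch.ones(shape[0], device=device)    # source time 1 (noise)
    # x_0 = x_1 - (t - r) * u(x_1, r, t) = x_1 - u(x_1, 0, 1)
    return x1 - u_theta(x1, r, t)
```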
- Transport Based Mean Flows for Generative Modeling [19.973366424307077]
Flow-matching generative models have emerged as a powerful paradigm for continuous data generation. However, these models suffer from slow inference due to the requirement of numerous sequential sampling steps. Recent work has sought to accelerate inference by reducing the number of sampling steps.
arXiv Detail & Related papers (2025-09-26T17:12:19Z)
- A-FloPS: Accelerating Diffusion Sampling with Adaptive Flow Path Sampler [21.134678093577193]
A-FloPS is a principled, training-free framework for flow-based generative models. We show that A-FloPS consistently outperforms state-of-the-art training-free samplers in both sample quality and efficiency. With as few as 5 function evaluations, A-FloPS achieves substantially lower FID and generates sharper, more coherent images.
arXiv Detail & Related papers (2025-08-22T13:28:16Z)
- Align Your Flow: Scaling Continuous-Time Flow Map Distillation [63.927438959502226]
Flow maps connect any two noise levels in a single step and remain effective across all step counts. We extensively validate our flow map models, called Align Your Flow, on challenging image generation benchmarks. We show text-to-image flow map models that outperform all existing non-adversarially trained few-step samplers in text-conditioned synthesis.
arXiv Detail & Related papers (2025-06-17T15:06:07Z)
- Rectified Flows for Fast Multiscale Fluid Flow Modeling [11.597597438962026]
We introduce a rectified flow framework that learns a time-dependent velocity field. Our method makes each integration step much more effective, using as few as eight steps. Experiments on challenging multiscale flow benchmarks show that rectified flows recover the same posterior distributions as diffusion models.
arXiv Detail & Related papers (2025-06-03T17:40:39Z)
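The fluid-modeling summary's claim that each integration step becomes "much more effective" rests on how rectified-flow models are sampled: explicit integration of a learned velocity field over a small, fixed number of steps. A generic few-step Euler sampler (not the paper's code; the `v_theta(x, t)` signature is assumed) looks like this:

```python
import torch

@torch.no_grad()
def euler_sample(v_theta, x, num_steps: int = 8):
    """Integrate dx/dt = v_theta(x, t) from t = 0 (noise) to t = 1 (data)
    with a fixed number of Euler steps. Generic rectified-flow sampling;
    eight steps matches the budget quoted in the summary.
    """
    ts = torch.linspace(0.0, 1.0, num_steps + 1, device=x.device)
    for i in range(num_steps):
        t = ts[i].expand(x.shape[0])   # broadcast current time to the batch
        dt = ts[i + 1] - ts[i]
        x = x + dt * v_theta(x, t)
    return x
```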
- Quantizing Diffusion Models from a Sampling-Aware Perspective [43.95032520555463]
We propose a sampling-aware quantization strategy, wherein a Mixed-Order Trajectory Alignment technique is devised. Experiments on sparse-step fast sampling across multiple datasets demonstrate that our approach preserves the rapid convergence characteristics of high-speed samplers.
arXiv Detail & Related papers (2025-05-04T20:50:44Z)
- One-Step Diffusion Model for Image Motion-Deblurring [85.76149042561507]
We propose a one-step diffusion model for deblurring (OSDD), a novel framework that reduces the denoising process to a single step.
To tackle fidelity loss in diffusion models, we introduce an enhanced variational autoencoder (eVAE), which improves structural restoration.
Our method achieves strong performance on both full and no-reference metrics.
arXiv Detail & Related papers (2025-03-09T09:39:57Z)
- Adaptive Non-Uniform Timestep Sampling for Diffusion Model Training [4.760537994346813]
As data distributions grow more complex, training diffusion models to convergence becomes increasingly compute-intensive.
We introduce a non-uniform timestep sampling method that prioritizes the timesteps most critical to training.
Our method shows robust performance across various datasets, scheduling strategies, and diffusion architectures.
arXiv Detail & Related papers (2024-11-15T07:12:18Z)
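Like RayFlow's Time Sampler above, the core idea here is replacing the uniform draw of training timesteps with a skewed one. The cited paper adapts the distribution during training; the sketch below instead uses a fixed logit-normal prior that concentrates training on intermediate noise levels, purely as an illustration of non-uniform timestep sampling.

```python
import torch

def logit_normal_timesteps(batch_size: int, num_timesteps: int,
                           mean: float = 0.0, std: float = 1.0) -> torch.Tensor:
    """Draw discrete timesteps from a logit-normal prior over (0, 1).

    A fixed non-uniform schedule; the cited paper instead adapts the
    distribution during training, which this sketch does not do.
    """
    u = torch.sigmoid(torch.randn(batch_size) * std + mean)  # values in (0, 1)
    return (u * (num_timesteps - 1)).long()                  # map to integer timesteps
```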
- FlowTurbo: Towards Real-time Flow-Based Image Generation with Velocity Refiner [70.90505084288057]
Flow-based models tend to produce a straighter sampling trajectory during the sampling process.
We introduce several techniques including a pseudo corrector and sample-aware compilation to further reduce inference time.
FlowTurbo reaches an FID of 2.12 on ImageNet at 100 ms/img and an FID of 3.93 at 38 ms/img.
arXiv Detail & Related papers (2024-09-26T17:59:51Z)
- FlowIE: Efficient Image Enhancement via Rectified Flow [71.6345505427213]
FlowIE is a flow-based framework that estimates straight-line paths from an elementary distribution to high-quality images.
Our contributions are rigorously validated through comprehensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-06-01T17:29:29Z)
- A Closer Look at Time Steps is Worthy of Triple Speed-Up for Diffusion Model Training [53.93563224892207]
We introduce a novel speed-up method for diffusion model training, called SpeeD, which is based on a closer look at time steps.
As a plug-and-play and architecture-agnostic approach, SpeeD consistently achieves 3-times acceleration across various diffusion architectures, datasets, and tasks.
arXiv Detail & Related papers (2024-05-27T17:51:36Z)
- PeRFlow: Piecewise Rectified Flow as Universal Plug-and-Play Accelerator [73.80050807279461]
Piecewise Rectified Flow (PeRFlow) is a flow-based method for accelerating diffusion models.
PeRFlow achieves superior performance in few-step generation.
arXiv Detail & Related papers (2024-05-13T07:10:53Z)
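PeRFlow's "piecewise" refers to splitting the diffusion time axis into a few windows and straightening the flow inside each one, so a student can cross each window in a single step. The sketch below shows only the window bookkeeping and the straight-line interpolation target within one window; the teacher's ODE solve at the window boundaries and the distillation loss are omitted, and the helper names are invented for illustration.

```python
import torch

def window_bounds(num_windows: int = 4):
    """Split [0, 1] into equal time windows, e.g. 4 windows -> one student
    step per window at inference."""
    edges = torch.linspace(0.0, 1.0, num_windows + 1)
    return list(zip(edges[:-1].tolist(), edges[1:].tolist()))

def window_target(x_start: torch.Tensor, x_end: torch.Tensor,
                  t: torch.Tensor, t_start: float, t_end: float):
    """Straight-line interpolant and constant velocity target inside one
    window [t_start, t_end]; x_start / x_end are teacher states at the
    window boundaries (obtained with the teacher's solver, not shown)."""
    s = ((t - t_start) / (t_end - t_start)).view(-1, *([1] * (x_start.dim() - 1)))
    xt = (1 - s) * x_start + s * x_end
    v_target = (x_end - x_start) / (t_end - t_start)
    return xt, v_target
```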
- Efficient Diffusion Model for Image Restoration by Residual Shifting [63.02725947015132]
This study proposes a novel and efficient diffusion model for image restoration.
Our method avoids the need for post-acceleration during inference, thereby avoiding the associated performance deterioration.
Our method achieves superior or comparable performance to current state-of-the-art methods on three classical IR tasks.
arXiv Detail & Related papers (2024-03-12T05:06:07Z)
- Fast Sampling via Discrete Non-Markov Diffusion Models with Predetermined Transition Time [49.598085130313514]
We propose discrete non-Markov diffusion models (DNDM), which naturally induce the predetermined transition time set. This enables a training-free sampling algorithm that significantly reduces the number of function evaluations. We study the transition from finite to infinite step sampling, offering new insights into bridging the gap between discrete and continuous-time processes.
arXiv Detail & Related papers (2023-12-14T18:14:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.