TC-Padé: Trajectory-Consistent Padé Approximation for Diffusion Acceleration
- URL: http://arxiv.org/abs/2603.02943v1
- Date: Tue, 03 Mar 2026 12:50:26 GMT
- Title: TC-Padé: Trajectory-Consistent Padé Approximation for Diffusion Acceleration
- Authors: Benlei Cui, Shaoxuan He, Bukun Huang, Zhizeng Ye, Yunyun Sun, Longtao Huang, Hui Xue, Yang Yang, Jingqun Tang, Zhou Zhao, Haiwen Hong
- Abstract summary: Trajectory-Consistent Padé approximation captures transitional behaviors more accurately than Taylor-based methods. Experiments show TC-Padé achieves 2.88x acceleration on FLUX.1-dev and 1.72x on Wan2.1 while maintaining high quality across FID, CLIP, Aesthetic, and VBench-2.0 metrics.
- Score: 46.613183870351584
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite achieving state-of-the-art generation quality, diffusion models are hindered by the substantial computational burden of their iterative sampling process. While feature caching techniques achieve effective acceleration at higher step counts (e.g., 50 steps), they exhibit critical limitations in the practical low-step regime of 20-30 steps. As the interval between steps increases, polynomial-based extrapolators like TaylorSeer suffer from error accumulation and trajectory drift. Meanwhile, conventional caching strategies often overlook the distinct dynamical properties of different denoising phases. To address these challenges, we propose Trajectory-Consistent Padé approximation, a feature prediction framework grounded in Padé approximation. By modeling feature evolution through rational functions, our approach captures asymptotic and transitional behaviors more accurately than Taylor-based methods. To enable stable and trajectory-consistent sampling under reduced step counts, TC-Padé incorporates (1) adaptive coefficient modulation that leverages historical cached residuals to detect subtle trajectory transitions, and (2) step-aware prediction strategies tailored to the distinct dynamics of early, mid, and late sampling stages. Extensive experiments on DiT-XL/2, FLUX.1-dev, and Wan2.1 across both image and video generation demonstrate the effectiveness of TC-Padé. For instance, TC-Padé achieves 2.88x acceleration on FLUX.1-dev and 1.72x on Wan2.1 while maintaining high quality across FID, CLIP, Aesthetic, and VBench-2.0 metrics, substantially outperforming existing feature caching methods.
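The abstract's core claim is that rational (Padé-style) extrapolation of cached features tracks asymptotic behavior better than polynomial (Taylor-style) extrapolation over large step gaps. The following is a minimal illustrative sketch of that contrast, not the paper's actual TC-Padé algorithm: it fits a quadratic and a [1/1] rational function through three cached scalar samples of a trajectory with a horizontal asymptote, then extrapolates both. All function names and the toy trajectory are assumptions for illustration.

```python
import numpy as np

def taylor_extrapolate(f, t, t_next):
    """Quadratic (Taylor-style) extrapolation through three cached samples."""
    # Fit f(t) ~ p0 + p1*t + p2*t^2 exactly through the three points,
    # then evaluate the polynomial at the target step.
    coeffs = np.polyfit(t, f, deg=2)
    return np.polyval(coeffs, t_next)

def pade_extrapolate(f, t, t_next):
    """[1/1] rational (Pade-style) extrapolation through three cached samples.

    Fits f(t) ~ (a + b*t) / (1 + c*t); each sample gives one linear
    equation a + b*t_i - c*t_i*f_i = f_i in the unknowns (a, b, c).
    """
    A = np.stack([np.ones_like(t), t, -t * f], axis=1)
    a, b, c = np.linalg.solve(A, f)
    return (a + b * t_next) / (1.0 + c * t_next)

# Toy "feature trajectory" 1/(1+t), which decays toward an asymptote --
# the regime where polynomials overshoot but rationals do not.
t = np.array([0.0, 0.5, 1.0])
f = 1.0 / (1.0 + t)
t_next = 2.0
true_val = 1.0 / (1.0 + t_next)
```

On this trajectory the rational form is exact (it matches the 1/(1+t) shape), while the quadratic overshoots badly at t=2, mirroring the paper's argument about error accumulation in polynomial extrapolators at large step intervals.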
Related papers
- Function-Space Decoupled Diffusion for Forward and Inverse Modeling in Carbon Capture and Storage [65.51149575007149]
We present Fun-DDPS, a generative framework that combines function-space diffusion models with differentiable neural operator surrogates for both forward and inverse modeling. Fun-DDPS produces physically consistent realizations free from the high-frequency artifacts observed in joint-state baselines.
arXiv Detail & Related papers (2026-02-12T18:58:12Z) - Dual-End Consistency Model [41.982957134224904]
Slow iterative sampling is a major bottleneck for the practical deployment of diffusion and flow-based generative models. We propose a Dual-End Consistency Model (DE-CM) that selects vital sub-trajectory clusters to achieve stable and effective training. Our method achieves a state-of-the-art FID score of 1.70 in one-step generation on the ImageNet 256x256 dataset, outperforming existing CM-based one-step approaches.
arXiv Detail & Related papers (2026-02-11T11:51:01Z) - FlowConsist: Make Your Flow Consistent with Real Trajectory [99.22869983378062]
We argue that current fast-flow training paradigms suffer from two fundamental issues: conditional velocities constructed from randomly paired noise-data samples introduce systematic trajectory drift. We propose FlowConsist, a training framework designed to enforce trajectory consistency in fast flows.
arXiv Detail & Related papers (2026-02-06T03:24:23Z) - Temporal Pair Consistency for Variance-Reduced Flow Matching [13.328987133593154]
Temporal Pair Consistency (TPC) is a lightweight variance-reduction principle that couples velocity predictions at paired timesteps along the same probability path. Instantiated within flow matching, TPC improves sample quality and efficiency across CIFAR-10 and ImageNet at multiple resolutions.
arXiv Detail & Related papers (2026-02-04T00:05:21Z) - Unifying Sign and Magnitude for Optimizing Deep Vision Networks via ThermoLion [0.0]
Current paradigms impose a static compromise on information channel drift parameters. We introduce a "low-dimensional" exploration model and a "low-dimensional" dynamic alignment framework.
arXiv Detail & Related papers (2025-12-01T17:04:17Z) - ETC: training-free diffusion models acceleration with Error-aware Trend Consistency [46.40478218579471]
Recent training-free methods accelerate the diffusion process by reusing model outputs. These methods ignore denoising trends and lack error control for model-specific tolerance. We introduce Error-aware Trend Consistency (ETC), a framework that leverages the smooth continuity of diffusion trajectories. ETC achieves a 2.65x acceleration over FLUX with negligible degradation of consistency.
arXiv Detail & Related papers (2025-10-28T07:08:09Z) - Transition Models: Rethinking the Generative Learning Objective [68.16330673177207]
We introduce a continuous-time dynamics equation that analytically defines state transitions across any finite time interval. This leads to a novel generative paradigm, Transition Models (TiM), which adapt to arbitrary-step transitions. TiM achieves state-of-the-art performance, surpassing leading models such as SD3.5 (8B parameters) and FLUX.1 (12B parameters) across all evaluated step counts.
arXiv Detail & Related papers (2025-09-04T17:05:59Z) - Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}(\ln(T) / T^{1 - \frac{1}{\alpha}})$.
arXiv Detail & Related papers (2024-03-11T09:10:37Z) - Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion [56.38386580040991]
Consistency Trajectory Model (CTM) is a generalization of Consistency Models (CM).
CTM enables the efficient combination of adversarial training and denoising score matching loss to enhance performance.
Unlike CM, CTM's access to the score function can streamline the adoption of established controllable/conditional generation methods.
arXiv Detail & Related papers (2023-10-01T05:07:17Z)
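Several of the caching approaches above (TC-Padé, TaylorSeer, ETC) share one mechanism: skip the expensive model call at some steps and reuse a cached output when the trajectory appears stable. The sketch below is an assumed, simplified reuse loop with a naive drift-based error proxy; it is not the actual rule from any of these papers, and the update step is a placeholder.

```python
import numpy as np

def cached_sampling(model, x, timesteps, tol=0.05):
    """Illustrative output-reuse loop: skip the model call when the last
    two outputs suggest the trend is stable (a naive proxy, not the
    actual ETC or TC-Pade criterion)."""
    prev_out, prev_prev = None, None
    calls = 0  # count of real forward passes
    for t in timesteps:
        if prev_out is not None and prev_prev is not None:
            # Error proxy: relative change between the two newest outputs.
            drift = np.linalg.norm(prev_out - prev_prev) / (
                np.linalg.norm(prev_out) + 1e-8
            )
            reuse = drift < tol
        else:
            reuse = False  # warm-up: need two real outputs first
        if reuse:
            out = prev_out  # cached output, saves one forward pass
        else:
            out = model(x, t)
            calls += 1
        x = x - out  # placeholder update step
        prev_prev, prev_out = prev_out, out
    return x, calls
```

With a model whose output stops changing, the loop pays for only the two warm-up calls and reuses the cache for every remaining step; the acceleration ratios quoted above (e.g. 2.88x on FLUX.1-dev) come from skipping forward passes in exactly this spirit, with far more careful prediction in place of plain reuse.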
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.