FlowBypass: Rectified Flow Trajectory Bypass for Training-Free Image Editing
- URL: http://arxiv.org/abs/2602.01805v1
- Date: Mon, 02 Feb 2026 08:37:00 GMT
- Title: FlowBypass: Rectified Flow Trajectory Bypass for Training-Free Image Editing
- Authors: Menglin Han, Zhangkai Ni
- Abstract summary: Training-free image editing has attracted increasing attention for its efficiency and independence from training data. Previous attempts to address this issue typically employ backbone-specific feature manipulations, limiting general applicability. We propose FlowBypass, a novel, analytical framework grounded in Rectified Flow that constructs a bypass directly connecting inversion and reconstruction trajectories.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training-free image editing has attracted increasing attention for its efficiency and independence from training data. However, existing approaches predominantly rely on inversion-reconstruction trajectories, which impose an inherent trade-off: longer trajectories accumulate errors and compromise fidelity, while shorter ones fail to ensure sufficient alignment with the edit prompt. Previous attempts to address this issue typically employ backbone-specific feature manipulations, limiting general applicability. To address these challenges, we propose FlowBypass, a novel, analytical framework grounded in Rectified Flow that constructs a bypass directly connecting the inversion and reconstruction trajectories, thereby mitigating error accumulation without relying on feature manipulations. We provide a formal derivation of the two trajectories, from which we obtain an approximate bypass formulation and its numerical solution, enabling seamless trajectory transitions. Extensive experiments demonstrate that FlowBypass consistently outperforms state-of-the-art image editing methods, achieving stronger prompt alignment while preserving high-fidelity details in irrelevant regions.
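The inversion-reconstruction trade-off described in the abstract can be illustrated with a minimal sketch, assuming a generic learned velocity field v(x, t); here a toy exact straight-line velocity stands in for a real neural model, and FlowBypass's trajectory-bypass construction itself is not reproduced. Inversion integrates the flow ODE from data to noise, reconstruction integrates back, and each Euler step of a learned (curved) velocity adds error along the way.

```python
import numpy as np

# Fixed "data" endpoint for the toy straight-line flow.
X1 = np.array([1.0, -2.0])

def velocity(x, t):
    """Toy stand-in for a learned rectified-flow velocity v(x_t, t).

    For a straight path x_t = t * X1 (noise endpoint at the origin),
    the exact velocity is the constant X1, which makes the round trip
    below numerically checkable. A real model would depend on x, t,
    and the edit prompt.
    """
    return X1

def euler_invert(x1, steps=50):
    """Integrate the flow ODE backwards (data -> noise): inversion."""
    x, dt = x1.copy(), 1.0 / steps
    for i in range(steps):
        t = 1.0 - i * dt
        x = x - dt * velocity(x, t)
    return x

def euler_reconstruct(x0, steps=50):
    """Integrate forwards (noise -> data): reconstruction/editing pass."""
    x, dt = x0.copy(), 1.0 / steps
    for i in range(steps):
        t = i * dt
        x = x + dt * velocity(x, t)
    return x

x0 = euler_invert(X1)
x1_rec = euler_reconstruct(x0)
# With an exact straight-line velocity the round trip is numerically
# lossless; with a learned, curved velocity every Euler step adds
# discretization error, which is the accumulation FlowBypass aims to
# avoid by bypassing part of the inversion-reconstruction loop.
print(np.abs(x1_rec - X1).max())
```

Shortening the loop (fewer steps) reduces this accumulated error but leaves less room for the reconstruction pass to follow an edited prompt, which is the trade-off the paper targets.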
Related papers
- FlowConsist: Make Your Flow Consistent with Real Trajectory [99.22869983378062]
We argue that current fast-flow training paradigms suffer from two fundamental issues: conditional velocities constructed from randomly paired noise-data samples introduce systematic trajectory drift. We propose FlowConsist, a training framework designed to enforce trajectory consistency in fast flows.
arXiv Detail & Related papers (2026-02-06T03:24:23Z)
- On Exact Editing of Flow-Based Diffusion Models [97.0633397035926]
We propose Conditioned Velocity Correction (CVC) to reformulate flow-based editing as a distribution transformation problem driven by a known source prior. CVC rethinks the role of velocity in inter-distribution transformation by introducing a dual-perspective velocity conversion mechanism. We show that CVC consistently achieves superior fidelity, better semantic alignment, and more reliable editing behavior across diverse tasks.
arXiv Detail & Related papers (2025-12-30T06:29:20Z)
- Learning Straight Flows: Variational Flow Matching for Efficient Generation [36.84747986070112]
Flow Matching has limited ability to achieve one-step generation due to its reliance on learned curved trajectories. S-VFM explicitly enforces trajectory straightness, ideally producing linear generation paths.
arXiv Detail & Related papers (2025-11-15T22:51:58Z)
- SplitFlow: Flow Decomposition for Inversion-Free Text-to-Image Editing [15.234877788378563]
Rectified flow models have become a de facto standard in image generation due to their stable sampling trajectories and high-fidelity outputs. Despite their strong generative capabilities, they face critical limitations in image editing tasks. Recent efforts have attempted to directly map source and target distributions via ODE-based approaches without inversion. We propose a flow decomposition-and-aggregation framework built upon an inversion-free formulation to address these limitations.
arXiv Detail & Related papers (2025-10-29T21:12:58Z)
- Training-Free Reward-Guided Image Editing via Trajectory Optimal Control [55.64204232819136]
We introduce a novel framework for training-free, reward-guided image editing. We demonstrate that our approach significantly outperforms existing inversion-based training-free baselines.
arXiv Detail & Related papers (2025-09-30T06:34:37Z)
- FlowAlign: Trajectory-Regularized, Inversion-Free Flow-based Image Editing [47.908940130654535]
FlowAlign is an inversion-free flow-based framework for consistent image editing with optimal control-based trajectory control. Our terminal point regularization is shown to balance semantic alignment with the edit prompt and structural consistency with the source image along the trajectory. FlowAlign outperforms existing methods in both source preservation and editing controllability.
arXiv Detail & Related papers (2025-05-29T06:33:16Z)
- FlowIE: Efficient Image Enhancement via Rectified Flow [71.6345505427213]
FlowIE is a flow-based framework that estimates straight-line paths from an elementary distribution to high-quality images.
Our contributions are rigorously validated through comprehensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2024-06-01T17:29:29Z)
- Optimal Flow Matching: Learning Straight Trajectories in Just One Step [89.37027530300617]
We develop and theoretically justify the novel Optimal Flow Matching (OFM) approach.
It allows recovering the straight OT displacement for the quadratic transport in just one FM step.
The main idea of our approach is to employ vector fields for FM that are parameterized by convex functions.
arXiv Detail & Related papers (2024-03-19T19:44:54Z)
- Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow [32.459587479351846]
We present rectified flow, a surprisingly simple approach to learning (neural) ordinary differential equation (ODE) models.
We show that rectified flow performs superbly on image generation, image-to-image translation, and domain adaptation.
arXiv Detail & Related papers (2022-09-07T08:59:55Z)
- Continuous Transition: Improving Sample Efficiency for Continuous Control Problems via MixUp [119.69304125647785]
This paper introduces a concise yet powerful method to construct Continuous Transition.
Specifically, we propose to synthesize new transitions for training by linearly interpolating the consecutive transitions.
To keep the constructed transitions authentic, we also develop a discriminator to guide the construction process automatically.
arXiv Detail & Related papers (2020-11-30T01:20:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.