Graphical Residual Flows
- URL: http://arxiv.org/abs/2204.11846v1
- Date: Sat, 23 Apr 2022 09:57:57 GMT
- Title: Graphical Residual Flows
- Authors: Jacobie Mouton and Steve Kroon
- Abstract summary: This work introduces graphical residual flows, a graphical flow based on invertible residual networks.
Our approach to incorporating dependency information in the flow means that we can calculate the Jacobian determinant of these flows exactly.
- Score: 2.8597160727750564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graphical flows add further structure to normalizing flows by encoding
non-trivial variable dependencies. Previous graphical flow models have focused
primarily on a single flow direction: the normalizing direction for density
estimation, or the generative direction for inference. However, to use a single
flow to perform tasks in both directions, the model must exhibit stable and
efficient flow inversion. This work introduces graphical residual flows, a
graphical flow based on invertible residual networks. Our approach to
incorporating dependency information in the flow means that we can
calculate the Jacobian determinant of these flows exactly. Our experiments
confirm that graphical residual flows provide stable and accurate inversion
that is also more time-efficient than alternative flows with similar task
performance. Furthermore, our model provides performance competitive with other
graphical flows for both density estimation and inference tasks.
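The abstract's central claim is that masking a residual block's weights to respect a topological ordering of the dependency graph makes the flow's Jacobian triangular, so its log-determinant is exact (a sum over diagonal entries) and inversion is a stable fixed-point iteration. The following is a minimal NumPy sketch of that idea, not the authors' implementation: the mask, `tanh` nonlinearity, dimensions, and weight scaling are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

# Lower-triangular mask: variable i may depend only on variables j <= i,
# i.e. on (assumed) ancestors in some topological order of the DAG, plus itself.
mask = np.tril(np.ones((d, d)))
# Small scale keeps the residual branch contractive (Lipschitz < 1),
# which guarantees invertibility of the residual step.
W = 0.2 * rng.normal(size=(d, d)) * mask

def residual_step(x):
    """One residual flow step y = x + tanh(W x)."""
    return x + np.tanh(W @ x)

def exact_log_abs_det(x):
    """log|det dy/dx| read off the triangular Jacobian's diagonal."""
    # Jacobian: I + diag(1 - tanh(W x)^2) @ W, lower triangular by masking,
    # so the determinant is exactly the product of the diagonal entries.
    diag = 1.0 + (1.0 - np.tanh(W @ x) ** 2) * np.diag(W)
    return np.sum(np.log(np.abs(diag)))

def invert(y, n_iter=100):
    """Invert y = x + tanh(W x) by Banach fixed-point iteration."""
    x = y.copy()
    for _ in range(n_iter):
        x = y - np.tanh(W @ x)
    return x

# Cross-check the exact triangular log-det against a dense slogdet.
x = rng.normal(size=d)
J = np.eye(d) + np.diag(1.0 - np.tanh(W @ x) ** 2) @ W
sign, full_logdet = np.linalg.slogdet(J)
```

Because the residual branch is a contraction, the fixed-point iteration in `invert` converges geometrically, which is the kind of stable, efficient inversion the abstract refers to.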
Related papers
- Trajectory Stitching for Solving Inverse Problems with Flow-Based Models [68.36374645801901]
Flow-based generative models have emerged as powerful priors for solving inverse problems. We propose MS-Flow, which represents the trajectory as a sequence of intermediate latent states rather than a single initial code. We demonstrate the effectiveness of MS-Flow over existing methods on image recovery and inverse problems, including inpainting, super-resolution, and computed tomography.
arXiv Detail & Related papers (2026-02-09T11:36:41Z)
- FlowSteer: Conditioning Flow Field for Consistent Image Restoration [29.10704687691786]
Flow-based text-to-image (T2I) models excel at prompt-driven image generation, but falter on Image Restoration (IR). We introduce FlowSteer (FS), an operator-aware conditioning scheme that injects measurement priors along the sampling path. FS improves measurement consistency and identity preservation in a strictly zero-shot setting: no retrained models, no adapters.
arXiv Detail & Related papers (2025-12-09T00:09:21Z)
- Latent Refinement via Flow Matching for Training-free Linear Inverse Problem Solving [18.226350407462643]
We propose LFlow, a training-free framework for solving linear inverse problems via pretrained latent flow priors. Our proposed method outperforms state-of-the-art latent diffusion solvers in reconstruction quality across most tasks.
arXiv Detail & Related papers (2025-11-08T21:20:59Z)
- Rethinking Unsupervised Cross-modal Flow Estimation: Learning from Decoupled Optimization and Consistency Constraint [20.46870753632375]
DCFlow is a novel unsupervised cross-modal flow estimation framework. We introduce a decoupled optimization strategy with task-specific supervision to address modality discrepancy and geometric misalignment distinctly. For evaluation, we construct a comprehensive cross-modal flow benchmark by repurposing public datasets.
arXiv Detail & Related papers (2025-09-29T08:10:41Z)
- FlowDPS: Flow-Driven Posterior Sampling for Inverse Problems [51.99765487172328]
Posterior sampling for inverse problem solving can be effectively achieved using flows.
Flow-Driven Posterior Sampling (FlowDPS) outperforms state-of-the-art alternatives.
arXiv Detail & Related papers (2025-03-11T07:56:14Z)
- D-Flow: Differentiating through Flows for Controlled Generation [37.80603174399585]
We introduce D-Flow, a framework for controlling the generation process by differentiating through the flow.
We motivate this framework by our key observation that, for Diffusion/FM models trained with Gaussian probability paths, differentiating through the generation process projects the gradient onto the data manifold.
We validate our framework on linear and non-linear controlled generation problems, including image and audio inverse problems and conditional molecule generation, reaching state-of-the-art performance across all.
arXiv Detail & Related papers (2024-02-21T18:56:03Z)
- Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improve sample quality in conditional image generation and zero-shot text-to-speech synthesis.
Notably, we are the first to apply flow models for plan generation in the offline reinforcement learning setting, achieving a speedup compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z)
- DistractFlow: Improving Optical Flow Estimation via Realistic Distractions and Pseudo-Labeling [49.46842536813477]
We propose a novel data augmentation approach, DistractFlow, for training optical flow estimation models.
We combine one of the frames in the pair with a distractor image depicting a similar domain, which allows for inducing visual perturbations congruent with natural objects and scenes.
Our approach allows increasing the number of available training pairs significantly without requiring additional annotations.
arXiv Detail & Related papers (2023-03-24T15:42:54Z)
- Flow Guidance Deformable Compensation Network for Video Frame Interpolation [33.106776459443275]
We propose a flow guidance deformable compensation network (FGDCN) to overcome the drawbacks of existing motion-based methods.
FGDCN decomposes the frame sampling process into two steps: a flow step and a deformation step.
Experimental results show that the proposed algorithm achieves excellent performance on various datasets with fewer parameters.
arXiv Detail & Related papers (2022-11-22T09:35:14Z)
- GMFlow: Learning Optical Flow via Global Matching [124.57850500778277]
We propose a GMFlow framework for learning optical flow estimation.
It consists of three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for global feature matching, and a self-attention layer for flow propagation.
Our new framework outperforms 32-iteration RAFT on the challenging Sintel benchmark.
arXiv Detail & Related papers (2021-11-26T18:59:56Z)
- Generative Flows with Invertible Attentions [135.23766216657745]
We introduce two types of invertible attention mechanisms for generative flow models.
We exploit split-based attention mechanisms to learn the attention weights and input representations on every two splits of flow feature maps.
Our method provides invertible attention modules with tractable Jacobian determinants, enabling seamless integration at any position in flow-based models.
arXiv Detail & Related papers (2021-06-07T20:43:04Z)
- Self-Supervised Learning of Non-Rigid Residual Flow and Ego-Motion [63.18340058854517]
We present an alternative method for end-to-end scene flow learning by joint estimation of non-rigid residual flow and ego-motion flow for dynamic 3D scenes.
We extend the supervised framework with self-supervisory signals based on the temporal consistency property of a point cloud sequence.
arXiv Detail & Related papers (2020-09-22T11:39:19Z)
- What Matters in Unsupervised Optical Flow [51.45112526506455]
We compare and analyze a set of key components in unsupervised optical flow.
We construct a number of novel improvements to unsupervised flow models.
We present a new unsupervised flow technique that significantly outperforms the previous state-of-the-art.
arXiv Detail & Related papers (2020-06-08T19:36:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.