FLOWING: Implicit Neural Flows for Structure-Preserving Morphing
- URL: http://arxiv.org/abs/2510.09537v1
- Date: Fri, 10 Oct 2025 16:50:23 GMT
- Title: FLOWING: Implicit Neural Flows for Structure-Preserving Morphing
- Authors: Arthur Bizzi, Matias Grynberg, Vitor Matias, Daniel Perazzo, João Paulo Lima, Luiz Velho, Nuno Gonçalves, João Pereira, Guilherme Schardong, Tiago Novello
- Abstract summary: FLOWING (FLOW morphING) is a framework that recasts warping as the construction of a differential vector flow. We show that FLOWING achieves state-of-the-art morphing quality with faster convergence.
- Score: 5.498230316788923
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Morphing is a long-standing problem in vision and computer graphics, requiring a time-dependent warping for feature alignment and a blending for smooth interpolation. Recently, multilayer perceptrons (MLPs) have been explored as implicit neural representations (INRs) for modeling such deformations, due to their meshlessness and differentiability; however, extracting coherent and accurate morphings from standard MLPs typically relies on costly regularizations, which often lead to unstable training and prevent effective feature alignment. To overcome these limitations, we propose FLOWING (FLOW morphING), a framework that recasts warping as the construction of a differential vector flow, naturally ensuring continuity, invertibility, and temporal coherence by encoding structural flow properties directly into the network architectures. This flow-centric approach yields principled and stable transformations, enabling accurate and structure-preserving morphing of both 2D images and 3D shapes. Extensive experiments across a range of applications - including face and image morphing, as well as Gaussian Splatting morphing - show that FLOWING achieves state-of-the-art morphing quality with faster convergence. Code and pretrained models are available at http://schardong.github.io/flowing.
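The abstract's core idea, defining a warp as the flow of a time-dependent velocity field so that continuity and invertibility come for free, can be illustrated with a toy sketch. This is not the authors' code: the hand-written `velocity` field stands in for a learned network, and simple Euler integration stands in for whatever integrator the paper uses. Integrating the flow forward and then backward in time approximately recovers the starting points, which is the invertibility property the abstract highlights.

```python
# Toy illustration (not the FLOWING implementation): warping 2D points by
# integrating dx/dt = v(x, t). The velocity field here is a hypothetical
# smooth, time-varying rotation standing in for a learned MLP v_theta(x, t).
import numpy as np

def velocity(x, t):
    # Time-varying angular speed; any smooth field would do.
    omega = np.sin(np.pi * t)
    rot = np.array([[0.0, -omega],
                    [omega, 0.0]])
    return x @ rot.T

def integrate(x, t0, t1, steps=200):
    # Explicit Euler integration from t0 to t1.
    # Calling with t1 < t0 runs the same flow backward in time,
    # which is what makes flow-based warps (approximately) invertible.
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        x = x + dt * velocity(x, t)
        t += dt
    return x

points = np.array([[1.0, 0.0], [0.0, 0.5], [-0.3, 0.7]])
warped = integrate(points, 0.0, 1.0)     # forward warp at time 1
recovered = integrate(warped, 1.0, 0.0)  # backward warp: inverse map

# The round-trip error is only the O(dt) integration error, not a
# property of the map itself; a learned field behaves the same way.
print(np.max(np.abs(recovered - points)))
```

A standard MLP warp offers no such guarantee: it can fold or tear space, which is why the paper argues for encoding flow structure into the architecture rather than enforcing it with regularizers.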
Related papers
- ACFormer: Mitigating Non-linearity with Auto Convolutional Encoder for Time Series Forecasting [6.27761817493579]
Time series forecasting (TSF) faces challenges in modeling complex intra-channel temporal dependencies and inter-channel correlations. We propose ACFormer, an architecture designed to reconcile the efficiency of linear projections with the non-linear feature-extraction power of convolutions.
arXiv Detail & Related papers (2026-01-28T13:47:54Z) - RenderFlow: Single-Step Neural Rendering via Flow Matching [17.56739408578129]
We present a novel end-to-end, deterministic, single-step neural rendering framework, RenderFlow, built upon a flow matching paradigm. Our method significantly accelerates the rendering process and enhances both the physical plausibility and overall visual quality of the output. The resulting pipeline achieves near real-time performance with photorealistic rendering quality, effectively bridging the gap between the efficiency of modern generative models and the precision of traditional physically based rendering.
arXiv Detail & Related papers (2026-01-11T14:28:46Z) - Error-Propagation-Free Learned Video Compression With Dual-Domain Progressive Temporal Alignment [92.57576987521107]
We propose a novel unified transform framework with dual-domain progressive temporal alignment and a quality-conditioned mixture-of-experts (QCMoE). QCMoE allows continuous and consistent rate control with appealing R-D performance. Experimental results show that the proposed method achieves competitive R-D performance compared with the state of the art.
arXiv Detail & Related papers (2025-12-11T09:14:51Z) - Wukong's 72 Transformations: High-fidelity Textured 3D Morphing via Flow Models [33.80986417412425]
WUKONG is a training-free framework for high-fidelity textured 3D morphing. We exploit the inherent continuity of flow-based generative processes. We propose a similarity-guided semantic consistency mechanism.
arXiv Detail & Related papers (2025-11-27T13:03:57Z) - Morphing Through Time: Diffusion-Based Bridging of Temporal Gaps for Robust Alignment in Change Detection [51.56484100374058]
We introduce a modular pipeline that improves spatial and temporal robustness without altering existing change detection networks. A diffusion module synthesizes intermediate morphing frames that bridge large appearance gaps, enabling RoMa to estimate stepwise correspondences. Experiments on LEVIR-CD, WHU-CD, and DSIFN-CD show consistent gains in both registration accuracy and downstream change detection.
arXiv Detail & Related papers (2025-11-11T08:40:28Z) - Solving Inverse Problems with FLAIR [59.02385492199431]
Flow-based latent generative models are able to generate images with remarkable quality, even enabling text-to-image generation. We present FLAIR, a novel training-free variational framework that leverages flow-based generative models as a prior for inverse problems. Results on standard imaging benchmarks demonstrate that FLAIR consistently outperforms existing diffusion- and flow-based methods in terms of reconstruction quality and sample diversity.
arXiv Detail & Related papers (2025-06-03T09:29:47Z) - PolypFlow: Reinforcing Polyp Segmentation with Flow-Driven Dynamics [25.69584903128262]
PolypFlow is a flow-matching enhanced architecture that injects physics-inspired optimization dynamics into segmentation refinement. We show that PolypFlow achieves state-of-the-art results while maintaining consistent performance across different lighting scenarios.
arXiv Detail & Related papers (2025-02-26T10:48:33Z) - SRIF: Semantic Shape Registration Empowered by Diffusion-based Image Morphing and Flow Estimation [2.336821026049481]
We propose SRIF, a novel Semantic shape Registration framework based on diffusion-based Image morphing and flow estimation.
SRIF not only achieves high-quality dense correspondences on challenging shape pairs, but also delivers smooth, semantically meaningful in-between shapes.
arXiv Detail & Related papers (2024-09-18T03:47:24Z) - Motion-Aware Video Frame Interpolation [49.49668436390514]
We introduce a Motion-Aware Video Frame Interpolation (MA-VFI) network, which directly estimates intermediate optical flow from consecutive frames.
It not only extracts global semantic relationships and spatial details from input frames with different receptive fields, but also effectively reduces the required computational cost and complexity.
arXiv Detail & Related papers (2024-02-05T11:00:14Z) - Vision-Informed Flow Image Super-Resolution with Quaternion Spatial Modeling and Dynamic Flow Convolution [49.45309818782329]
Flow image super-resolution (FISR) aims at recovering high-resolution turbulent velocity fields from low-resolution flow images.
Existing FISR methods mainly process the flow images in natural image patterns.
We propose the first flow visual property-informed FISR algorithm.
arXiv Detail & Related papers (2024-01-29T06:48:16Z) - Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors.
DWT blocks are used to recover certain coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - Image Morphing with Perceptual Constraints and STN Alignment [70.38273150435928]
We propose a conditional GAN morphing framework operating on a pair of input images.
A special training protocol produces sequences of frames which, combined with a perceptual similarity loss, promote smooth transformations over time.
We provide comparisons to classic as well as latent space morphing techniques, and demonstrate that, given a set of images for self-supervision, our network learns to generate visually pleasing morphing effects.
arXiv Detail & Related papers (2020-04-29T10:49:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.