ReMatching Dynamic Reconstruction Flow
- URL: http://arxiv.org/abs/2411.00705v1
- Date: Fri, 01 Nov 2024 16:09:33 GMT
- Title: ReMatching Dynamic Reconstruction Flow
- Authors: Sara Oblak, Despoina Paschalidou, Sanja Fidler, Matan Atzmon
- Abstract summary: We introduce the ReMatching framework, designed to improve generalization quality by incorporating deformation priors into dynamic reconstruction models.
The framework is highly adaptable and can be applied to various dynamic representations.
Our evaluations on popular benchmarks involving both synthetic and real-world dynamic scenes demonstrate a clear improvement in reconstruction accuracy of current state-of-the-art models.
- Score: 55.272357926111454
- Abstract: Reconstructing dynamic scenes from image inputs is a fundamental computer vision task with many downstream applications. Despite recent advancements, existing approaches still struggle to achieve high-quality reconstructions from unseen viewpoints and timestamps. This work introduces the ReMatching framework, designed to improve generalization quality by incorporating deformation priors into dynamic reconstruction models. Our approach advocates for velocity-field-based priors, for which we suggest a matching procedure that can seamlessly supplement existing dynamic reconstruction pipelines. The framework is highly adaptable and can be applied to various dynamic representations. Moreover, it supports integrating multiple types of model priors and enables combining simpler ones to create more complex classes. Our evaluations on popular benchmarks involving both synthetic and real-world dynamic scenes demonstrate a clear improvement in reconstruction accuracy of current state-of-the-art models.
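The abstract gives no code, but the core mechanism it describes, matching a model's deformation against a class of velocity-field priors, can be sketched. Below is a minimal, hypothetical PyTorch illustration: `deform`, `rematching_loss`, and the choice of rigid velocity fields v(x) = w × x + b as the prior class are assumptions made for illustration, not the paper's actual API or prior.

```python
import torch


def skew(a):
    """Skew-symmetric matrices for a batch of 3-vectors: skew(a) @ b == a x b."""
    zero = torch.zeros_like(a[:, 0])
    return torch.stack([
        torch.stack([zero, -a[:, 2], a[:, 1]], dim=1),
        torch.stack([a[:, 2], zero, -a[:, 0]], dim=1),
        torch.stack([-a[:, 1], a[:, 0], zero], dim=1),
    ], dim=1)


def rigid_residual(x, v):
    """Squared distance from velocities v at points x to the closest rigid
    velocity field v(x) = w x x + b, found in closed form by least squares."""
    n = x.shape[0]
    eye = torch.eye(3, dtype=x.dtype, device=x.device).expand(n, 3, 3)
    A = torch.cat([-skew(x), eye], dim=2).reshape(3 * n, 6)  # w x x == -skew(x) @ w
    rhs = v.reshape(3 * n, 1)
    theta = torch.linalg.solve(A.T @ A, A.T @ rhs)           # normal equations
    return ((A @ theta - rhs) ** 2).mean()


def rematching_loss(canonical_pts, deform, t, eps=1e-3):
    """Deform canonical points to time t, estimate their velocities with a
    central finite difference, and penalize how far those velocities are
    from the prior class of rigid motions."""
    x_t = deform(canonical_pts, t)
    v = (deform(canonical_pts, t + eps) - deform(canonical_pts, t - eps)) / (2 * eps)
    return rigid_residual(x_t, v)
```

Because the projection onto the prior class is closed-form, a term like this can be added to an existing reconstruction pipeline's loss with little overhead, which is consistent with the abstract's claim that the matching procedure "can seamlessly supplement existing dynamic reconstruction pipelines."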
Related papers
- TFS-NeRF: Template-Free NeRF for Semantic 3D Reconstruction of Dynamic Scene [25.164085646259856]
This paper introduces a 3D semantic NeRF for dynamic scenes captured from sparse or single-view RGB videos.
Our framework uses an Invertible Neural Network (INN) for LBS prediction, simplifying the training process.
Our approach produces high-quality reconstructions of both deformable and non-deformable objects in complex interactions.
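The summary names an Invertible Neural Network for linear blend skinning (LBS) prediction. As a hedged illustration of the standard building block such an INN is usually assembled from (not TFS-NeRF's actual architecture), here is a minimal affine coupling layer, whose inverse is exact by construction:

```python
import torch
import torch.nn as nn


class AffineCoupling(nn.Module):
    """Invertible coupling layer: half the dimensions are transformed by
    scales/shifts predicted from the other half, so inversion is exact."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.half = dim // 2
        self.net = nn.Sequential(
            nn.Linear(self.half, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * (dim - self.half)),
        )

    def forward(self, x):
        x1, x2 = x[:, :self.half], x[:, self.half:]
        log_s, t = self.net(x1).chunk(2, dim=1)
        return torch.cat([x1, x2 * torch.exp(log_s) + t], dim=1)

    def inverse(self, y):
        y1, y2 = y[:, :self.half], y[:, self.half:]
        log_s, t = self.net(y1).chunk(2, dim=1)
        return torch.cat([y1, (y2 - t) * torch.exp(-log_s)], dim=1)
```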
arXiv Detail & Related papers (2024-09-26T01:34:42Z)
- Simultaneous Map and Object Reconstruction [66.66729715211642]
We present a method for dynamic surface reconstruction of large-scale urban scenes from LiDAR.
We take inspiration from recent novel view synthesis methods and pose the reconstruction problem as a global optimization.
By careful modeling of continuous-time motion, our reconstructions can compensate for the rolling shutter effects of rotating LiDAR sensors.
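The rolling-shutter compensation can be sketched independently of the paper's global optimization: each LiDAR return carries its own timestamp, so de-skewing transforms every point with the sensor pose interpolated at that instant. The helper below is a hypothetical illustration (linear translation, slerp rotation), not the authors' continuous-time motion model.

```python
from scipy.spatial.transform import Rotation, Slerp


def deskew_scan(points, point_times, rot0, trans0, rot1, trans1, t0, t1):
    """Compensate rolling-shutter distortion in one LiDAR sweep.

    points:       (N, 3) array of returns in the sensor frame
    point_times:  (N,) per-point timestamps in [t0, t1]
    rot0/rot1:    world-from-sensor Rotation at sweep start/end
    trans0/trans1:(3,) world-from-sensor translation at sweep start/end
    """
    alpha = (point_times - t0) / (t1 - t0)                 # (N,) in [0, 1]
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([rot0, rot1]))
    rots = slerp(alpha)                                    # per-point rotation
    trans = (1.0 - alpha)[:, None] * trans0 + alpha[:, None] * trans1
    return rots.apply(points) + trans                      # points in world frame
```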
arXiv Detail & Related papers (2024-06-19T23:53:31Z)
- Enhanced Event-Based Video Reconstruction with Motion Compensation [26.03328887451797]
We propose warping the input intensity frames and sparse codes to enhance reconstruction quality.
A CISTA-Flow network is constructed by integrating a flow network with CISTA-LSTC for motion compensation.
Results demonstrate that our approach achieves state-of-the-art reconstruction accuracy and simultaneously provides reliable dense flow estimation.
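Warping intensity frames by a dense flow field is a standard operation; for concreteness, here is a minimal PyTorch backward-warp (function name and flow conventions are assumed for illustration, not taken from the CISTA-Flow code):

```python
import torch
import torch.nn.functional as F


def warp_with_flow(frame: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """Backward-warp a frame with dense optical flow.

    frame: (B, C, H, W) intensity image
    flow:  (B, 2, H, W) pixel displacements (dx, dy) from target to source
    """
    b, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, device=frame.device, dtype=frame.dtype),
        torch.arange(w, device=frame.device, dtype=frame.dtype),
        indexing="ij",
    )
    # Sampling locations in pixels, then normalized to [-1, 1] for grid_sample.
    x_src = xs + flow[:, 0]
    y_src = ys + flow[:, 1]
    grid = torch.stack(
        [2.0 * x_src / (w - 1) - 1.0, 2.0 * y_src / (h - 1) - 1.0], dim=-1
    )
    return F.grid_sample(frame, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```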
arXiv Detail & Related papers (2024-03-18T16:58:23Z)
- SceNeRFlow: Time-Consistent Reconstruction of General Dynamic Scenes [75.9110646062442]
We propose SceNeRFlow to reconstruct a general, non-rigid scene in a time-consistent manner.
Our method takes multi-view RGB videos and background images from static cameras with known camera parameters as input.
We show experimentally that, unlike prior work that only handles small motion, our method enables the reconstruction of studio-scale motions.
arXiv Detail & Related papers (2023-08-16T09:50:35Z)
- IRGen: Generative Modeling for Image Retrieval [82.62022344988993]
In this paper, we present a novel methodology, reframing image retrieval as a variant of generative modeling.
We develop our model, dubbed IRGen, to address the technical challenge of converting an image into a concise sequence of semantic units.
Our model achieves state-of-the-art performance on three widely-used image retrieval benchmarks and two million-scale datasets.
arXiv Detail & Related papers (2023-03-17T17:07:36Z)
- RRSR: Reciprocal Reference-based Image Super-Resolution with Progressive Feature Alignment and Selection [66.08293086254851]
We propose a reciprocal learning framework to reinforce the learning of a RefSR network.
The newly proposed module aligns reference-input images at multi-scale feature spaces and performs reference-aware feature selection.
We empirically show that multiple recent state-of-the-art RefSR models can be consistently improved with our reciprocal learning paradigm.
arXiv Detail & Related papers (2022-11-08T12:39:35Z)
- Insights from Generative Modeling for Neural Video Compression [31.59496634465347]
We present newly proposed neural video coding algorithms through the lens of deep autoregressive and latent variable modeling.
We propose several architectures that yield state-of-the-art video compression performance on high-resolution video.
We provide further evidence that the generative modeling viewpoint can advance the neural video coding field.
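The generative-modeling viewpoint makes the training objective transparent: a neural codec minimizes a negative-ELBO-style rate-distortion trade-off. A minimal, hedged sketch follows (placeholder names; real systems use learned entropy models and richer distortion measures):

```python
import torch


def rate_distortion_loss(frame, recon, latent_likelihoods, lmbda=0.01):
    """Negative-ELBO-style codec objective: distortion (here MSE) plus
    lambda times the rate, where rate is the expected code length
    -log2 p(z) under the learned entropy model, per element."""
    distortion = torch.mean((frame - recon) ** 2)
    bits = -torch.log2(latent_likelihoods).sum()
    return distortion + lmbda * bits / frame.numel()
```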
arXiv Detail & Related papers (2021-07-28T02:19:39Z)
- Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model [58.17021225930069]
We explain the rationality of the Vision Transformer by analogy with the proven, practical Evolutionary Algorithm (EA).
We propose a more efficient EAT model, and design task-related heads to deal with different tasks more flexibly.
Our approach achieves state-of-the-art results on the ImageNet classification task compared with recent vision transformer works.
arXiv Detail & Related papers (2021-05-31T16:20:03Z)
- Improving Sequential Latent Variable Models with Autoregressive Flows [30.053464816814348]
We propose an approach for improving sequence modeling based on autoregressive normalizing flows.
Results are presented on three benchmark video datasets, where autoregressive flow-based dynamics improve log-likelihood performance.
arXiv Detail & Related papers (2020-10-07T05:14:37Z)
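As a hedged illustration of the mechanism (not this paper's exact model), an affine autoregressive flow conditions a per-step affine transform on the previous step, so the downstream sequence model only has to explain the residual noise:

```python
import torch
import torch.nn as nn


class AffineAutoregressiveFlow(nn.Module):
    """Per-step affine flow: z_t = (x_t - mu(x_{t-1})) / sigma(x_{t-1}).
    Invertible given the past, with tractable log-det = -sum log sigma."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.cond = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 2 * dim)
        )

    def forward(self, x):
        """x: (B, T, D) sequence. Returns noise z and log|det dz/dx|."""
        prev = torch.cat([torch.zeros_like(x[:, :1]), x[:, :-1]], dim=1)
        mu, log_sigma = self.cond(prev).chunk(2, dim=-1)
        z = (x - mu) * torch.exp(-log_sigma)
        log_det = -log_sigma.sum(dim=(1, 2))
        return z, log_det
```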
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences of its use.