ViewFusion: Towards Multi-View Consistency via Interpolated Denoising
- URL: http://arxiv.org/abs/2402.18842v1
- Date: Thu, 29 Feb 2024 04:21:38 GMT
- Title: ViewFusion: Towards Multi-View Consistency via Interpolated Denoising
- Authors: Xianghui Yang, Yan Zuo, Sameera Ramasinghe, Loris Bazzani, Gil
Avraham, Anton van den Hengel
- Abstract summary: We introduce ViewFusion, a training-free algorithm that can be seamlessly integrated into existing pre-trained diffusion models.
Our approach adopts an auto-regressive method that implicitly leverages previously generated views as context for the next view generation.
Our framework successfully extends single-view conditioned models to work in multiple-view conditional settings without any additional fine-tuning.
- Score: 48.02829400913904
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Novel-view synthesis through diffusion models has demonstrated remarkable
potential for generating diverse and high-quality images. Yet, the independent
process of image generation in these prevailing methods leads to challenges in
maintaining multiple-view consistency. To address this, we introduce
ViewFusion, a novel, training-free algorithm that can be seamlessly integrated
into existing pre-trained diffusion models. Our approach adopts an
auto-regressive method that implicitly leverages previously generated views as
context for the next view generation, ensuring robust multi-view consistency
during the novel-view generation process. Through a diffusion process that
fuses known-view information via interpolated denoising, our framework
successfully extends single-view conditioned models to work in multiple-view
conditional settings without any additional fine-tuning. Extensive experimental
results demonstrate the effectiveness of ViewFusion in generating consistent
and detailed novel views.
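
The interpolated-denoising idea described in the abstract can be pictured as follows: at every reverse-diffusion step for a new target view, the unchanged single-view-conditioned model produces one noise prediction per known (or previously generated) view, and those predictions are blended into a single denoising direction. The sketch below is illustrative only; the name `epsilon_model`, its conditioning arguments, the DDIM-style update, and the weighting scheme are assumptions, not the paper's exact formulation.

```python
# Minimal sketch of interpolated denoising with a pre-trained
# single-view-conditioned diffusion model. All interfaces are assumed.
import torch

@torch.no_grad()
def interpolated_denoising_step(
    epsilon_model,          # assumed: single-view-conditioned noise predictor
    x_t,                    # noisy latent of the target view at timestep t
    t,                      # current timestep
    known_views,            # known + previously generated conditioning views
    rel_poses,              # relative camera pose for each conditioning view
    weights,                # per-view interpolation weights (sum to 1)
    alpha_bar_t,            # noise-schedule term at t
    alpha_bar_prev,         # noise-schedule term at the previous timestep
):
    # 1) One noise prediction per conditioning view, reusing the unchanged
    #    single-view model (no fine-tuning).
    eps_per_view = [
        epsilon_model(x_t, t, cond_image=v, rel_pose=p)
        for v, p in zip(known_views, rel_poses)
    ]
    # 2) Interpolate the predictions into one fused denoising direction.
    eps = sum(w * e for w, e in zip(weights, eps_per_view))
    # 3) Deterministic DDIM-style update using the fused prediction.
    x0_pred = (x_t - torch.sqrt(1 - alpha_bar_t) * eps) / torch.sqrt(alpha_bar_t)
    x_prev = torch.sqrt(alpha_bar_prev) * x0_pred + torch.sqrt(1 - alpha_bar_prev) * eps
    return x_prev
```

The auto-regressive aspect then amounts to appending each fully denoised view to `known_views` before synthesizing the next target view, so later views are implicitly conditioned on earlier ones.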
Related papers
- Merging and Splitting Diffusion Paths for Semantically Coherent Panoramas [33.334956022229846]
We propose the Merge-Attend-Diffuse operator, which can be plugged into different types of pretrained diffusion models used in a joint diffusion setting.
Specifically, we merge the diffusion paths, reprogramming self- and cross-attention to operate on the aggregated latent space.
Our method maintains compatibility with the input prompt and visual quality of the generated images while increasing their semantic coherence.
arXiv Detail & Related papers (2024-08-28T09:22:32Z)
- ZePo: Zero-Shot Portrait Stylization with Faster Sampling [61.14140480095604]
This paper presents an inversion-free portrait stylization framework based on diffusion models that accomplishes content and style feature fusion in merely four sampling steps.
We propose a feature merging strategy to amalgamate redundant features in Consistency Features, thereby reducing the computational load of attention control.
arXiv Detail & Related papers (2024-08-10T08:53:41Z)
- MultiDiff: Consistent Novel View Synthesis from a Single Image [60.04215655745264]
MultiDiff is a novel approach for consistent novel view synthesis of scenes from a single RGB image.
Our results demonstrate that MultiDiff outperforms state-of-the-art methods on the challenging, real-world datasets RealEstate10K and ScanNet.
arXiv Detail & Related papers (2024-06-26T17:53:51Z)
- ViewFusion: Learning Composable Diffusion Models for Novel View Synthesis [47.57948804514928]
This work introduces ViewFusion, a state-of-the-art end-to-end generative approach to novel view synthesis.
ViewFusion simultaneously applies a diffusion denoising step to any number of input views of a scene.
arXiv Detail & Related papers (2024-02-05T11:22:14Z)
- Training-Free Semantic Video Composition via Pre-trained Diffusion Model [96.0168609879295]
Current approaches, predominantly trained on videos with adjusted foreground color and lighting, struggle to address deep semantic disparities beyond superficial adjustments.
We propose a training-free pipeline employing a pre-trained diffusion model imbued with semantic prior knowledge.
Experimental results reveal that our pipeline successfully ensures the visual harmony and inter-frame coherence of the outputs.
arXiv Detail & Related papers (2024-01-17T13:07:22Z)
- UpFusion: Novel View Diffusion from Unposed Sparse View Observations [66.36092764694502]
UpFusion can perform novel view synthesis and infer 3D representations for an object given a sparse set of reference images.
We show that this mechanism allows generating high-fidelity novel views while improving the synthesis quality given additional (unposed) images.
arXiv Detail & Related papers (2023-12-11T18:59:55Z)
- EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion [60.30030562932703]
EpiDiff is a localized interactive multiview diffusion model.
It generates 16 multiview images in just 12 seconds.
It surpasses previous methods in quality evaluation metrics.
arXiv Detail & Related papers (2023-12-11T05:20:52Z)
- Multi-View Unsupervised Image Generation with Cross Attention Guidance [23.07929124170851]
This paper introduces a novel pipeline for unsupervised training of a pose-conditioned diffusion model on single-category datasets.
We identify object poses by clustering the dataset based on the visibility and locations of specific object parts.
Our model, MIRAGE, surpasses prior work in novel view synthesis on real images.
arXiv Detail & Related papers (2023-12-07T14:55:13Z)
- On Conditioning the Input Noise for Controlled Image Generation with Diffusion Models [27.472482893004862]
Conditional image generation has paved the way for several breakthroughs in image editing, stock photo generation, and 3-D object generation.
In this work, we explore techniques to condition diffusion models with carefully crafted input noise artifacts.
arXiv Detail & Related papers (2022-05-08T13:18:14Z)