Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object
Structure via HyperNetworks
- URL: http://arxiv.org/abs/2312.16218v2
- Date: Fri, 5 Jan 2024 10:18:19 GMT
- Title: Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object
Structure via HyperNetworks
- Authors: Christian Simon, Sen He, Juan-Manuel Perez-Rua, Mengmeng Xu, Amine
Benhalloum, Tao Xiang
- Abstract summary: We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
- Score: 53.67497327319569
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving image-to-3D from a single view is an ill-posed problem, and current
neural reconstruction methods addressing it through diffusion models still rely
on scene-specific optimization, constraining their generalization capability.
To overcome the limitations of existing approaches regarding generalization and
consistency, we introduce a novel neural rendering technique. Our approach
employs the signed distance function as the surface representation and
incorporates generalizable priors through geometry-encoding volumes and
HyperNetworks. Specifically, our method builds neural encoding volumes from
generated multi-view inputs. At test time, a HyperNetwork adjusts the weights
of the SDF network conditioned on the input image, so the model adapts to novel
scenes in a feed-forward manner. To mitigate artifacts
derived from the synthesized views, we propose the use of a volume transformer
module to improve the aggregation of image features instead of processing each
viewpoint separately. With our proposed method, dubbed Hyper-VolTran, we
avoid the bottleneck of scene-specific optimization and maintain consistency
across the images generated from multiple viewpoints. Our experiments show the
advantages of our proposed approach with consistent results and rapid
generation.
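To make the hypernetwork idea concrete: below is a minimal, hypothetical PyTorch sketch (not the authors' code) of the pattern the abstract describes, in which an image embedding is mapped to the weights of one layer of an SDF MLP, so adapting to a new scene is a single forward pass rather than per-scene optimization. All module names and sizes here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HyperSDF(nn.Module):
    """Sketch: a hypernetwork predicts the weights of one hidden
    layer of an SDF MLP from an image embedding (sizes assumed)."""

    def __init__(self, embed_dim=512, hidden=256):
        super().__init__()
        self.hidden = hidden
        self.inp = nn.Linear(3, hidden)    # 3D query point -> features
        self.out = nn.Linear(hidden, 1)    # features -> signed distance
        # Hypernetwork: embedding -> weight matrix + bias of one layer.
        self.hyper = nn.Linear(embed_dim, hidden * hidden + hidden)

    def forward(self, points, image_embed):
        # points: (N, 3); image_embed: (embed_dim,)
        theta = self.hyper(image_embed)
        w = theta[: self.hidden ** 2].view(self.hidden, self.hidden)
        b = theta[self.hidden ** 2 :]
        h = torch.relu(self.inp(points))
        h = torch.relu(h @ w.T + b)        # scene-conditioned layer
        return self.out(h)                 # (N, 1) signed distances

sdf = HyperSDF()
d = sdf(torch.randn(1024, 3), torch.randn(512))  # one feed-forward pass
```

At test time the hypernetwork runs once per input image; every subsequent SDF query is an ordinary MLP evaluation, which is where the feed-forward adaptation speed comes from. The HyperPlanes entry under related papers relies on the same pattern to adapt NeRFs without gradient optimization at inference.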
Related papers
- Pixel-Aligned Multi-View Generation with Depth Guided Decoder [86.1813201212539]
We propose a novel method for pixel-level image-to-multi-view generation.
Unlike prior work, we incorporate attention layers across multi-view images in the VAE decoder of a latent video diffusion model.
Our model enables better pixel alignment across multi-view images (a generic sketch of this cross-view attention pattern appears after this list).
arXiv Detail & Related papers (2024-08-26T04:56:41Z) - GenS: Generalizable Neural Surface Reconstruction from Multi-View Images [20.184657468900852]
GenS is an end-to-end generalizable neural surface reconstruction model.
Our representation is more powerful, recovering high-frequency details while maintaining global smoothness.
Experiments on popular benchmarks show that our model can generalize well to new scenes.
arXiv Detail & Related papers (2024-06-04T17:13:10Z) - HyperPlanes: Hypernetwork Approach to Rapid NeRF Adaptation [4.53411151619456]
We propose a few-shot learning approach based on the hypernetwork paradigm that does not require gradient optimization during inference.
We have developed an efficient method for generating a high-quality 3D object representation from a small number of images in a single step.
arXiv Detail & Related papers (2024-02-02T16:10:29Z) - InvertAvatar: Incremental GAN Inversion for Generalized Head Avatars [40.10906393484584]
We propose a novel framework that enhances avatar reconstruction through incremental GAN inversion, improving fidelity as more frames are observed.
Our architecture emphasizes pixel-aligned image-to-image translation, avoiding the need to learn correspondences between observation and canonical spaces.
The proposed paradigm demonstrates state-of-the-art performance on one-shot and few-shot avatar animation tasks.
arXiv Detail & Related papers (2023-12-03T18:59:15Z) - Distance Weighted Trans Network for Image Completion [52.318730994423106]
We propose a new architecture that relies on Distance-based Weighted Transformer (DWT) to better understand the relationships between an image's components.
CNNs are used to augment the local texture information of coarse priors, while
DWT blocks recover coarse textures and coherent visual structures.
arXiv Detail & Related papers (2023-10-11T12:46:11Z) - High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z) - Vision Transformer for NeRF-Based View Synthesis from a Single Input
Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering (the rendering quadrature is sketched after this list).
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z) - Extracting Triangular 3D Models, Materials, and Lighting From Images [59.33666140713829]
We present an efficient method for joint optimization of materials and lighting from multi-view image observations.
We leverage meshes with spatially-varying materials and environment lighting that can be deployed in any traditional graphics engine.
arXiv Detail & Related papers (2021-11-24T13:58:20Z)