3D-aware Conditional Image Synthesis
- URL: http://arxiv.org/abs/2302.08509v2
- Date: Mon, 1 May 2023 16:50:35 GMT
- Title: 3D-aware Conditional Image Synthesis
- Authors: Kangle Deng, Gengshan Yang, Deva Ramanan, Jun-Yan Zhu
- Abstract summary: pix2pix3D is a 3D-aware conditional generative model for controllable photorealistic image synthesis.
We build an interactive system that allows users to edit the label map from any viewpoint and generate outputs accordingly.
- Score: 76.68701564600998
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose pix2pix3D, a 3D-aware conditional generative model for
controllable photorealistic image synthesis. Given a 2D label map, such as a
segmentation or edge map, our model learns to synthesize a corresponding image
from different viewpoints. To enable explicit 3D user control, we extend
conditional generative models with neural radiance fields. Given
widely-available monocular images and label map pairs, our model learns to
assign a label to every 3D point in addition to color and density, which
enables it to render the image and pixel-aligned label map simultaneously.
Finally, we build an interactive system that allows users to edit the label map
from any viewpoint and generate outputs accordingly.
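The core mechanism described above — a radiance field that predicts a semantic label alongside color and density, so that a single volume-rendering pass produces both an image and a pixel-aligned label map — can be illustrated with a minimal PyTorch sketch. This assumes a plain NeRF-style MLP; the conditioning on the input 2D label map and the adversarial training used by pix2pix3D are omitted, and all names are illustrative rather than the authors' API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Minimal sketch of a label-aware radiance field: each 3D point gets a
# density and a color, plus semantic label logits, so one volume-rendering
# pass yields an image and a pixel-aligned label map.
class LabelRadianceField(nn.Module):
    def __init__(self, n_classes: int, hidden: int = 64):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)
        self.color_head = nn.Linear(hidden, 3)
        self.label_head = nn.Linear(hidden, n_classes)

    def forward(self, pts):  # pts: (rays, samples, 3)
        h = self.trunk(pts)
        sigma = F.softplus(self.density_head(h))   # non-negative density
        rgb = torch.sigmoid(self.color_head(h))    # colors in [0, 1]
        logits = self.label_head(h)                # per-point label logits
        return sigma, rgb, logits


def render(field, pts, deltas):
    """Alpha-composite color and label logits with shared ray weights."""
    sigma, rgb, logits = field(pts)                # deltas: (R, S, 1) segment lengths
    alpha = 1.0 - torch.exp(-sigma * deltas)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=1),
        dim=1)[:, :-1]                             # transmittance along each ray
    weights = alpha * trans
    image = (weights * rgb).sum(dim=1)             # (R, 3)
    label_map = (weights * logits).sum(dim=1)      # (R, n_classes)
    return image, label_map
```

For example, `render(LabelRadianceField(12), torch.rand(4, 32, 3), torch.full((4, 32, 1), 0.01))` returns per-ray RGB and label logits. Because the color and the label logits are composited with the same per-sample weights, the rendered label map is pixel-aligned with the rendered image by construction, which is the property the abstract highlights.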
Related papers
- Weakly-Supervised 3D Scene Graph Generation via Visual-Linguistic Assisted Pseudo-labeling [9.440800948514449]
We propose a weakly-supervised 3D scene graph generation method via Visual-Linguistic Assisted Pseudo-labeling.
Our 3D-VLAP exploits the superior ability of current large-scale visual-linguistic models to align the semantics between texts and 2D images.
We design an edge self-attention based graph neural network to generate scene graphs of 3D point cloud scenes.
arXiv Detail & Related papers (2024-04-03T07:30:09Z)
- 3D Congealing: 3D-Aware Image Alignment in the Wild [44.254247801001675]
3D Congealing is the problem of 3D-aware alignment for 2D images that capture semantically similar objects.
We introduce a general framework that tackles the task without assuming shape templates, poses, or any camera parameters.
Our framework can be used for various tasks such as correspondence matching, pose estimation, and image editing.
arXiv Detail & Related papers (2024-04-02T17:32:12Z)
- Learning Naturally Aggregated Appearance for Efficient 3D Editing [94.47518916521065]
We propose to replace the color field with an explicit 2D appearance aggregation, also called a canonical image.
To avoid distortion and to make editing convenient, we complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture lookup.
Our representation, dubbed AGAP, supports various forms of 3D editing (e.g., stylization, interactive drawing, and content extraction) without re-optimization; a minimal sketch of the lookup follows below.
arXiv Detail & Related papers (2023-12-11T18:59:31Z)
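The AGAP lookup described above — an explicit, editable canonical image paired with a projection field that maps 3D points to 2D pixels — might look roughly like the following PyTorch sketch. The module structure and layer sizes are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of a canonical-image appearance lookup: colors live in
# an explicit 2D image, and a small MLP projects each 3D point to a UV
# coordinate used for bilinear texture lookup. Editing the canonical image
# (e.g., drawing on it) then changes 3D appearance without re-optimization.
class CanonicalAppearance(nn.Module):
    def __init__(self, res: int = 256, hidden: int = 64):
        super().__init__()
        self.canonical = nn.Parameter(torch.rand(1, 3, res, res))  # editable image
        self.project = nn.Sequential(                # projection field: 3D -> UV
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 2), nn.Tanh(),         # UV in [-1, 1]
        )

    def forward(self, pts):                          # pts: (N, 3)
        uv = self.project(pts)                       # (N, 2)
        grid = uv.view(1, -1, 1, 2)                  # layout expected by grid_sample
        rgb = F.grid_sample(self.canonical, grid, align_corners=True)
        return rgb.view(3, -1).t()                   # (N, 3) looked-up colors
```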
- CC3D: Layout-Conditioned Generation of Compositional 3D Scenes [49.281006972028194]
We introduce CC3D, a conditional generative model that synthesizes complex 3D scenes conditioned on 2D semantic scene layouts.
Our evaluations on synthetic 3D-FRONT and real-world KITTI-360 datasets demonstrate that our model generates scenes of improved visual and geometric quality.
arXiv Detail & Related papers (2023-03-21T17:59:02Z)
- 3D-TOGO: Towards Text-Guided Cross-Category 3D Object Generation [107.46972849241168]
The 3D-TOGO model generates 3D objects in the form of neural radiance fields with good texture.
Experiments on the largest 3D object dataset (i.e., ABO) verify that 3D-TOGO generates higher-quality 3D objects than prior approaches.
arXiv Detail & Related papers (2022-12-02T11:31:49Z)
- A Shading-Guided Generative Implicit Model for Shape-Accurate 3D-Aware Image Synthesis [163.96778522283967]
We propose a shading-guided generative implicit model that is able to learn a starkly improved shape representation.
An accurate 3D shape should also yield realistic renderings under different lighting conditions; a sketch of this shading principle follows below.
Our experiments on multiple datasets show that the proposed approach achieves photorealistic 3D-aware image synthesis.
arXiv Detail & Related papers (2021-10-29T10:53:12Z)
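The shading principle above can be illustrated by computing surface normals from the density field and shading them under a randomly sampled light; inaccurate geometry then produces implausible shaded renderings, which is the training signal. The sketch below illustrates the principle under a Lambertian model and is not the paper's pipeline; `density_fn` stands in for any differentiable density field.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: derive normals as the negative normalized gradient of
# the density and apply Lambertian shading under a random light direction.
def lambertian_shading(density_fn, pts, ambient: float = 0.3):
    pts = pts.requires_grad_(True)
    sigma = density_fn(pts)                                   # (N, 1) densities
    grad = torch.autograd.grad(sigma.sum(), pts, create_graph=True)[0]
    normals = -F.normalize(grad, dim=-1)                      # (N, 3) surface normals
    light = F.normalize(torch.randn(3), dim=0)                # random light direction
    diffuse = (normals @ light).clamp(min=0.0).unsqueeze(-1)  # (N, 1)
    return ambient + (1.0 - ambient) * diffuse                # shading in [0, 1]
```

The idea is that multiplying a predicted albedo by such a shading term under varying lights exposes geometry that only looks correct under one fixed illumination.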
- Neural View Synthesis and Matching for Semi-Supervised Few-Shot Learning of 3D Pose [10.028521796737314]
We study the problem of learning to estimate the 3D object pose from a few labelled examples and a collection of unlabelled data.
Our main contribution is a learning framework, neural view synthesis and matching, that reliably transfers 3D pose annotations from labelled to unlabelled images (a rough sketch follows below).
arXiv Detail & Related papers (2021-10-27T06:53:53Z)
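A rough sketch of the synthesis-and-matching idea: synthesize feature maps for candidate poses and keep the best-matching one as a pseudo-label for the unlabelled image. `render_features` and `extract_features` are hypothetical stand-ins for the paper's learned components, whose matching objective is more involved.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of pose pseudo-labeling by view synthesis and matching.
# render_features(pose) synthesizes a feature map for a candidate viewpoint;
# extract_features(img) computes feature maps of the unlabelled image.
def pseudo_label_pose(unlabeled_img, candidate_poses,
                      render_features, extract_features):
    target = extract_features(unlabeled_img)      # (C, H, W) observed features
    scores = []
    for pose in candidate_poses:
        synth = render_features(pose)             # (C, H, W) synthesized features
        scores.append(F.cosine_similarity(synth.flatten(),
                                          target.flatten(), dim=0))
    best = int(torch.stack(scores).argmax())      # best-matching viewpoint
    return candidate_poses[best], scores[best]    # transferred pose annotation
```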
- Realistic Image Synthesis with Configurable 3D Scene Layouts [59.872657806747576]
We propose a novel approach to realistic-looking image synthesis based on a 3D scene layout.
Our approach takes a 3D scene with semantic class labels as input and trains a 3D scene painting network.
With the trained painting network, realistic-looking images for the input 3D scene can be rendered and manipulated.
arXiv Detail & Related papers (2021-08-23T09:44:56Z)
- AUTO3D: Novel view synthesis through unsupervisely learned variational viewpoint and global 3D representation [27.163052958878776]
This paper targets learning-based novel view synthesis from a single or a few 2D images without pose supervision.
We construct an end-to-end trainable conditional variational framework that disentangles the unsupervisedly learned relative pose/rotation from the implicit global 3D representation.
Our system achieves implicit 3D understanding without explicit 3D reconstruction.
arXiv Detail & Related papers (2020-07-13T18:51:27Z)
- Convolutional Generation of Textured 3D Meshes [34.20939983046376]
We propose a framework that can generate triangle meshes and associated high-resolution texture maps, using only 2D supervision from single-view natural images.
A key contribution of our work is the encoding of the mesh and texture as 2D representations, which are semantically aligned and can be easily modeled by a 2D convolutional GAN (sketched below).
We demonstrate the efficacy of our method on Pascal3D+ Cars and CUB, both in an unconditional setting and in settings where the model is conditioned on class labels, attributes, and text.
arXiv Detail & Related papers (2020-06-13T15:23:29Z)
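The 2D encoding described above — geometry and texture as two semantically aligned images produced by one convolutional generator — can be sketched as follows. Layer sizes are illustrative assumptions; in the paper's setting the displacement map would deform a UV-parameterized template mesh.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: a 2D convolutional generator emits a 3-channel
# displacement map (to deform a UV-parameterized template mesh) and a
# 3-channel texture map, pixel-aligned in UV space, so a standard 2D GAN
# architecture can model both.
class MeshTextureGenerator(nn.Module):
    def __init__(self, z_dim: int = 128, res: int = 32):
        super().__init__()
        self.res = res
        self.fc = nn.Linear(z_dim, 256 * (res // 8) ** 2)
        layers = []
        for c_in, c_out in [(256, 128), (128, 64), (64, 32)]:
            layers += [nn.Upsample(scale_factor=2),
                       nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU()]
        self.body = nn.Sequential(*layers)
        self.head = nn.Conv2d(32, 6, 3, padding=1)   # 3 displacement + 3 texture

    def forward(self, z):                            # z: (B, z_dim)
        h = self.fc(z).view(-1, 256, self.res // 8, self.res // 8)
        out = self.head(self.body(h))                # (B, 6, res, res)
        displacement = out[:, :3]                    # added to template vertices
        texture = torch.sigmoid(out[:, 3:])          # RGB texture in [0, 1]
        return displacement, texture
```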
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.