Neural Implicit Field Editing Considering Object-environment Interaction
- URL: http://arxiv.org/abs/2311.00425v1
- Date: Wed, 1 Nov 2023 10:35:47 GMT
- Title: Neural Implicit Field Editing Considering Object-environment Interaction
- Authors: Zhihong Zeng, Zongji Wang, Yuanben Zhang, Weinan Cai, Zehao Cao, Lili
Zhang, Yan Guo, Yanhong Zhang and Junyi Liu
- Abstract summary: We propose an Object and Scene environment Interaction aware (OSI-aware) system.
It is a novel two-stream neural rendering system considering object and scene environment interaction.
It achieves competitive rendering quality in novel-view synthesis tasks.
- Score: 5.285267388811263
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D scene editing methods based on neural implicit fields have gained wide
attention and achieved excellent results in 3D editing tasks. However, existing
methods often blend the interaction between objects and the scene environment,
so changes in scene appearance such as shadows are not reflected in the rendered
view. In this paper, we propose an Object and Scene environment Interaction
aware (OSI-aware) system, a novel two-stream neural rendering system that
accounts for object and scene environment interaction. To recover illumination
conditions from the mixed object-environment observations, the system separates
the interaction between objects and the scene environment via an intrinsic
decomposition method. To capture the corresponding changes in scene appearance
caused by object-level editing, we introduce a depth-map-guided scene inpainting
method and a shadow rendering method based on a point-matching strategy.
Extensive experiments demonstrate that our pipeline produces reasonable
appearance changes in scene editing tasks and achieves competitive rendering
quality in novel-view synthesis tasks.
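A central ingredient above is separating illumination effects (e.g., shadows) from object appearance before editing. The snippet below is a minimal, illustrative sketch of classic intrinsic decomposition, I = A ⊙ S (albedo times shading), optimized directly on a single image with common smoothness and sparsity priors. It is not the paper's two-stream renderer; the function name, priors, and hyperparameters are assumptions made for illustration.

```python
# Minimal sketch of intrinsic decomposition I = A * S (albedo x shading).
# NOT the OSI-aware two-stream renderer from the paper; it only illustrates
# separating illumination (shading, incl. shadows) from reflectance so that
# object-level edits can update them independently. Priors (smooth shading,
# piecewise-constant albedo) and hyperparameters are assumptions.
import torch

def intrinsic_decompose(image, steps=500, lr=0.05, w_shading=1.0, w_albedo=0.1):
    """image: (H, W, 3) tensor in [0, 1]. Returns (albedo, shading)."""
    eps = 1e-4
    log_img = torch.log(image.clamp_min(eps))
    # Optimize log-albedo and log-shading so that log A + log S = log I.
    log_a = torch.zeros_like(log_img, requires_grad=True)
    log_s = log_img.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([log_a, log_s], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        recon = (log_a + log_s - log_img).pow(2).mean()  # reconstruction term
        # Shading varies smoothly (L2 on gradients); albedo is piecewise
        # constant (L1 on gradients). These priors disambiguate the split.
        s_dx = (log_s[:, 1:] - log_s[:, :-1]).pow(2).mean()
        s_dy = (log_s[1:, :] - log_s[:-1, :]).pow(2).mean()
        a_dx = (log_a[:, 1:] - log_a[:, :-1]).abs().mean()
        a_dy = (log_a[1:, :] - log_a[:-1, :]).abs().mean()
        loss = recon + w_shading * (s_dx + s_dy) + w_albedo * (a_dx + a_dy)
        loss.backward()
        opt.step()
    return torch.exp(log_a.detach()), torch.exp(log_s.detach())

if __name__ == "__main__":
    img = torch.rand(64, 64, 3)          # stand-in for a rendered view
    albedo, shading = intrinsic_decompose(img)
    print(albedo.shape, shading.shape)   # albedo * shading ~ img
```

Once shading is isolated in this way, an object-level edit can in principle update the shading component (e.g., shadows) separately from reflectance, which is the behavior the OSI-aware system targets.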
Related papers
- ViFu: Multiple 360$^\circ$ Objects Reconstruction with Clean Background via Visible Part Fusion [7.8788463395442045]
We propose a method to segment and recover a static, clean background and multiple 360$^\circ$ objects from observations of scenes at different timestamps.
Our basic idea is that, by observing the same set of objects in various arrangements, parts that are invisible in one scene may become visible in others.
arXiv Detail & Related papers (2024-04-15T02:44:23Z)
- Style-Consistent 3D Indoor Scene Synthesis with Decoupled Objects [84.45345829270626]
Controllable 3D indoor scene synthesis stands at the forefront of technological progress.
Current methods for scene stylization are limited to applying styles to the entire scene.
We introduce a unique pipeline designed for synthesizing 3D indoor scenes.
arXiv Detail & Related papers (2024-01-24T03:10:36Z)
- Point'n Move: Interactive Scene Object Manipulation on Gaussian Splatting Radiance Fields [4.5907922403638945]
Point'n Move is a method that achieves interactive scene object manipulation with exposed region inpainting.
We adopt Gaussian Splatting Radiance Field as the scene representation and fully leverage its explicit nature and speed advantage.
arXiv Detail & Related papers (2023-11-28T12:33:49Z)
- ASSIST: Interactive Scene Nodes for Scalable and Realistic Indoor Simulation [17.34617771579733]
We present ASSIST, an object-wise neural radiance field as a panoptic representation for compositional and realistic simulation.
A novel scene node data structure stores the information of each object in a unified fashion, enabling online interaction in both intra- and cross-scene settings (see the sketch of an object-wise node structure after this list).
arXiv Detail & Related papers (2023-11-10T17:56:43Z)
- DisCoScene: Spatially Disentangled Generative Radiance Fields for Controllable 3D-aware Scene Synthesis [90.32352050266104]
DisCoScene is a 3D-aware generative model for high-quality and controllable scene synthesis.
It disentangles the whole scene into object-centric generative fields by learning on only 2D images with the global-local discrimination.
We demonstrate state-of-the-art performance on many scene datasets, including the challenging outdoor dataset.
arXiv Detail & Related papers (2022-12-22T18:59:59Z)
- Neural Groundplans: Persistent Neural Scene Representations from a Single Image [90.04272671464238]
We present a method to map 2D image observations of a scene to a persistent 3D scene representation.
We propose conditional neural groundplans as persistent and memory-efficient scene representations.
arXiv Detail & Related papers (2022-07-22T17:41:24Z)
- Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation [58.16911861917018]
We present a novel method for performing flexible, 3D-aware image content manipulation while enabling high-quality novel view synthesis.
Our model couples learnt scene-specific feature volumes with a scene agnostic neural rendering network.
We demonstrate various scene manipulations, including mixing scenes, deforming objects and inserting objects into scenes, while still producing photo-realistic results.
arXiv Detail & Related papers (2022-04-22T17:57:00Z)
- Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering [42.37007176376849]
We present a novel neural scene rendering system, which learns an object-compositional neural radiance field and produces realistic rendering for cluttered, real-world scenes.
To survive the training in heavily cluttered scenes, we propose a scene-guided training strategy to solve the 3D space ambiguity in the occluded regions and learn sharp boundaries for each object.
arXiv Detail & Related papers (2021-09-04T11:37:18Z)
- Visiting the Invisible: Layer-by-Layer Completed Scene Decomposition [57.088328223220934]
Existing scene understanding systems mainly focus on recognizing the visible parts of a scene, ignoring the intact appearance of physical objects in the real world.
In this work, we propose a higher-level scene understanding system to tackle both visible and invisible parts of objects and backgrounds in a given scene.
arXiv Detail & Related papers (2021-04-12T11:37:23Z)
- Neural Scene Graphs for Dynamic Scenes [57.65413768984925]
We present the first neural rendering method that decomposes dynamic scenes into scene graphs.
We learn implicitly encoded scenes, combined with a jointly learned latent representation to describe objects with a single implicit function.
arXiv Detail & Related papers (2020-11-20T12:37:10Z)
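Several of the related works above (e.g., ASSIST and Neural Scene Graphs) organize the scene as a set of per-object nodes around a shared background. The sketch below is a hypothetical illustration of such an object-wise scene-node structure and the edit operations it enables; the class names, fields, and methods are assumptions for illustration, not the APIs of those papers.

```python
# Hypothetical sketch of an object-wise "scene node" representation, in the
# spirit of ASSIST / Neural Scene Graphs; field names and edit operations are
# illustrative assumptions, not the actual data structures of those papers.
from dataclasses import dataclass, field
import numpy as np

@dataclass
class SceneNode:
    object_id: int
    pose: np.ndarray = field(default_factory=lambda: np.eye(4))      # object-to-world (4x4)
    bbox: np.ndarray = field(default_factory=lambda: np.ones(3))     # axis-aligned extents
    latent: np.ndarray = field(default_factory=lambda: np.zeros(64)) # per-object shape/appearance code

@dataclass
class SceneGraph:
    background: object = None             # e.g., a handle to a background radiance field
    nodes: dict = field(default_factory=dict)

    def insert(self, node: SceneNode):
        self.nodes[node.object_id] = node

    def remove(self, object_id: int):
        self.nodes.pop(object_id, None)   # exposed region would need inpainting

    def move(self, object_id: int, new_pose: np.ndarray):
        self.nodes[object_id].pose = new_pose  # shadows/shading must be re-rendered

if __name__ == "__main__":
    graph = SceneGraph()
    graph.insert(SceneNode(object_id=1))
    graph.move(1, np.eye(4))
    graph.remove(1)
```

Removing or moving a node is exactly the kind of object-level edit that, in the main paper above, triggers background inpainting and shadow re-rendering.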
This list is automatically generated from the titles and abstracts of the papers on this site.