Point'n Move: Interactive Scene Object Manipulation on Gaussian
Splatting Radiance Fields
- URL: http://arxiv.org/abs/2311.16737v1
- Date: Tue, 28 Nov 2023 12:33:49 GMT
- Title: Point'n Move: Interactive Scene Object Manipulation on Gaussian
Splatting Radiance Fields
- Authors: Jiajun Huang, Hongchuan Yu
- Abstract summary: Point'n Move is a method that achieves interactive scene object manipulation with exposed region inpainting.
We adopt Gaussian Splatting Radiance Field as the scene representation and fully leverage its explicit nature and speed advantage.
- Score: 4.5907922403638945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Point'n Move, a method that achieves interactive scene object
manipulation with exposed region inpainting. Interactivity here further extends to
intuitive object selection and real-time editing. To achieve this, we adopt the
Gaussian Splatting Radiance Field as the scene representation and fully leverage
its explicit nature and speed advantage. Its explicit formulation allows us to
devise a dual-stage self-prompting segmentation algorithm that lifts 2D prompt
points to a 3D mask, to perform mask refinement and merging, to minimize scene
change while providing a good initialization for inpainting, and to perform
editing in real time without per-edit training, all of which leads to superior
quality and performance. We test our method by editing both forward-facing and
360° scenes. We also compare our method against existing scene object removal
methods, showing superior quality while being more capable and faster.
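The select-and-move idea in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the plain-numpy data layout, the fixed-radius selection, and all function names are assumptions, and the paper's dual-stage self-prompting mask refinement with a 2D segmenter is only indicated by a comment.

```python
# Minimal sketch (not the authors' code) of selecting and moving Gaussians.
# Assumptions: Gaussian centers live in a plain (N, 3) numpy array, prompts
# are pixel coordinates on a rendered view with a depth map, and selection is
# a fixed-radius ball around each lifted prompt point.
import numpy as np

def lift_prompt_points(points_2d, depth, K, cam_to_world):
    """Back-project 2D prompt pixels to world space using the rendered
    depth map, pinhole intrinsics K (3x3), and camera pose (4x4)."""
    us, vs = points_2d[:, 0], points_2d[:, 1]
    zs = depth[vs.astype(int), us.astype(int)]
    xs = (us - K[0, 2]) * zs / K[0, 0]
    ys = (vs - K[1, 2]) * zs / K[1, 1]
    pts_cam = np.stack([xs, ys, zs, np.ones_like(zs)], axis=1)
    return (cam_to_world @ pts_cam.T).T[:, :3]

def select_gaussians(means, prompts_3d, radius=0.05):
    """Stage-one coarse 3D mask: Gaussians within `radius` of any lifted
    prompt. A second, self-prompting stage would render this mask and
    re-prompt a 2D segmenter to refine it; omitted here."""
    d = np.linalg.norm(means[:, None, :] - prompts_3d[None, :, :], axis=-1)
    return d.min(axis=1) < radius

def move_selection(means, mask, R, t):
    """Real-time edit: rigidly transform only the selected Gaussians.
    A full implementation would also rotate covariances/orientations."""
    out = means.copy()
    out[mask] = out[mask] @ R.T + t
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    means = rng.uniform(-1.0, 1.0, size=(10_000, 3))  # toy Gaussian centers
    prompts_3d = np.zeros((1, 3))                     # pretend lifted prompt
    mask = select_gaussians(means, prompts_3d, radius=0.3)
    moved = move_selection(means, mask, np.eye(3), np.array([0.5, 0.0, 0.0]))
    print(f"{mask.sum()} Gaussians selected and translated")
```

Because the representation is explicit, the edit reduces to a cheap array update rather than a retraining step, which is what makes the real-time claim plausible.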
Related papers
- Efficient Dynamic Scene Editing via 4D Gaussian-based Static-Dynamic Separation [25.047474784265773]
Recent 4D dynamic scene editing methods require editing thousands of 2D images used for dynamic scene synthesis.
These methods are not scalable with respect to the temporal dimension of the dynamic scene.
We propose an efficient dynamic scene editing method that is more scalable with respect to the temporal dimension.
arXiv Detail & Related papers (2025-02-04T08:18:49Z)
- PrEditor3D: Fast and Precise 3D Shape Editing [100.09112677669376]
We propose a training-free approach to 3D editing that enables the editing of a single shape within a few minutes.
The edited 3D mesh aligns well with the prompts, and remains identical for regions that are not intended to be altered.
arXiv Detail & Related papers (2024-12-09T15:44:47Z)
- CTRL-D: Controllable Dynamic 3D Scene Editing with Personalized 2D Diffusion [13.744253074367885]
We introduce a novel framework that first fine-tunes the InstructPix2Pix model, followed by a two-stage optimization of the scene.
Our approach enables consistent and precise local edits without the need for tracking desired editing regions.
Compared to state-of-the-art methods, our approach offers more flexible and controllable local scene editing.
arXiv Detail & Related papers (2024-12-02T18:38:51Z)
- 3DitScene: Editing Any Scene via Language-guided Disentangled Gaussian Splatting [100.94916668527544]
Existing methods focus solely on either 2D individual-object editing or 3D global scene editing.
We propose 3DitScene, a novel and unified scene editing framework.
It enables seamless editing from 2D to 3D, allowing precise control over scene composition and individual objects.
arXiv Detail & Related papers (2024-05-28T17:59:01Z)
- RefFusion: Reference Adapted Diffusion Models for 3D Scene Inpainting [63.567363455092234]
RefFusion is a novel 3D inpainting method based on a multi-scale personalization of an image inpainting diffusion model to the given reference view.
Our framework achieves state-of-the-art results for object removal while maintaining high controllability.
arXiv Detail & Related papers (2024-04-16T17:50:02Z)
- ZONE: Zero-Shot Instruction-Guided Local Editing [56.56213730578504]
We propose a Zero-shot instructiON-guided local image Editing approach, termed ZONE.
We first convert the editing intent from the user-provided instruction into specific image editing regions through InstructPix2Pix.
We then propose a Region-IoU scheme for precise image layer extraction from an off-the-shelf segment model.
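The Region-IoU step just described can be sketched as a plain mask-matching computation. This is an illustration, not ZONE's implementation: the per-pixel diff threshold used to localize the InstructPix2Pix edit is a stand-in heuristic, and all names are hypothetical.

```python
# Illustrative sketch of Region-IoU segment selection (not ZONE's code).
# Assumptions: the edit region is localized by thresholding the per-pixel
# RGB difference between the input image and the instruction-edited output,
# and segment masks come from an off-the-shelf segmenter as boolean arrays.
import numpy as np

def edit_region(before, after, thresh=25.0):
    """Boolean mask of pixels changed by the instruction-guided edit."""
    diff = np.linalg.norm(after.astype(np.float32) - before.astype(np.float32), axis=-1)
    return diff > thresh

def best_segment(region, segment_masks):
    """Return (index, IoU) of the segment that best overlaps the region."""
    def iou(a, b):
        union = np.logical_or(a, b).sum()
        return np.logical_and(a, b).sum() / union if union else 0.0
    scores = [iou(region, m) for m in segment_masks]
    best = int(np.argmax(scores))
    return best, scores[best]
```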
arXiv Detail & Related papers (2023-12-28T02:54:34Z)
- Learning Naturally Aggregated Appearance for Efficient 3D Editing [90.57414218888536]
We learn the color field as an explicit 2D appearance aggregation, also called a canonical image.
We complement the canonical image with a projection field that maps 3D points onto 2D pixels for texture query.
Our approach demonstrates remarkable efficiency by being at least 20 times faster per edit compared to existing NeRF-based editing methods.
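The texture-query path described above (a projection field maps 3D points to canonical-image pixels, and colors are fetched by interpolation) can be sketched as follows. The callable projection field, the bilinear-sampling choice, and all names are illustrative assumptions rather than the paper's implementation.

```python
# Illustrative sketch of the canonical-image texture query (not the paper's
# code). Assumptions: the projection field is any callable mapping (N, 3)
# points to (N, 2) continuous pixel coordinates in the canonical image.
import numpy as np

def bilinear_sample(image, uv):
    """Sample image (H, W, 3) at continuous pixel coordinates uv (N, 2)."""
    h, w = image.shape[:2]
    x = np.clip(uv[:, 0], 0.0, w - 1.001)
    y = np.clip(uv[:, 1], 0.0, h - 1.001)
    x0, y0 = x.astype(int), y.astype(int)
    fx, fy = (x - x0)[:, None], (y - y0)[:, None]
    top = image[y0, x0] * (1 - fx) + image[y0, x0 + 1] * fx
    bot = image[y0 + 1, x0] * (1 - fx) + image[y0 + 1, x0 + 1] * fx
    return top * (1 - fy) + bot * fy

def query_color(points_3d, projection_field, canonical_image):
    """Color lookup: project 3D points to canonical-image UVs, then sample."""
    uv = projection_field(points_3d)  # (N, 2) pixel coordinates
    return bilinear_sample(canonical_image, uv)

if __name__ == "__main__":
    canonical = np.random.rand(64, 64, 3)
    # Hypothetical projection field: orthographic drop of z, scaled to pixels.
    proj = lambda p: (p[:, :2] * 0.5 + 0.5) * 63.0
    print(query_color(np.random.uniform(-1, 1, (10, 3)), proj, canonical).shape)
```

Editing appearance then reduces to editing a single 2D canonical image, which is consistent with the per-edit speedup the summary claims over NeRF-based editors.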
arXiv Detail & Related papers (2023-12-11T18:59:31Z)
- Neural Implicit Field Editing Considering Object-environment Interaction [5.285267388811263]
We propose an Object and Scene environment Interaction aware (OSI-aware) system, a novel two-stream neural rendering system that models the interaction between objects and their scene environment.
It achieves competitive rendering quality in novel-view synthesis tasks.
arXiv Detail & Related papers (2023-11-01T10:35:47Z)
- OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation with Neural Radiance Fields [53.32527220134249]
The emergence of Neural Radiance Fields (NeRF) for novel view synthesis has increased interest in 3D scene editing.
Current methods face challenges such as time-consuming object labeling, limited capability to remove specific targets, and compromised rendering quality after removal.
This paper proposes a novel object-removing pipeline, named OR-NeRF, that can remove objects from 3D scenes with user-given points or text prompts on a single view.
arXiv Detail & Related papers (2023-05-17T18:18:05Z)
- SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields [26.296017756560467]
In 3D, solutions must be consistent across multiple views and geometrically valid.
We propose a novel 3D inpainting method that addresses these challenges.
We first demonstrate the superiority of our approach on multiview segmentation, comparing against NeRF-based methods and 2D segmentation approaches.
arXiv Detail & Related papers (2022-11-22T13:14:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.