PUGS: Zero-shot Physical Understanding with Gaussian Splatting
- URL: http://arxiv.org/abs/2502.12231v1
- Date: Mon, 17 Feb 2025 18:59:42 GMT
- Title: PUGS: Zero-shot Physical Understanding with Gaussian Splatting
- Authors: Yinghao Shuai, Ran Yu, Yuantao Chen, Zijian Jiang, Xiaowei Song, Nan Wang, Jv Zheng, Jianzhu Ma, Meng Yang, Zhicheng Wang, Wenbo Ding, Hao Zhao
- Abstract summary: Current robotic systems can understand categories and poses of objects well.
But understanding physical properties like mass, friction, and hardness, in the wild, remains challenging.
We propose a new method that reconstructs 3D objects using the Gaussian splatting representation and predicts various physical properties in a zero-shot manner.
- Score: 23.343865500013887
- Abstract: Current robotic systems can understand the categories and poses of objects well. But understanding physical properties like mass, friction, and hardness, in the wild, remains challenging. We propose a new method that reconstructs 3D objects using the Gaussian splatting representation and predicts various physical properties in a zero-shot manner. We propose two techniques during the reconstruction phase: a geometry-aware regularization loss function to improve the shape quality and a region-aware feature contrastive loss function to promote region affinity. Two other new techniques are designed during inference: a feature-based property propagation module and a volume integration module tailored for the Gaussian representation. Our framework is named zero-shot physical understanding with Gaussian splatting, or PUGS. PUGS achieves new state-of-the-art results on the standard ABO-500 mass prediction benchmark. We provide extensive quantitative ablations and qualitative visualizations to demonstrate the mechanism of our designs. We show the proposed methodology can help address challenging real-world grasping tasks. Our code, data, and models are available at https://github.com/EverNorif/PUGS
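The abstract's "volume integration module tailored for the Gaussian representation" suggests how mass could be derived from a reconstructed Gaussian scene. Below is a minimal sketch of the idea, not the authors' implementation: `gaussian_volume` and `estimate_mass` are hypothetical names, and the density values are illustrative. It uses the fact that an opacity-scaled anisotropic 3D Gaussian has the closed-form integral opacity * (2*pi)^(3/2) * sqrt(det(Sigma)), where sqrt(det(Sigma)) equals the product of the per-axis scales.

```python
import numpy as np

def gaussian_volume(scales, opacity):
    # Analytic integral of an opacity-scaled anisotropic 3D Gaussian:
    # opacity * (2*pi)^(3/2) * sqrt(det(Sigma)), with sqrt(det(Sigma)) = prod(scales).
    return opacity * (2.0 * np.pi) ** 1.5 * float(np.prod(scales))

def estimate_mass(scales, opacities, densities):
    # Sum density-weighted volume contributions over all Gaussians.
    return sum(rho * gaussian_volume(s, a)
               for s, a, rho in zip(scales, opacities, densities))

# Example: two Gaussians with a predicted per-region density (kg/m^3).
scales = [np.array([0.02, 0.02, 0.02]), np.array([0.01, 0.03, 0.01])]
opacities = [0.9, 0.8]
densities = [700.0, 700.0]
mass = estimate_mass(scales, opacities, densities)
print(mass)
```

A zero-shot pipeline in this spirit would obtain the densities from a vision-language model's material prediction per region, then integrate over the Gaussians as above.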
Related papers
- Urban4D: Semantic-Guided 4D Gaussian Splatting for Urban Scene Reconstruction [86.4386398262018]
Urban4D is a semantic-guided decomposition strategy inspired by advances in deep 2D semantic map generation.
Our approach distinguishes potentially dynamic objects through reliable semantic Gaussians.
Experiments on real-world datasets demonstrate that Urban4D achieves comparable or better quality than previous state-of-the-art methods.
arXiv Detail & Related papers (2024-12-04T16:59:49Z)
- HFGaussian: Learning Generalizable Gaussian Human with Integrated Human Features [23.321087432786605]
We present a novel approach called HFGaussian that can estimate novel views and human features, such as the 3D skeleton, 3D key points, and dense pose, from sparse input images in real time at 25 FPS.
We thoroughly evaluate our HFGaussian method against the latest state-of-the-art techniques in human Gaussian splatting and pose estimation, demonstrating its real-time, state-of-the-art performance.
arXiv Detail & Related papers (2024-11-05T13:31:04Z)
- Masked Generative Extractor for Synergistic Representation and 3D Generation of Point Clouds [6.69660410213287]
We propose an innovative framework called Point-MGE to explore the benefits of deeply integrating 3D representation learning and generative learning.
In shape classification, Point-MGE achieved an accuracy of 94.2% (+1.0%) on the ModelNet40 dataset and 92.9% (+5.5%) on the ScanObjectNN dataset.
Experimental results also confirmed that Point-MGE can generate high-quality 3D shapes in both unconditional and conditional settings.
arXiv Detail & Related papers (2024-06-25T07:57:03Z)
- GIC: Gaussian-Informed Continuum for Physical Property Identification and Simulation [60.33467489955188]
This paper studies the problem of estimating physical properties (system identification) through visual observations.
To facilitate geometry-aware guidance in physical property estimation, we introduce a novel hybrid framework.
We propose a new dynamic 3D Gaussian framework based on motion factorization to recover the object as 3D Gaussian point sets.
In addition to the extracted object surfaces, the Gaussian-informed continuum also enables the rendering of object masks during simulations.
arXiv Detail & Related papers (2024-06-21T07:37:17Z)
- Effective Rank Analysis and Regularization for Enhanced 3D Gaussian Splatting [33.01987451251659]
3D Gaussian Splatting (3DGS) has emerged as a promising technique capable of real-time rendering with high-quality 3D reconstruction.
Despite its potential, 3DGS encounters challenges such as needle-like artifacts, suboptimal geometries, and inaccurate normals.
We introduce the effective rank as a regularization, which constrains the structure of the Gaussians.
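The effective-rank idea can be made concrete with a short sketch (function names here are illustrative, not from that paper's code): a 3DGS kernel's covariance has eigenvalues equal to its squared per-axis scales, and the effective rank is the exponential of the Shannon entropy of the normalized eigenvalue spectrum, so a needle-like Gaussian scores near 1 and an isotropic one near 3. A regularizer can then penalize degenerate kernels.

```python
import numpy as np

def effective_rank(scales, eps=1e-12):
    # Eigenvalues of the Gaussian covariance are the squared per-axis scales.
    eig = np.asarray(scales, dtype=float) ** 2
    p = eig / eig.sum()                      # normalized eigenvalue spectrum
    entropy = -np.sum(p * np.log(p + eps))   # Shannon entropy
    return float(np.exp(entropy))

def erank_penalty(scales, target=2.0):
    # Penalize Gaussians whose effective rank falls below a target,
    # discouraging degenerate needle-like kernels.
    return max(0.0, target - effective_rank(scales))

print(effective_rank([1.0, 1.0, 1.0]))    # isotropic: close to 3
print(effective_rank([1.0, 1e-4, 1e-4]))  # needle-like: close to 1
```

In an actual 3DGS training loop, such a penalty would be averaged over all Gaussians and added to the photometric loss with a small weight.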
arXiv Detail & Related papers (2024-06-17T15:51:59Z)
- SAGS: Structure-Aware 3D Gaussian Splatting [53.6730827668389]
We propose a structure-aware Gaussian Splatting method (SAGS) that implicitly encodes the geometry of the scene.
SAGS achieves state-of-the-art rendering performance with reduced storage requirements on benchmark novel-view synthesis datasets.
arXiv Detail & Related papers (2024-04-29T23:26:30Z)
- HUGS: Holistic Urban 3D Scene Understanding via Gaussian Splatting [53.6394928681237]
Holistic understanding of urban scenes based on RGB images is a challenging yet important problem.
Our main idea involves the joint optimization of geometry, appearance, semantics, and motion using a combination of static and dynamic 3D Gaussians.
Our approach offers the ability to render new viewpoints in real-time, yielding 2D and 3D semantic information with high accuracy.
arXiv Detail & Related papers (2024-03-19T13:39:05Z)
- Mesh-based Gaussian Splatting for Real-time Large-scale Deformation [58.18290393082119]
It is challenging for users to directly deform or manipulate implicit representations with large deformations in real time.
We develop a novel GS-based method that enables interactive deformation.
Our approach achieves high-quality reconstruction and effective deformation, while maintaining the promising rendering results at a high frame rate.
arXiv Detail & Related papers (2024-02-07T12:36:54Z)
- PhysGaussian: Physics-Integrated 3D Gaussians for Generative Dynamics [22.4647573375673]
We introduce PhysGaussian, a new method that seamlessly integrates physically grounded Newtonian dynamics within 3D Gaussians.
A defining characteristic of our method is the seamless integration between physical simulation and visual rendering.
Our method demonstrates exceptional versatility across a wide variety of materials.
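The coupling of simulation and rendering described above can be illustrated with a much-simplified time-stepper. This is only a toy sketch, not PhysGaussian's actual continuum-mechanics (MPM) solver, and `step_kernels` is a hypothetical name: Gaussian kernel centers are advanced as particles under gravity with an inelastic ground-plane contact, and the same kernels could be splatted for rendering after every step.

```python
import numpy as np

def step_kernels(centers, velocities, dt=1e-3,
                 gravity=(0.0, -9.81, 0.0), ground_y=0.0):
    # Symplectic Euler: update velocities first, then positions.
    g = np.asarray(gravity)
    velocities = velocities + dt * g
    centers = centers + dt * velocities
    # Inelastic ground-plane contact: clamp height, zero normal velocity.
    below = centers[:, 1] < ground_y
    centers[below, 1] = ground_y
    velocities[below, 1] = 0.0
    return centers, velocities

# One kernel dropped from y = 1 m, simulated for one second.
centers = np.array([[0.0, 1.0, 0.0]])
velocities = np.zeros((1, 3))
for _ in range(1000):
    centers, velocities = step_kernels(centers, velocities)
print(centers[0, 1])  # at rest on the ground plane (0.0)
```

Because positions and appearance live in the same Gaussian representation, no mesh extraction step is needed between simulation and rendering, which is the "seamless integration" the abstract refers to.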
arXiv Detail & Related papers (2023-11-20T21:34:52Z)
- Coarse-to-fine Animal Pose and Shape Estimation [67.39635503744395]
We propose a coarse-to-fine approach to reconstruct 3D animal mesh from a single image.
The coarse estimation stage first estimates the pose, shape and translation parameters of the SMAL model.
The estimated meshes are then used as a starting point by a graph convolutional network (GCN) to predict a per-vertex deformation in the refinement stage.
arXiv Detail & Related papers (2021-11-16T01:27:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.