SPLATART: Articulated Gaussian Splatting with Estimated Object Structure
- URL: http://arxiv.org/abs/2506.12184v1
- Date: Fri, 13 Jun 2025 19:20:07 GMT
- Title: SPLATART: Articulated Gaussian Splatting with Estimated Object Structure
- Authors: Stanley Lewis, Vishal Chandra, Tom Gao, Odest Chadwicke Jenkins
- Abstract summary: SPLATART is a pipeline for learning representations of articulated objects from posed images. We present data on the pipeline as applied to the synthetic Paris dataset objects. We additionally present results on articulated serial chain manipulators to demonstrate usage on deeper kinematic tree structures.
- Score: 6.863499171366721
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Representing articulated objects remains a difficult problem within the field of robotics. Objects such as pliers, clamps, or cabinets require representations that capture not only geometry and color information, but also part separation, connectivity, and joint parametrization. Furthermore, learning these representations becomes even more difficult with each additional degree of freedom. Complex articulated objects such as robot arms may have seven or more degrees of freedom, and the depth of their kinematic tree may be notably greater than the tools, drawers, and cabinets that are the typical subjects of articulated object research. To address these concerns, we introduce SPLATART - a pipeline for learning Gaussian splat representations of articulated objects from posed images, of which a subset contains image-space part segmentations. SPLATART disentangles the part separation task from the articulation estimation task, allowing for post-facto joint estimation and representation of articulated objects with deeper kinematic trees than previously exhibited. In this work, we present data on the SPLATART pipeline as applied to the synthetic Paris dataset objects, and qualitative results on a real-world object under sparse segmentation supervision. We additionally present results on articulated serial chain manipulators to demonstrate usage on deeper kinematic tree structures.
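As a concrete illustration of the representation the abstract describes, the sketch below shows how per-part Gaussian centers might be posed through an estimated kinematic tree by composing revolute joint transforms from root to leaf. This is a minimal, hypothetical sketch, not SPLATART's actual code; the `Part` and `Joint` structures and the `pose_tree` function are illustrative names.

```python
# Hypothetical sketch (not SPLATART's implementation): posing per-part
# Gaussian splat centers through an estimated kinematic tree.
import numpy as np
from dataclasses import dataclass

@dataclass
class Joint:
    parent: int         # index of the parent part
    axis: np.ndarray    # unit rotation axis in the parent frame (revolute)
    origin: np.ndarray  # a point on the axis, in the parent frame

@dataclass
class Part:
    means: np.ndarray           # (N, 3) Gaussian centers in the part frame
    joint: Joint | None = None  # None for the root part

def rotation_about_axis(axis, angle):
    """Rodrigues' formula: rotation matrix about a unit axis."""
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def pose_tree(parts, angles):
    """Compose joint transforms root-to-leaf; parts must be topologically
    ordered (parent before child). Returns posed Gaussian centers."""
    world = [np.eye(4)]       # world transform of each part
    posed = [parts[0].means]  # root part stays in place
    for part, angle in zip(parts[1:], angles):
        j = part.joint
        R = rotation_about_axis(j.axis, angle)
        local = np.eye(4)
        local[:3, :3] = R
        local[:3, 3] = j.origin - R @ j.origin  # rotate about the joint origin
        T = world[j.parent] @ local
        world.append(T)
        posed.append(part.means @ T[:3, :3].T + T[:3, 3])
    return posed
```

A full splat renderer would also transform each Gaussian's covariance (Σ' = RΣRᵀ) and carry color and opacity along; only the centers are shown here to keep the example short.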
Related papers
- ObjectGS: Object-aware Scene Reconstruction and Scene Understanding via Gaussian Splatting [54.92763171355442]
ObjectGS is an object-aware framework that unifies 3D scene reconstruction with semantic understanding. We show through experiments that ObjectGS outperforms state-of-the-art methods on open-vocabulary and panoptic segmentation tasks.
arXiv Detail & Related papers (2025-07-21T10:06:23Z)
- ArtGS: Building Interactable Replicas of Complex Articulated Objects via Gaussian Splatting [66.29782808719301]
Building articulated objects is a key challenge in computer vision. Existing methods often fail to effectively integrate information across different object states. We introduce ArtGS, a novel approach that leverages 3D Gaussians as a flexible and efficient representation.
arXiv Detail & Related papers (2025-02-26T10:25:32Z)
- Articulate AnyMesh: Open-Vocabulary 3D Articulated Objects Modeling [48.78204955169967]
Articulate Anymesh is an automated framework that can convert a rigid 3D mesh into its articulated counterpart in an open-vocabulary manner. Our experiments show that Articulate Anymesh can generate large-scale, high-quality 3D articulated objects, including tools, toys, mechanical devices, and vehicles.
arXiv Detail & Related papers (2025-02-04T18:59:55Z)
- NARF24: Estimating Articulated Object Structure for Implicit Rendering [8.044069980286812]
We propose a method that learns a common Neural Radiance Field (NeRF) representation across a small number of collected scenes.
This representation is combined with a parts-based image segmentation to localize parts in the implicit space.
arXiv Detail & Related papers (2024-09-15T19:06:46Z)
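For readers unfamiliar with how 2D part segmentations can supervise an implicit representation like NARF24's, the sketch below shows the standard NeRF volume-rendering weights being used to distribute a per-pixel part label along a ray. It is a generic, hypothetical illustration of the technique, not NARF24's code; `render_weights` and `part_probabilities` are invented names.

```python
# Hypothetical sketch (not NARF24's code): using NeRF volume-rendering
# weights to turn per-sample part logits into a per-pixel part probability,
# which a 2D segmentation mask can then supervise.
import numpy as np

def render_weights(sigma, deltas):
    """w_i = T_i * (1 - exp(-sigma_i * delta_i)), with T_i the transmittance."""
    alpha = 1.0 - np.exp(-sigma * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return trans * alpha

def part_probabilities(part_logits, sigma, deltas):
    """Expected part distribution for one pixel: sum_i w_i * softmax(logits_i).
    part_logits: (S, P) logits at S ray samples for P parts."""
    w = render_weights(sigma, deltas)           # (S,)
    probs = np.exp(part_logits - part_logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)  # numerically stable softmax
    return w @ probs                            # (P,)
```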
- ICGNet: A Unified Approach for Instance-Centric Grasping [42.92991092305974]
We introduce an end-to-end architecture for object-centric grasping.
We show the effectiveness of the proposed method by extensively evaluating it against state-of-the-art methods on synthetic datasets.
arXiv Detail & Related papers (2024-01-18T12:41:41Z)
- Full-Body Articulated Human-Object Interaction [61.01135739641217]
CHAIRS is a large-scale motion-captured f-AHOI dataset consisting of 16.2 hours of versatile interactions.
CHAIRS provides 3D meshes of both humans and articulated objects during the entire interactive process.
By learning the geometrical relationships in HOI, we devise the first model that leverages human pose estimation.
arXiv Detail & Related papers (2022-12-20T19:50:54Z)
- Structure from Action: Learning Interactions for Articulated Object 3D Structure Discovery [18.96346371296251]
We introduce Structure from Action (SfA), a framework to discover 3D part geometry and joint parameters of unseen articulated objects.
By selecting informative interactions, SfA discovers parts and reveals occluded surfaces, like the inside of a closed drawer.
Empirically, SfA outperforms a pipeline of state-of-the-art components by 25.4 percentage points of 3D IoU on unseen categories.
arXiv Detail & Related papers (2022-07-19T00:27:36Z)
- Object Scene Representation Transformer [56.40544849442227]
We introduce Object Scene Representation Transformer (OSRT), a 3D-centric model in which individual object representations naturally emerge through novel view synthesis.
OSRT scales to significantly more complex scenes with larger diversity of objects and backgrounds than existing methods.
It is multiple orders of magnitude faster at compositional rendering thanks to its light field parametrization and the novel Slot Mixer decoder.
arXiv Detail & Related papers (2022-06-14T15:40:47Z)
- Category-Independent Articulated Object Tracking with Factor Graphs [14.574389906480867]
Articulated objects come with unexpected articulation mechanisms that are inconsistent with categorical priors.
We propose a category-independent framework for predicting the articulation models of unknown objects from sequences of RGB-D images.
We demonstrate that our visual perception and factor graph modules outperform baselines on simulated data and show the applicability of our factor graph on real world data.
arXiv Detail & Related papers (2022-05-07T20:59:44Z)
- Watch It Move: Unsupervised Discovery of 3D Joints for Re-Posing of Articulated Objects [73.23249640099516]
We learn both the appearance and the structure of previously unseen articulated objects by observing them move from multiple views.
Our insight is that adjacent parts that move relative to each other must be connected by a joint.
We show that our method works for different structures, from quadrupeds, to single-arm robots, to humans.
arXiv Detail & Related papers (2021-12-21T16:37:48Z)
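The Watch It Move insight above, that adjacent parts moving relative to each other must be connected by a joint, reduces to a small geometric computation once part poses are known. The sketch below is a hypothetical illustration rather than the paper's method: it recovers a revolute joint's axis, and a point on that axis, from the relative motion of one part in its neighbor's frame between two observations.

```python
# Hypothetical sketch (not the paper's code): estimating a revolute joint
# from the relative motion between two adjacent parts.
import numpy as np

def revolute_from_relative_motion(T_a, T_b):
    """T_a, T_b: 4x4 poses of a part in its neighbor's frame at two instants.
    Returns (axis, point): the joint's rotation axis and a point on it."""
    M = T_b @ np.linalg.inv(T_a)  # relative motion between the two instants
    R, t = M[:3, :3], M[:3, 3]
    # The rotation axis is the eigenvector of R with eigenvalue 1.
    w, v = np.linalg.eig(R)
    axis = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    axis /= np.linalg.norm(axis)
    # A point p fixed by the motion satisfies (I - R) p = t. Since (I - R)
    # is rank-2 for a rotation, solve in the least-squares sense; p is only
    # defined up to translation along the axis.
    p, *_ = np.linalg.lstsq(np.eye(3) - R, t, rcond=None)
    return axis, p
```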
- Learning Rope Manipulation Policies Using Dense Object Descriptors Trained on Synthetic Depth Data [32.936908766549344]
We present an approach that learns point-pair correspondences between initial and goal rope configurations.
In 50 trials of a knot-tying task with the ABB YuMi Robot, the system achieves a 66% knot-tying success rate from previously unseen configurations.
arXiv Detail & Related papers (2020-03-03T23:43:05Z)
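The point-pair correspondence step in the rope-manipulation paper above is, at its core, a nearest-neighbor search in a learned dense-descriptor space. The sketch below is a generic, hypothetical illustration of that matching step, not the paper's implementation; the function name and array shapes are assumptions.

```python
# Hypothetical sketch: matching pixels between an initial and a goal image
# by nearest neighbor in a dense-descriptor space.
import numpy as np

def match_descriptors(desc_init, desc_goal, query_uv):
    """desc_*: (H, W, D) dense descriptor images.
    query_uv: (K, 2) integer (u, v) pixel coordinates in the initial image.
    Returns (K, 2) best-matching (u, v) coordinates in the goal image."""
    H, W, D = desc_goal.shape
    flat = desc_goal.reshape(-1, D)                         # (H*W, D)
    q = desc_init[query_uv[:, 1], query_uv[:, 0]]           # (K, D)
    d2 = ((flat[None, :, :] - q[:, None, :]) ** 2).sum(-1)  # (K, H*W)
    idx = d2.argmin(axis=1)
    return np.stack([idx % W, idx // W], axis=1)
```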
This list is automatically generated from the titles and abstracts of the papers on this site.