Mates2Motion: Learning How Mechanical CAD Assemblies Work
- URL: http://arxiv.org/abs/2208.01779v2
- Date: Thu, 4 May 2023 22:39:40 GMT
- Title: Mates2Motion: Learning How Mechanical CAD Assemblies Work
- Authors: James Noeckel, Benjamin T. Jones, Karl Willis, Brian Curless, Adriana Schulz
- Abstract summary: We train our model using a large dataset of real-world mechanical assemblies consisting of CAD parts and mates joining them together.
We present methods for re-defining these mates to make them better reflect the motion of the assembly, as well as narrowing down the possible axes of motion.
- Score: 7.987370879817241
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We describe our work on inferring the degrees of freedom between mated parts
in mechanical assemblies using deep learning on CAD representations. We train
our model using a large dataset of real-world mechanical assemblies consisting
of CAD parts and mates joining them together. We present methods for
re-defining these mates to make them better reflect the motion of the assembly,
as well as narrowing down the possible axes of motion. We also conduct a user
study to create a motion-annotated test set with more reliable labels.
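To make the setup concrete, here is a minimal, hypothetical sketch (not the authors' released code) of the kind of inference the abstract describes: given learned embeddings of two mated parts and one candidate motion axis, a small classifier scores possible degrees of freedom. The names (MotionType, MateDOFClassifier), feature sizes, and the use of plain MLP scoring are illustrative assumptions; in practice the part embeddings would come from a CAD/B-Rep encoder.

```python
# Hypothetical sketch (not the paper's code): classifying the degree of
# freedom of a mate between two CAD parts from learned part embeddings.
# All names and dimensions are illustrative.
from enum import Enum

import torch
import torch.nn as nn


class MotionType(Enum):
    FIXED = 0        # no relative motion
    ROTATION = 1     # revolute joint about an axis
    TRANSLATION = 2  # prismatic joint along an axis
    CYLINDRICAL = 3  # combined rotation and translation about one axis


class MateDOFClassifier(nn.Module):
    """Scores a candidate motion axis and predicts a motion type for a mate."""

    def __init__(self, part_dim: int = 64, hidden: int = 128):
        super().__init__()
        # Input: embeddings of the two mated parts plus one candidate axis
        # (unit direction + a point on the axis, 6 numbers).
        self.mlp = nn.Sequential(
            nn.Linear(2 * part_dim + 6, hidden),
            nn.ReLU(),
            nn.Linear(hidden, len(MotionType)),
        )

    def forward(self, part_a, part_b, axis):
        return self.mlp(torch.cat([part_a, part_b, axis], dim=-1))


if __name__ == "__main__":
    model = MateDOFClassifier()
    part_a = torch.randn(1, 64)   # placeholder part embeddings; in practice
    part_b = torch.randn(1, 64)   # these would come from a CAD/B-Rep encoder
    axis = torch.tensor([[0.0, 0.0, 1.0, 0.0, 0.0, 0.0]])  # candidate z-axis
    logits = model(part_a, part_b, axis)
    print(MotionType(logits.argmax(dim=-1).item()))
```

One could rank several candidate axes by running such a classifier per axis and keeping the highest-scoring non-fixed prediction, which loosely mirrors the abstract's idea of narrowing down the possible axes of motion.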
Related papers
- LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning [50.99807031490589]
We introduce LLARVA, a model trained with a novel instruction tuning method to unify a range of robotic learning tasks, scenarios, and environments.
We generate 8.5M image-visual trace pairs from the Open X-Embodiment dataset in order to pre-train our model.
Experiments yield strong performance, demonstrating that LLARVA performs well compared to several contemporary baselines.
arXiv Detail & Related papers (2024-06-17T17:55:29Z)
- Self-supervised Graph Neural Network for Mechanical CAD Retrieval [29.321027284348272]
GC-CAD is a self-supervised contrastive graph neural network-based method for mechanical CAD retrieval.
The proposed method achieves significant accuracy improvements and up to 100 times efficiency improvement over the baseline methods.
arXiv Detail & Related papers (2024-06-13T06:56:49Z)
- RPMArt: Towards Robust Perception and Manipulation for Articulated Objects [56.73978941406907]
We propose a framework towards Robust Perception and Manipulation for Articulated Objects (RPMArt).
RPMArt learns to estimate the articulation parameters and manipulate the articulation part from the noisy point cloud.
We introduce an articulation-aware classification scheme to enhance its ability for sim-to-real transfer.
arXiv Detail & Related papers (2024-03-24T05:55:39Z)
- Learning Reusable Manipulation Strategies [86.07442931141634]
Humans demonstrate an impressive ability to acquire and generalize manipulation "tricks".
We present a framework that enables machines to acquire such manipulation skills through a single demonstration and self-play.
These learned mechanisms and samplers can be seamlessly integrated into standard task and motion planners.
arXiv Detail & Related papers (2023-11-06T17:35:42Z)
- Kinematic-aware Prompting for Generalizable Articulated Object Manipulation with LLMs [53.66070434419739]
Generalizable articulated object manipulation is essential for home-assistant robots.
We propose a kinematic-aware prompting framework that prompts Large Language Models with kinematic knowledge of objects to generate low-level motion waypoints.
Our framework outperforms traditional methods on 8 seen categories and shows powerful zero-shot capability on 8 unseen articulated object categories.
arXiv Detail & Related papers (2023-11-06T03:26:41Z)
- Self-Supervised Representation Learning for CAD [19.5326204665895]
This work proposes to leverage unlabeled CAD geometry on supervised learning tasks.
We learn a novel, hybrid implicit/explicit surface representation for B-Rep geometry.
arXiv Detail & Related papers (2022-10-19T18:00:18Z)
- JoinABLe: Learning Bottom-up Assembly of Parametric CAD Joints [34.15876903985372]
JoinABLe is a learning-based method that assembles parts together to form joints.
Our results show that by making network predictions over a graph representation of solid models we can outperform multiple baseline methods with an accuracy (79.53%) that approaches human performance (80%); a generic sketch of such a graph encoding appears after this list.
arXiv Detail & Related papers (2021-11-24T20:05:59Z)
- V-MAO: Generative Modeling for Multi-Arm Manipulation of Articulated Objects [51.79035249464852]
We present a framework for learning multi-arm manipulation of articulated objects.
Our framework includes a variational generative model that learns contact point distribution over object rigid parts for each robot arm.
arXiv Detail & Related papers (2021-11-07T02:31:09Z)
- SB-GCN: Structured BREP Graph Convolutional Network for Automatic Mating of CAD Assemblies [3.732457298487595]
Current research on assembly-based modeling is not directly applicable to modern CAD systems because it eschews the dominant data structure of modern CAD: parametric boundary representations (BREPs).
We propose SB-GCN, a representation learning scheme on BREPs that retains the topological structure of parts, and use these learned representations to predict CAD type mates.
arXiv Detail & Related papers (2021-05-25T22:07:55Z)
- Reconstructing Interactive 3D Scenes by Panoptic Mapping and CAD Model Alignments [81.38641691636847]
We rethink the problem of scene reconstruction from an embodied agent's perspective.
We reconstruct an interactive scene from an RGB-D data stream.
This reconstructed scene replaces the object meshes in the dense panoptic map with part-based articulated CAD models.
arXiv Detail & Related papers (2021-03-30T05:56:58Z)
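Several of the entries above (JoinABLe, SB-GCN, GC-CAD) learn over graph representations of B-Rep solids. As a purely illustrative aside, and not a reproduction of any of those architectures, the following toy sketch shows the common pattern: represent a part as a face-adjacency graph and propagate per-face features with one round of degree-normalized message passing. The feature sizes and the FaceMessagePassing name are assumptions for the example.

```python
# Illustrative sketch only (not SB-GCN, JoinABLe, or GC-CAD code): encoding a
# B-Rep part as a face-adjacency graph and running one message-passing step.
# Face features and adjacency here are toy placeholders.
import torch
import torch.nn as nn


class FaceMessagePassing(nn.Module):
    """One GCN-style layer over a face-adjacency graph of a B-Rep part."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, face_feats, adjacency):
        # adjacency: (num_faces, num_faces) 0/1 matrix; add self-loops and
        # normalize by degree before aggregating neighbor features.
        adj = adjacency + torch.eye(adjacency.shape[0])
        deg = adj.sum(dim=1, keepdim=True)
        aggregated = (adj / deg) @ face_feats
        return torch.relu(self.linear(aggregated))


if __name__ == "__main__":
    # Toy part with 4 faces; real features might encode surface type, area,
    # normals, or sampled points per face.
    face_feats = torch.randn(4, 8)
    adjacency = torch.tensor([
        [0, 1, 1, 0],
        [1, 0, 1, 0],
        [1, 1, 0, 1],
        [0, 0, 1, 0],
    ], dtype=torch.float32)
    layer = FaceMessagePassing(in_dim=8, out_dim=16)
    print(layer(face_feats, adjacency).shape)  # -> torch.Size([4, 16])
```

Embeddings produced along these lines could then feed a head that scores candidate mates or joints between two parts, which is roughly the role the learned representations play in the papers above.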