Fit2Form: 3D Generative Model for Robot Gripper Form Design
- URL: http://arxiv.org/abs/2011.06498v1
- Date: Thu, 12 Nov 2020 17:09:36 GMT
- Title: Fit2Form: 3D Generative Model for Robot Gripper Form Design
- Authors: Huy Ha, Shubham Agrawal, Shuran Song
- Abstract summary: The 3D shape of a robot's end-effector plays a critical role in determining its functionality and overall performance.
Many industrial applications rely on task-specific gripper designs to ensure the system's robustness and accuracy.
The goal of this work is to use machine learning algorithms to automate the design of task-specific gripper fingers.
- Score: 17.77153086504066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The 3D shape of a robot's end-effector plays a critical role in determining
its functionality and overall performance. Many industrial applications rely
on task-specific gripper designs to ensure the system's robustness and
accuracy. However, the process of manual hardware design is both costly and
time-consuming, and the quality of the resulting design is dependent on the
engineer's experience and domain expertise, which can easily be outdated or
inaccurate. The goal of this work is to use machine learning algorithms to
automate the design of task-specific gripper fingers. We propose Fit2Form, a 3D
generative design framework that generates pairs of finger shapes to maximize
design objectives (i.e., grasp success, stability, and robustness) for target
grasp objects. We model the design objectives by training a Fitness network to
predict their values for pairs of gripper fingers and their corresponding grasp
objects. This Fitness network then provides supervision to a 3D Generative
network that produces a pair of 3D finger geometries for the target grasp
object. Our experiments demonstrate that the proposed 3D generative design
framework generates parallel jaw gripper finger shapes that achieve more stable
and robust grasps compared to other general-purpose and task-specific gripper
design algorithms. Video can be found at https://youtu.be/utKHP3qb1bg.
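The two-network setup described in the abstract can be made concrete with a minimal PyTorch-style sketch. Everything below is an illustrative assumption (the voxel resolution, layer sizes, class names such as FitnessNet and GeneratorNet, and the three-objective head); the paper's actual architectures and training procedure are not reproduced here. The point is only the supervision pathway: a Fitness network scores a (left finger, right finger, object) voxel triple, and the Generative network is trained to maximize those predicted scores.
```python
# Hypothetical sketch of Fit2Form's two-network setup (names, sizes, and
# voxel resolution are illustrative assumptions, not the paper's code).
import torch
import torch.nn as nn

VOXEL = 32  # assumed voxel grid resolution for fingers and object

def conv3d_encoder(out_dim):
    """Tiny 3D CNN mapping a 1xVxVxV voxel grid to a feature vector."""
    return nn.Sequential(
        nn.Conv3d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
        nn.Conv3d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
        nn.Conv3d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 4
        nn.Flatten(),
        nn.Linear(64 * 4 * 4 * 4, out_dim),
    )

class FitnessNet(nn.Module):
    """Predicts design objectives (e.g., grasp success, stability,
    robustness) for a (left finger, right finger, object) voxel triple."""
    def __init__(self, feat=128, n_objectives=3):
        super().__init__()
        self.enc_l, self.enc_r, self.enc_obj = (conv3d_encoder(feat) for _ in range(3))
        self.head = nn.Sequential(
            nn.Linear(3 * feat, 128), nn.ReLU(),
            nn.Linear(128, n_objectives), nn.Sigmoid(),  # scores in [0, 1]
        )

    def forward(self, left, right, obj):
        z = torch.cat([self.enc_l(left), self.enc_r(right), self.enc_obj(obj)], dim=-1)
        return self.head(z)

class GeneratorNet(nn.Module):
    """Maps an object voxel grid to a pair of finger voxel grids."""
    def __init__(self, feat=128):
        super().__init__()
        self.enc_obj = conv3d_encoder(feat)
        self.dec = nn.Sequential(
            nn.Linear(feat, 64 * 4 * 4 * 4), nn.ReLU(),
            nn.Unflatten(1, (64, 4, 4, 4)),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose3d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose3d(16, 2, 4, stride=2, padding=1), nn.Sigmoid(),  # 16 -> 32
        )

    def forward(self, obj):
        out = self.dec(self.enc_obj(obj))   # (B, 2, V, V, V)
        return out[:, :1], out[:, 1:]       # left and right finger occupancy

# The fitness network (frozen here) supervises the generator by scoring its
# proposed fingers; the generator ascends those predicted scores.
fitness, generator = FitnessNet(), GeneratorNet()
for p in fitness.parameters():
    p.requires_grad_(False)
obj = torch.rand(4, 1, VOXEL, VOXEL, VOXEL)   # batch of object voxel grids
left, right = generator(obj)
loss = -fitness(left, right, obj).mean()      # maximize predicted objectives
loss.backward()
```
In the real framework the Fitness network would first be trained to predict simulated grasp outcomes before being used to supervise the generator; the sketch skips that stage.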
Related papers
- CRAFT: Designing Creative and Functional 3D Objects [19.543575491040375]
We present a method to synthesize body-aware 3D objects from a base mesh.
The generated objects can be simulated on virtual characters, or fabricated for real-world use.
arXiv Detail & Related papers (2024-12-05T05:41:34Z)
- Dynamics-Guided Diffusion Model for Robot Manipulator Design [24.703003555261482]
We present a data-driven framework for generating manipulator geometry designs for a given manipulation task.
Instead of training different design models for each task, our approach employs a learned dynamics network shared across tasks.
arXiv Detail & Related papers (2024-02-23T01:19:30Z)
- Robust 3D Tracking with Quality-Aware Shape Completion [67.9748164949519]
We propose a synthetic target representation composed of dense and complete point clouds that precisely depict the target shape via shape completion, enabling robust 3D tracking.
Specifically, we design a voxelized 3D tracking framework with shape completion, in which we propose a quality-aware shape completion mechanism to alleviate the adverse effect of noisy historical predictions.
arXiv Detail & Related papers (2023-12-17T04:50:24Z)
- Learning Effective NeRFs and SDFs Representations with 3D Generative Adversarial Networks for 3D Object Generation [27.068337487647156]
We present a solution for the 3D object generation task of the ICCV 2023 OmniObject3D Challenge.
We study learning effective NeRFs and SDFs representations with 3D Generative Adversarial Networks (GANs) for 3D object generation.
This solution is among the top 3 in the ICCV 2023 OmniObject3D Challenge.
arXiv Detail & Related papers (2023-09-28T02:23:46Z)
- SketchMetaFace: A Learning-based Sketching Interface for High-fidelity 3D Character Face Modeling [69.28254439393298]
SketchMetaFace is a sketching system targeting amateur users to model high-fidelity 3D faces in minutes.
We develop a novel learning-based method termed "Implicit and Depth Guided Mesh Modeling" (IDGMM), which fuses the advantages of mesh, implicit, and depth representations to achieve high-quality results with high efficiency.
arXiv Detail & Related papers (2023-07-03T07:41:07Z)
- DeformerNet: Learning Bimanual Manipulation of 3D Deformable Objects [13.138509669247508]
Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom present in determining the object's shape.
Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models.
We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the manipulated object and a point cloud of the goal shape.
This shape embedding enables the robot to learn a visual servo controller that computes the desired robot end-effector action.
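As a rough illustration of the kind of architecture this entry describes, here is a hypothetical Python sketch: a shared PointNet-style encoder embeds the current partial-view cloud and the goal cloud, and a small MLP maps the concatenated embeddings to an end-effector action. The encoder design, dimensions, and action parameterization are assumptions for illustration, not the DeformerNet authors' implementation.
```python
# Hypothetical sketch of a shape-servo controller over point clouds
# (all layer sizes and the 3-DoF action are illustrative assumptions).
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """Per-point MLP followed by max pooling -> low-dimensional shape embedding."""
    def __init__(self, embed_dim=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, pts):                       # pts: (B, N, 3)
        return self.mlp(pts).max(dim=1).values    # (B, embed_dim)

class ShapeServoController(nn.Module):
    """Maps (current partial-view cloud, goal cloud) to an end-effector action."""
    def __init__(self, embed_dim=64, action_dim=3):
        super().__init__()
        self.encoder = PointCloudEncoder(embed_dim)
        self.policy = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, current_pts, goal_pts):
        z = torch.cat([self.encoder(current_pts), self.encoder(goal_pts)], dim=-1)
        return self.policy(z)

controller = ShapeServoController()
current = torch.rand(1, 1024, 3)    # partial-view point cloud of the object
goal = torch.rand(1, 1024, 3)       # point cloud of the desired shape
action = controller(current, goal)  # (1, 3) end-effector displacement
```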
arXiv Detail & Related papers (2023-05-08T04:08:06Z)
- Can We Solve 3D Vision Tasks Starting from A 2D Vision Transformer? [111.11502241431286]
Vision Transformers (ViTs) have proven to be effective in solving 2D image understanding tasks.
ViTs for 2D and 3D tasks have so far adopted vastly different architecture designs that are hardly transferable.
This paper demonstrates the appealing promise of understanding the 3D visual world using a standard 2D ViT architecture.
arXiv Detail & Related papers (2022-09-15T03:34:58Z)
- Learning Visual Shape Control of Novel 3D Deformable Objects from Partial-View Point Clouds [7.1659268120093635]
Analytic models of elastic, 3D deformable objects require numerous parameters to describe the potentially infinite degrees of freedom present in determining the object's shape.
Previous attempts at performing 3D shape control rely on hand-crafted features to represent the object shape and require training of object-specific control models.
We overcome these issues through the use of our novel DeformerNet neural network architecture, which operates on a partial-view point cloud of the object being manipulated and a point cloud of the goal shape to learn a low-dimensional representation of the object shape.
arXiv Detail & Related papers (2021-10-10T02:34:57Z)
- 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z)
- Unsupervised Learning of Visual 3D Keypoints for Control [104.92063943162896]
Learning sensorimotor control policies from high-dimensional images crucially relies on the quality of the underlying visual representations.
We propose a framework to learn such a 3D geometric structure directly from images in an end-to-end unsupervised manner.
These discovered 3D keypoints tend to meaningfully capture robot joints as well as object movements in a consistent manner across both time and 3D space.
arXiv Detail & Related papers (2021-06-14T17:59:59Z)
- Building-GAN: Graph-Conditioned Architectural Volumetric Design Generation [10.024367148266721]
This paper focuses on volumetric design generation conditioned on an input program graph.
Instead of outputting dense 3D voxels, we propose a new 3D representation named voxel graph that is both compact and expressive for building geometries.
Our generator is a cross-modal graph neural network that uses a pointer mechanism to connect the input program graph and the output voxel graph, and the whole pipeline is trained using the adversarial framework.
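To make the voxel-graph idea concrete, here is a small illustrative Python sketch of such a data structure: nodes for occupied voxels (each carrying a program label) and edges for spatial adjacency. The field names and adjacency rule are assumptions for illustration, not the Building-GAN code.
```python
# Illustrative sketch of a "voxel graph": only occupied voxels are stored
# (compact), and edges record face adjacency (assumed convention).
from dataclasses import dataclass, field

@dataclass(frozen=True)
class VoxelNode:
    x: int
    y: int
    z: int           # floor index
    program: str     # e.g. "office", "corridor", "lobby"

@dataclass
class VoxelGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)   # pairs of node indices

    def add_node(self, node: VoxelNode) -> int:
        self.nodes.append(node)
        return len(self.nodes) - 1

    def connect_adjacent(self):
        """Add an edge between every pair of face-adjacent occupied voxels."""
        index = {(n.x, n.y, n.z): i for i, n in enumerate(self.nodes)}
        for i, n in enumerate(self.nodes):
            for dx, dy, dz in [(1, 0, 0), (0, 1, 0), (0, 0, 1)]:
                j = index.get((n.x + dx, n.y + dy, n.z + dz))
                if j is not None:
                    self.edges.append((i, j))

g = VoxelGraph()
g.add_node(VoxelNode(0, 0, 0, "lobby"))
g.add_node(VoxelNode(1, 0, 0, "corridor"))
g.add_node(VoxelNode(1, 0, 1, "office"))
g.connect_adjacent()   # edges: (0, 1) same floor, (1, 2) vertical neighbour
```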
arXiv Detail & Related papers (2021-04-27T16:49:34Z)
- Interactive Annotation of 3D Object Geometry using 2D Scribbles [84.51514043814066]
In this paper, we propose an interactive framework for annotating 3D object geometry from point cloud data and RGB imagery.
Our framework targets naive users without artistic or graphics expertise.
arXiv Detail & Related papers (2020-08-24T21:51:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.