DeepJoin: Learning a Joint Occupancy, Signed Distance, and Normal Field Function for Shape Repair
- URL: http://arxiv.org/abs/2211.12400v1
- Date: Tue, 22 Nov 2022 16:44:57 GMT
- Title: DeepJoin: Learning a Joint Occupancy, Signed Distance, and Normal Field Function for Shape Repair
- Authors: Nikolas Lamb, Sean Banerjee, Natasha Kholgade Banerjee
- Abstract summary: DeepJoin is an automated approach to generate high-resolution repairs for fractured shapes using deep neural networks.
We present a novel implicit shape representation for fractured shape repair that combines the occupancy function, signed distance function, and normal field.
- Score: 0.684225774857327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce DeepJoin, an automated approach to generate high-resolution
repairs for fractured shapes using deep neural networks. Existing approaches to
perform automated shape repair operate exclusively on symmetric objects,
require a complete proxy shape, or predict restoration shapes using
low-resolution voxels which are too coarse for physical repair. We generate a
high-resolution restoration shape by inferring a corresponding complete shape
and a break surface from an input fractured shape. We present a novel implicit
shape representation for fractured shape repair that combines the occupancy
function, signed distance function, and normal field. We demonstrate repairs
using our approach for synthetically fractured objects from ShapeNet, 3D scans
from the Google Scanned Objects dataset, objects in the style of ancient Greek
pottery from the QP Cultural Heritage dataset, and real fractured objects. We
outperform three baseline approaches in terms of chamfer distance and normal
consistency. Unlike existing approaches and restorations using subtraction,
DeepJoin restorations do not exhibit surface artifacts and join closely to the
fractured region of the fractured shape. Our code is available at:
https://github.com/Terascale-All-sensing-Research-Studio/DeepJoin.
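The joint representation described in the abstract can be pictured as a single conditional decoder with three output heads. The following is a minimal sketch, not the released DeepJoin model: it assumes a DeepSDF-style latent-code-plus-coordinate MLP, and every name in it (JointFieldDecoder, latent_dim, the head layers) is hypothetical.

```python
# Minimal sketch (not the DeepJoin release) of a decoder that predicts the
# three fields named in the abstract -- occupancy, signed distance, and a
# normal -- for a latent shape code and a batch of 3D query points.
# Assumes DeepSDF-style conditioning; all names here are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointFieldDecoder(nn.Module):
    def __init__(self, latent_dim: int = 256, hidden: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.occ_head = nn.Linear(hidden, 1)     # occupancy logit
        self.sdf_head = nn.Linear(hidden, 1)     # signed distance
        self.normal_head = nn.Linear(hidden, 3)  # unnormalized normal

    def forward(self, code: torch.Tensor, xyz: torch.Tensor):
        # code: (B, latent_dim), xyz: (B, N, 3) query points
        feats = self.backbone(
            torch.cat([code[:, None, :].expand(-1, xyz.shape[1], -1), xyz], dim=-1)
        )
        occ = torch.sigmoid(self.occ_head(feats))           # occupancy in (0, 1)
        sdf = self.sdf_head(feats)                          # signed distance value
        nrm = F.normalize(self.normal_head(feats), dim=-1)  # unit normal field
        return occ, sdf, nrm

# Usage: query the three fields at 1024 random points for a random latent code.
decoder = JointFieldDecoder()
occ, sdf, nrm = decoder(torch.randn(1, 256), torch.rand(1, 1024, 3))
```

A restoration mesh could then be extracted from the predicted occupancy or signed distance field with marching cubes.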
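For reference, the two evaluation metrics named above are standard. A common formulation, stated here under the usual definitions rather than taken from the paper: for point sets $A$ and $B$ sampled from the two surfaces,

$$
d_{\mathrm{CD}}(A, B) = \frac{1}{|A|} \sum_{a \in A} \min_{b \in B} \lVert a - b \rVert_2^2
+ \frac{1}{|B|} \sum_{b \in B} \min_{a \in A} \lVert b - a \rVert_2^2,
\qquad
\mathrm{NC}(A, B) = \frac{1}{|A|} \sum_{a \in A} \left| n_a \cdot n_{\mathrm{nn}(a)} \right|,
$$

where $n_a$ is the unit normal at $a$ and $\mathrm{nn}(a)$ is the nearest neighbor of $a$ in $B$; normal consistency is often symmetrized over both directions in the same way as chamfer distance.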
Related papers
- 3D Shape Completion with Test-Time Training [6.764513343390546]
We use a decoder network motivated by related work on the prediction of signed distance functions (DeepSDF).
We demonstrate that overfitting to the fractured parts leads to significant improvements in the restoration of eight different shape categories of the ShapeNet dataset in terms of chamfer distance.
arXiv Detail & Related papers (2024-10-24T11:59:32Z)
- ShapeGraFormer: GraFormer-Based Network for Hand-Object Reconstruction from a Single Depth Map [11.874184782686532]
We propose the first approach for realistic 3D hand-object shape and pose reconstruction from a single depth map.
Our pipeline additionally predicts voxelized hand-object shapes, having a one-to-one mapping to the input voxelized depth.
In addition, we show the impact of adding another GraFormer component that refines the reconstructed shapes based on the hand-object interactions.
arXiv Detail & Related papers (2023-10-18T09:05:57Z)
- Pix2Repair: Implicit Shape Restoration from Images [7.663519916453075]
Pix2Repair takes an image of a fractured object as input and automatically generates a 3D printable restoration shape.
We also introduce Fantastic Breaks Imaged, the first large-scale dataset of 11,653 real-world images of fractured objects.
arXiv Detail & Related papers (2023-05-29T17:48:09Z)
- Fantastic Breaks: A Dataset of Paired 3D Scans of Real-World Broken Objects and Their Complete Counterparts [0.5572870549559665]
We present Fantastic Breaks, a dataset containing scanned, waterproofed, and cleaned 3D meshes for 150 broken objects.
Fantastic Breaks contains class and material labels, proxy repair parts that join to broken meshes, and manually annotated fracture boundaries.
We show experimental shape repair evaluation with Fantastic Breaks using multiple learning-based approaches.
arXiv Detail & Related papers (2023-03-24T17:03:40Z)
- DeepMend: Learning Occupancy Functions to Represent Shape for Repair [0.6087960723103347]
DeepMend is a novel approach to reconstruct restorations to fractured shapes using learned occupancy functions.
We represent the occupancy of a fractured shape as the conjunction of the occupancy of an underlying complete shape and the fracture surface (see the sketch after this list).
We show results with simulated fractures on synthetic and real-world scanned objects, and with scanned real fractured mugs.
arXiv Detail & Related papers (2022-10-11T18:42:20Z)
- Single-view 3D Mesh Reconstruction for Seen and Unseen Categories [69.29406107513621]
Single-view 3D Mesh Reconstruction is a fundamental computer vision task that aims at recovering 3D shapes from single-view RGB images.
This paper tackles Single-view 3D Mesh Reconstruction to study model generalization to unseen categories.
We propose an end-to-end two-stage network, GenMesh, to break the category boundaries in reconstruction.
arXiv Detail & Related papers (2022-08-04T14:13:35Z)
- Neural Template: Topology-aware Reconstruction and Disentangled Generation of 3D Meshes [52.038346313823524]
This paper introduces a novel framework called DTNet for 3D mesh reconstruction and generation via Disentangled Topology.
Our method produces high-quality meshes, particularly with diverse topologies, compared with state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T08:32:57Z)
- A-SDF: Learning Disentangled Signed Distance Functions for Articulated Shape Representation [62.517760545209065]
We introduce Articulated Signed Distance Functions (A-SDF) to represent articulated shapes with a disentangled latent space.
We demonstrate that our model generalizes well to out-of-distribution and unseen data, e.g., partial point clouds and real-world depth images.
arXiv Detail & Related papers (2021-04-15T17:53:54Z)
- From Points to Multi-Object 3D Reconstruction [71.17445805257196]
We propose a method to detect and reconstruct multiple 3D objects from a single RGB image.
A keypoint detector localizes objects as center points and directly predicts all object properties, including 9-DoF bounding boxes and 3D shapes.
The presented approach performs lightweight reconstruction in a single stage; it is real-time capable, fully differentiable, and end-to-end trainable.
arXiv Detail & Related papers (2020-12-21T18:52:21Z)
- Geo-PIFu: Geometry and Pixel Aligned Implicit Functions for Single-view Human Reconstruction [97.3274868990133]
Geo-PIFu is a method to recover a 3D mesh from a monocular color image of a clothed person.
We show that, by both encoding query points and constraining global shape using latent voxel features, the reconstruction we obtain for clothed human meshes exhibits less shape distortion and improved surface details compared to competing methods.
arXiv Detail & Related papers (2020-06-15T01:11:48Z)
- Monocular Human Pose and Shape Reconstruction using Part Differentiable Rendering [53.16864661460889]
Recent works succeed with regression-based methods that estimate parametric models directly through a deep neural network supervised by 3D ground truth.
In this paper, we introduce body segmentation as critical supervision.
To improve the reconstruction with part segmentation, we propose a part-level differentiable renderer that enables part-based models to be supervised by part segmentation.
arXiv Detail & Related papers (2020-03-24T14:25:46Z)
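The occupancy algebra stated in the DeepMend entry above lends itself to a one-line composition. Below is a minimal sketch under the assumption of soft (product) logic over occupancies in [0, 1]; the function names are hypothetical, and this illustrates the stated conjunction rather than the DeepMend implementation.

```python
# Minimal sketch of the occupancy algebra from the DeepMend entry above:
# the fractured shape is the conjunction of a complete shape and a "break"
# function, and the restoration is what the break removes. Continuous
# occupancies in [0, 1] are composed with soft logic here; this is an
# illustration, not the DeepMend implementation.
import torch

def fractured_occupancy(occ_complete: torch.Tensor, occ_break: torch.Tensor):
    # fractured = complete AND break  (soft conjunction via product)
    return occ_complete * occ_break

def restoration_occupancy(occ_complete: torch.Tensor, occ_break: torch.Tensor):
    # restoration = complete AND NOT break, so the two pieces partition
    # the complete shape along the fracture surface.
    return occ_complete * (1.0 - occ_break)

occ_c = torch.rand(1024)  # complete-shape occupancy at sample points
occ_b = torch.rand(1024)  # break-function occupancy at the same points
assert torch.allclose(
    fractured_occupancy(occ_c, occ_b) + restoration_occupancy(occ_c, occ_b), occ_c
)
```

The assertion holds because c*b + c*(1-b) = c: under this composition the fractured piece and the restoration exactly partition the complete shape.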