Scalable Geometric Fracture Assembly via Co-creation Space among Assemblers
- URL: http://arxiv.org/abs/2312.12340v4
- Date: Mon, 15 Jan 2024 04:27:04 GMT
- Title: Scalable Geometric Fracture Assembly via Co-creation Space among Assemblers
- Authors: Ruiyuan Zhang, Jiaxiang Liu, Zexi Li, Hao Dong, Jie Fu, Chao Wu
- Abstract summary: We develop a scalable framework for geometric fracture assembly without relying on semantic information.
We introduce a novel loss function, i.e., the geometric-based collision loss, to address collision issues during the fracture assembly process.
Our framework exhibits better performance on both PartNet and Breaking Bad datasets compared to existing state-of-the-art frameworks.
- Score: 24.89380678499307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Geometric fracture assembly presents a challenging practical task in
archaeology and 3D computer vision. Previous methods have focused solely on
assembling fragments based on semantic information, which has limited the
quantity of objects that can be effectively assembled. Therefore, there is a
need to develop a scalable framework for geometric fracture assembly without
relying on semantic information. To improve the effectiveness of assembling
geometric fractures without semantic information, we propose a co-creation
space comprising several assemblers capable of gradually and unambiguously
assembling fractures. Additionally, we introduce a novel loss function, i.e.,
the geometric-based collision loss, to address collision issues during the
fracture assembly process and enhance the results. Our framework exhibits
better performance on both PartNet and Breaking Bad datasets compared to
existing state-of-the-art frameworks. Extensive experiments and quantitative
comparisons demonstrate the effectiveness of our proposed framework, which
features linear computational complexity, enhanced abstraction, and improved
generalization. Our code is publicly available at
https://github.com/Ruiyuan-Zhang/CCS.
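The abstract does not give the formula for the geometric-based collision loss; as a rough, hypothetical illustration only (not the paper's actual definition), a collision penalty between assembled fragments can be sketched as a hinge penalty on interpenetrating point clouds:

```python
import numpy as np

def collision_loss(fragments, threshold=0.05):
    """Hypothetical collision penalty: for every pair of fragment
    point clouds (each an (N, 3) array, already placed by its
    predicted pose), penalize points that come closer than
    `threshold` to the other fragment, i.e. likely interpenetration.
    """
    loss = 0.0
    for i in range(len(fragments)):
        for j in range(i + 1, len(fragments)):
            # pairwise distances between the two clouds: shape (N_i, N_j)
            d = np.linalg.norm(
                fragments[i][:, None, :] - fragments[j][None, :, :], axis=-1
            )
            # hinge penalty: only distances below the threshold contribute
            loss += np.maximum(threshold - d, 0.0).sum()
    return loss

# two well-separated fragments incur no penalty
a = np.zeros((4, 3))
b = np.ones((4, 3))
print(collision_loss([a, b]))  # 0.0
```

In practice such a term would be computed on differentiable tensors so its gradient pushes the predicted poses apart; the `threshold` value here is an arbitrary placeholder.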
Related papers
- 3D Geometric Shape Assembly via Efficient Point Cloud Matching [59.241448711254485]
We introduce Proxy Match Transform (PMT), an approximate high-order feature transform layer that enables reliable matching between mating surfaces of parts.
Building upon PMT, we introduce a new framework, dubbed Proxy Match TransformeR (PMTR), for the geometric assembly task.
We evaluate the proposed PMTR on the large-scale 3D geometric shape assembly benchmark dataset of Breaking Bad.
arXiv Detail & Related papers (2024-07-15T08:50:02Z)
- Breaking Bad: A Dataset for Geometric Fracture and Reassembly [47.2247928468233]
We introduce Breaking Bad, a large-scale dataset of fractured objects.
Our dataset consists of over one million fractured objects simulated from ten thousand base models.
arXiv Detail & Related papers (2022-10-20T17:57:19Z)
- 3D Part Assembly Generation with Instance Encoded Transformer [22.330218525999857]
We propose a multi-layer transformer-based framework that involves geometric and relational reasoning between parts to update the part poses iteratively.
We extend our framework to a new task called in-process part assembly.
Our method achieves improvements of more than 10% over the current state-of-the-art across multiple metrics on the public PartNet dataset.
arXiv Detail & Related papers (2022-07-05T02:40:57Z)
- Contextual Model Aggregation for Fast and Robust Federated Learning in Edge Computing [88.76112371510999]
Federated learning is a prime candidate for distributed machine learning at the network edge.
Existing algorithms face issues with slow convergence and/or robustness of performance.
We propose a contextual aggregation scheme that achieves the optimal context-dependent bound on loss reduction.
arXiv Detail & Related papers (2022-03-23T21:42:31Z)
- Unsupervised Part Discovery from Contrastive Reconstruction [90.88501867321573]
The goal of self-supervised visual representation learning is to learn strong, transferable image representations.
We propose an unsupervised approach to object part discovery and segmentation.
Our method yields semantic parts consistent across fine-grained but visually distinct categories.
arXiv Detail & Related papers (2021-11-11T17:59:42Z)
- RGL-NET: A Recurrent Graph Learning framework for Progressive Part Assembly [30.143946636770025]
We tackle the problem of developing a generalized framework for assembly robust to structural variants.
Our network can learn more plausible predictions of shape structure by accounting for priorly assembled parts.
Our resulting latent space facilitates exciting applications such as shape recovery from the point-cloud components.
arXiv Detail & Related papers (2021-07-27T14:47:43Z)
- Unsupervised Part Segmentation through Disentangling Appearance and Shape [37.206922180245265]
We study the problem of unsupervised discovery and segmentation of object parts.
Recent unsupervised methods have greatly relaxed the dependency on annotated data.
We develop a novel approach by disentangling the appearance and shape representations of object parts.
arXiv Detail & Related papers (2021-05-26T08:59:31Z)
- Image Co-skeletonization via Co-segmentation [102.59781674888657]
We propose a new joint processing topic: image co-skeletonization.
Object skeletonization in a single natural image is a challenging problem because there is hardly any prior knowledge about the object.
We propose a coupled framework for co-skeletonization and co-segmentation tasks so that they are well informed by each other.
arXiv Detail & Related papers (2020-04-12T09:35:54Z)
- Convolutional Occupancy Networks [88.48287716452002]
We propose Convolutional Occupancy Networks, a more flexible implicit representation for detailed reconstruction of objects and 3D scenes.
By combining convolutional encoders with implicit occupancy decoders, our model incorporates inductive biases, enabling structured reasoning in 3D space.
We empirically find that our method enables the fine-grained implicit 3D reconstruction of single objects, scales to large indoor scenes, and generalizes well from synthetic to real data.
arXiv Detail & Related papers (2020-03-10T10:17:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.