Info3D: Representation Learning on 3D Objects using Mutual Information
Maximization and Contrastive Learning
- URL: http://arxiv.org/abs/2006.02598v2
- Date: Sat, 22 Aug 2020 22:12:57 GMT
- Title: Info3D: Representation Learning on 3D Objects using Mutual Information
Maximization and Contrastive Learning
- Authors: Aditya Sanghi
- Abstract summary: We propose to extend the InfoMax and contrastive learning principles to 3D shapes.
We show that we can maximize the mutual information between 3D objects and their "chunks" to improve the representations on aligned datasets.
- Score: 8.448611728105513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A major endeavor of computer vision is to represent, understand and extract
structure from 3D data. Towards this goal, unsupervised learning is a powerful
and necessary tool. Most current unsupervised methods for 3D shape analysis
rely on aligned datasets, require objects to be reconstructed, and suffer
degraded performance on downstream tasks. To address these issues, we propose
to extend the InfoMax and contrastive learning principles to 3D shapes. We show
that we can maximize the mutual information between 3D objects and their
"chunks" to improve the representations on aligned datasets. Furthermore, we
can achieve rotation invariance in the SO$(3)$ group by maximizing the mutual
information between 3D objects and their geometrically transformed versions.
Finally, we conduct several experiments, such as clustering, transfer learning,
and shape retrieval, and achieve state-of-the-art results.
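To make the two objectives above concrete, here is a minimal PyTorch sketch, not the authors' released code: each object is paired with a local "chunk" and with a randomly rotated copy, and an InfoNCE loss, a contrastive lower bound on mutual information, pulls matching pairs together. The `encoder`, chunk size `k`, and temperature `tau` are illustrative assumptions; any permutation-invariant point-cloud encoder (e.g., a PointNet-style network) could stand in.

```python
# Illustrative sketch of Info3D-style objectives; not the paper's released code.
import torch
import torch.nn.functional as F

def random_rotation() -> torch.Tensor:
    """Sample a random rotation matrix from SO(3) via QR of a Gaussian matrix."""
    q, r = torch.linalg.qr(torch.randn(3, 3))
    q = q @ torch.diag(torch.sign(torch.diagonal(r)))  # remove the QR sign ambiguity
    if torch.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]  # flip one axis so det(q) = +1 (a proper rotation)
    return q

def random_chunk(points: torch.Tensor, k: int = 512) -> torch.Tensor:
    """Crop a local "chunk": the k nearest neighbours of a random anchor point."""
    anchor = points[torch.randint(points.shape[0], (1,))]  # (1, 3) anchor point
    dists = torch.cdist(points, anchor).squeeze(1)         # (N,) distances to anchor
    return points[dists.topk(k, largest=False).indices]

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """InfoNCE: a contrastive lower bound on MI between two batches of views."""
    z_a, z_b = F.normalize(z_a, dim=1), F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / tau                             # (B, B) similarities
    targets = torch.arange(z_a.shape[0], device=z_a.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)

def info3d_loss(encoder, batch: torch.Tensor) -> torch.Tensor:
    """batch: (B, N, 3) point clouds; sums the chunk and rotation objectives."""
    z_obj = encoder(batch)
    chunks = torch.stack([random_chunk(pc) for pc in batch])             # object <-> chunk MI
    rotated = torch.stack([pc @ random_rotation().t() for pc in batch])  # object <-> rotated MI
    return info_nce(z_obj, encoder(chunks)) + info_nce(z_obj, encoder(rotated))
```

Summing the two InfoNCE terms is one plausible way to combine the chunk objective (for aligned datasets) with the rotation objective (for SO(3) invariance), and it avoids the reconstruction step that the abstract identifies as a weakness of prior methods.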
Related papers
- Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding [50.448520056844885]
We propose a generative Bayesian network to produce diverse synthetic scenes with real-world patterns.
A series of experiments consistently demonstrates our method's superiority over existing state-of-the-art pre-training approaches.
arXiv Detail & Related papers (2024-06-17T07:43:53Z)
- UniG3D: A Unified 3D Object Generation Dataset [75.49544172927749]
UniG3D is a unified 3D object generation dataset constructed by employing a universal data transformation pipeline on ShapeNet datasets.
This pipeline converts each raw 3D model into a comprehensive multi-modal data representation.
The selection of data sources for our dataset is based on their scale and quality.
arXiv Detail & Related papers (2023-06-19T07:03:45Z)
- RandomRooms: Unsupervised Pre-training from Synthetic Shapes and Randomized Layouts for 3D Object Detection [138.2892824662943]
A promising solution is to make better use of the synthetic dataset, which consists of CAD object models, to boost the learning on real datasets.
Recent work on 3D pre-training exhibits failure when transferring features learned on synthetic objects to real-world applications.
In this work, we put forward a new method called RandomRooms to accomplish this objective.
arXiv Detail & Related papers (2021-08-17T17:56:12Z)
- Learning Feature Aggregation for Deep 3D Morphable Models [57.1266963015401]
We propose an attention-based module to learn mapping matrices for better feature aggregation across hierarchical levels.
Our experiments show that through the end-to-end training of the mapping matrices, we achieve state-of-the-art results on a variety of 3D shape datasets.
arXiv Detail & Related papers (2021-05-05T16:41:00Z)
- PointContrast: Unsupervised Pre-training for 3D Point Cloud Understanding [107.02479689909164]
In this work, we aim at facilitating research on 3D representation learning.
We measure the effect of unsupervised pre-training on a large source set of 3D scenes.
arXiv Detail & Related papers (2020-07-21T17:59:22Z)
- Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion [53.885984328273686]
Implicit Feature Networks (IF-Nets) deliver continuous outputs, can handle multiple topologies, and complete shapes for missing or sparse input data.
IF-Nets clearly outperform prior work on 3D object reconstruction on ShapeNet, and obtain significantly more accurate 3D human reconstructions.
arXiv Detail & Related papers (2020-03-03T11:14:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.