CLR-GAM: Contrastive Point Cloud Learning with Guided Augmentation and Feature Mapping
- URL: http://arxiv.org/abs/2302.14306v1
- Date: Tue, 28 Feb 2023 04:38:52 GMT
- Title: CLR-GAM: Contrastive Point Cloud Learning with Guided Augmentation and Feature Mapping
- Authors: Srikanth Malla, Yi-Ting Chen
- Abstract summary: We present CLR-GAM, a contrastive learning-based framework with Guided Augmentation (GA) for an efficient dynamic exploration strategy and Guided Feature Mapping (GFM) for associating structural features between augmented point clouds.
We empirically demonstrate that the proposed approach achieves state-of-the-art performance on both simulated and real-world 3D point cloud datasets.
- Score: 12.679625717350113
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Point cloud data plays an essential role in robotics and self-driving
applications. Yet annotating point cloud data is time-consuming and nontrivial,
even though such annotations enable learning discriminative 3D representations
that empower downstream tasks such as classification and segmentation. Recently,
contrastive learning-based frameworks have shown promising results for learning
3D representations in a self-supervised manner. However, existing contrastive
learning methods cannot precisely encode and associate structural features, nor
can they efficiently search the high-dimensional augmentation space. In this
paper, we present CLR-GAM, a novel contrastive learning-based framework with
Guided Augmentation (GA) for an efficient dynamic exploration strategy and
Guided Feature Mapping (GFM) for associating similar structural features between
augmented point clouds. We empirically demonstrate that the proposed approach
achieves state-of-the-art performance on both simulated and real-world 3D point
cloud datasets for three downstream tasks: 3D point cloud classification,
few-shot learning, and object part segmentation.
Related papers
- GS-PT: Exploiting 3D Gaussian Splatting for Comprehensive Point Cloud Understanding via Self-supervised Learning [15.559369116540097]
Self-supervised learning of point clouds aims to leverage unlabeled 3D data to learn meaningful representations without reliance on manual annotations.
We propose GS-PT, which integrates 3D Gaussian Splatting (3DGS) into point cloud self-supervised learning for the first time.
Our pipeline utilizes transformers as the backbone for self-supervised pre-training and introduces novel contrastive learning tasks through 3DGS.
arXiv Detail & Related papers (2024-09-08T03:46:47Z)
- Dynamic 3D Point Cloud Sequences as 2D Videos [81.46246338686478]
3D point cloud sequences serve as one of the most common and practical representation modalities of real-world environments.
We propose a novel generic representation called Structured Point Cloud Videos (SPCVs).
SPCVs re-organize a point cloud sequence as a 2D video with spatial smoothness and temporal consistency, where the pixel values correspond to the 3D coordinates of points.
arXiv Detail & Related papers (2024-03-02T08:18:57Z)
- Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration [107.61458720202984]
This paper introduces a novel self-supervised learning framework for enhancing 3D perception in autonomous driving scenes.
We propose the learnable transformation alignment to bridge the domain gap between image and point cloud data.
We establish dense 2D-3D correspondences to estimate the rigid pose.
arXiv Detail & Related papers (2024-01-23T02:41:06Z)
- Edge Aware Learning for 3D Point Cloud [8.12405696290333]
This paper proposes Hierarchical Edge Aware 3D Point Cloud Learning (HEA-Net), an approach that addresses noise in point cloud data and improves object recognition and segmentation by focusing on edge features.
We present an edge-aware learning methodology specifically designed to enhance point cloud classification and segmentation.
arXiv Detail & Related papers (2023-09-23T20:12:32Z)
- Background-Aware 3D Point Cloud Segmentation with Dynamic Point Feature Aggregation [12.093182949686781]
We propose a novel 3D point cloud learning network, referred to as the Dynamic Point Feature Aggregation Network (DPFA-Net).
DPFA-Net has two variants for semantic segmentation and classification of 3D point clouds.
It achieves the state-of-the-art overall accuracy score for semantic segmentation on the S3DIS dataset.
arXiv Detail & Related papers (2021-11-14T05:46:05Z)
- Unsupervised Representation Learning for 3D Point Cloud Data [66.92077180228634]
We propose a simple yet effective approach for unsupervised point cloud learning.
In particular, we identify a very useful transformation which generates a good contrastive version of an original point cloud.
We conduct experiments on three downstream tasks: 3D object classification, shape part segmentation, and scene segmentation.
arXiv Detail & Related papers (2021-10-13T10:52:45Z)
- Spatio-temporal Self-Supervised Representation Learning for 3D Point Clouds [96.9027094562957]
We introduce a spatio-temporal representation learning (STRL) framework, capable of learning from unlabeled 3D point clouds.
Inspired by how infants learn from visual data in the wild, we explore rich cues derived from the 3D data.
STRL takes two temporally related frames from a 3D point cloud sequence as input, transforms them with spatial data augmentation, and learns an invariant representation in a self-supervised manner.
arXiv Detail & Related papers (2021-09-01T04:17:11Z)
- Improving Point Cloud Semantic Segmentation by Learning 3D Object Detection [102.62963605429508]
Point cloud semantic segmentation plays an essential role in autonomous driving.
Current 3D semantic segmentation networks focus on convolutional architectures that perform well for well-represented classes.
We propose a novel Detection Aware 3D Semantic Segmentation (DASS) framework that explicitly leverages localization features from an auxiliary 3D object detection task.
arXiv Detail & Related papers (2020-09-22T14:17:40Z)
- Campus3D: A Photogrammetry Point Cloud Benchmark for Hierarchical Understanding of Outdoor Scene [76.4183572058063]
We present a richly-annotated 3D point cloud dataset for multiple outdoor scene understanding tasks.
The dataset has been annotated point-wise with both hierarchical and instance-based labels.
We formulate a hierarchical learning problem for 3D point cloud segmentation and propose a measurement evaluating consistency across various hierarchies.
arXiv Detail & Related papers (2020-08-11T19:10:32Z)
- Self-supervised Learning of Point Clouds via Orientation Estimation [19.31778462735251]
We leverage 3D self-supervision for learning downstream tasks on point clouds with fewer labels.
A point cloud can be rotated in infinitely many ways, which provides a rich, label-free source of self-supervision (a minimal sketch of this idea follows the list).
arXiv Detail & Related papers (2020-08-01T17:49:45Z)
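The orientation-estimation entry above treats rotation as a label-free supervisory signal. Below is a minimal sketch of a rotation-prediction pretext task in that spirit; the discrete rotation bins, the encoder, and the classification head are illustrative assumptions rather than the cited paper's exact design.

```python
# Minimal sketch of a rotation-prediction pretext task in the spirit of the
# orientation-estimation entry above. The discrete rotation bins, the encoder,
# and the classification head are illustrative assumptions, not the cited
# paper's exact design.
import math

import torch
import torch.nn.functional as F

NUM_BINS = 4  # e.g. rotations of 0, 90, 180, and 270 degrees about the z-axis


def rotate_z(points: torch.Tensor, angle: torch.Tensor) -> torch.Tensor:
    """Rotate point clouds (B, N, 3) about the z-axis by per-sample angles (B,)."""
    cos, sin = torch.cos(angle), torch.sin(angle)
    zeros, ones = torch.zeros_like(cos), torch.ones_like(cos)
    rot = torch.stack(
        [cos, -sin, zeros, sin, cos, zeros, zeros, zeros, ones], dim=-1
    ).view(-1, 3, 3)
    return torch.bmm(points, rot.transpose(1, 2))


def rotation_pretext_step(encoder: torch.nn.Module,
                          head: torch.nn.Module,
                          points: torch.Tensor) -> torch.Tensor:
    """Self-supervision: rotate each cloud by a random bin and predict which bin."""
    labels = torch.randint(0, NUM_BINS, (points.shape[0],), device=points.device)
    angles = labels.float() * (2 * math.pi / NUM_BINS)
    logits = head(encoder(rotate_z(points, angles)))   # (B, NUM_BINS) class scores
    return F.cross_entropy(logits, labels)
```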
This list is automatically generated from the titles and abstracts of the papers on this site.