Surfel-based 3D Registration with Equivariant SE(3) Features
- URL: http://arxiv.org/abs/2508.20789v1
- Date: Thu, 28 Aug 2025 13:53:44 GMT
- Title: Surfel-based 3D Registration with Equivariant SE(3) Features
- Authors: Xueyang Kang, Hang Zhao, Kourosh Khoshelham, Patrick Vandewalle
- Abstract summary: Point cloud registration is crucial for ensuring 3D alignment consistency of multiple local point clouds in 3D reconstruction for remote sensing or digital heritage. We propose a novel surfel-based pose learning regression approach to address these issues. Our method can initialize surfels from a LiDAR point cloud using virtual perspective camera parameters, and learns explicit $\mathbf{SE(3)}$ equivariant features.
- Score: 34.796697445601914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point cloud registration is crucial for ensuring 3D alignment consistency of multiple local point clouds in 3D reconstruction for remote sensing or digital heritage. While various point cloud-based registration methods exist, both non-learning and learning-based, they ignore point orientations and point uncertainties, making the model susceptible to noisy input and aggressive rotations of the input point cloud, such as orthogonal transformations; this necessitates extensive training point clouds with transformation augmentations. To address these issues, we propose a novel surfel-based pose learning regression approach. Our method initializes surfels from a LiDAR point cloud using virtual perspective camera parameters, and learns explicit $\mathbf{SE(3)}$ equivariant features, including both position and rotation, through $\mathbf{SE(3)}$ equivariant convolutional kernels to predict the relative transformation between source and target scans. The model comprises an equivariant convolutional encoder, a cross-attention mechanism for similarity computation, a fully-connected decoder, and a non-linear Huber loss. Experimental results on indoor and outdoor datasets demonstrate our model's superiority and robust performance on real point-cloud scans compared to state-of-the-art methods.
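As a rough illustration of the loss term named in the abstract, a Huber loss applied to pose residuals could be sketched as follows. The 6-vector pose parameterization (translation plus axis-angle rotation) and the `delta` threshold are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def huber_loss(residual, delta=1.0):
    """Huber loss: quadratic near zero, linear for large residuals,
    which damps the influence of outlier pose errors."""
    abs_r = np.abs(residual)
    quadratic = 0.5 * abs_r ** 2
    linear = delta * (abs_r - 0.5 * delta)
    return np.where(abs_r <= delta, quadratic, linear)

# Residual between a predicted and a ground-truth relative pose
# (translation in meters, rotation as an axis-angle vector).
pred = np.array([0.10, -0.05, 0.02, 0.01, 0.00, 0.03])
gt   = np.zeros(6)
loss = huber_loss(pred - gt).sum()
```

The quadratic region keeps gradients smooth for small errors, while the linear region prevents a few badly registered scans from dominating training.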
Related papers
- A Lightweight 3D Anomaly Detection Method with Rotationally Invariant Features [60.76577388438418]
3D anomaly detection (AD) is a crucial task in computer vision, aiming to identify anomalous points or regions from point cloud data. Existing methods may encounter challenges when handling point clouds with changes in orientation and position because the resulting features may vary significantly. We propose a novel Rotationally Invariant Features (RIF) framework for 3D AD, which maps each point into a rotationally invariant space to maintain consistency of representation.
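The idea of mapping each point into a rotationally invariant space can be illustrated with a toy example. Distance-to-centroid and nearest-neighbor distance are stand-in invariants chosen here for simplicity; they are not the paper's actual RIF construction:

```python
import numpy as np

def rotation_invariant_features(points):
    """Map each point to simple rotation-invariant scalars:
    distance to the cloud centroid and distance to the nearest
    other point. Both are unchanged by any rigid rotation."""
    centroid = points.mean(axis=0)
    d_centroid = np.linalg.norm(points - centroid, axis=1)
    # Pairwise distances; mask the zero self-distance on the diagonal.
    pair = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(pair, np.inf)
    d_nearest = pair.min(axis=1)
    return np.stack([d_centroid, d_nearest], axis=1)

pts = np.random.default_rng(0).normal(size=(50, 3))
# Random orthogonal matrix acting as a rotation/reflection.
R = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))[0]
feats = rotation_invariant_features(pts)
feats_rot = rotation_invariant_features(pts @ R.T)
assert np.allclose(feats, feats_rot)  # features unchanged under rotation
```

Because both scalars depend only on inter-point distances, the descriptor stays identical however the cloud is oriented.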
arXiv Detail & Related papers (2025-11-17T08:16:05Z) - Adaptive Point-Prompt Tuning: Fine-Tuning Heterogeneous Foundation Models for 3D Point Cloud Analysis [51.37795317716487]
We propose the Adaptive Point-Prompt Tuning (APPT) method, which fine-tunes pre-trained models with a modest number of parameters. We convert raw point clouds into point embeddings by aggregating local geometry to capture spatial features followed by linear layers. To calibrate self-attention across source domains of any modality to 3D, we introduce a prompt generator that shares weights with the point embedding module.
arXiv Detail & Related papers (2025-08-30T06:02:21Z) - 3D Point Cloud Generation via Autoregressive Up-sampling [60.05226063558296]
We introduce a pioneering autoregressive generative model for 3D point cloud generation. Inspired by visual autoregressive modeling, we conceptualize point cloud generation as an autoregressive up-sampling process. PointARU progressively refines 3D point clouds from coarse to fine scales.
arXiv Detail & Related papers (2025-03-11T16:30:45Z) - Fully-Geometric Cross-Attention for Point Cloud Registration [51.865371511201765]
Point cloud registration approaches often fail when the overlap between point clouds is low due to noisy point correspondences. This work introduces a novel cross-attention mechanism tailored for Transformer-based architectures that tackles this problem. We integrate the Gromov-Wasserstein distance into the cross-attention formulation to jointly compute distances between points across different point clouds. At the point level, we also devise a self-attention mechanism that aggregates the local geometric structure information into point features for fine matching.
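The Gromov-Wasserstein idea of comparing intra-cloud distance structure can be sketched with a toy cost matrix. This sketch uses sorted per-point distance profiles as a stand-in for the actual formulation, and assumes equally sized clouds:

```python
import numpy as np

def gw_style_cost(X, Y):
    """Toy cost in the Gromov-Wasserstein spirit: compare the
    intra-cloud distance profile of each point in X with that of
    each point in Y. Matching points get a low cost regardless of
    any rigid transform between the clouds."""
    Dx = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # (n, n)
    Dy = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)  # (m, m)
    # Sort each row so profiles are comparable; drop the self-distance 0.
    px = np.sort(Dx, axis=1)[:, 1:]
    py = np.sort(Dy, axis=1)[:, 1:]
    return np.linalg.norm(px[:, None] - py[None, :], axis=-1)  # (n, m)

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
R = np.linalg.qr(rng.normal(size=(3, 3)))[0]      # random orthogonal matrix
Y = X @ R.T + np.array([1.0, -2.0, 0.5])          # rigidly transformed copy
C = gw_style_cost(X, Y)
assert (C.argmin(axis=1) == np.arange(8)).all()   # recovers the matching
```

Because rigid transforms preserve all pairwise distances, the cost between true correspondences is near zero even though the clouds live in different poses.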
arXiv Detail & Related papers (2025-02-12T10:44:36Z) - Equi-GSPR: Equivariant SE(3) Graph Network Model for Sparse Point Cloud Registration [2.814748676983944]
We propose a graph neural network model embedded with a local Spherical Euclidean 3D equivariance property through SE(3) message passing based propagation.
Our model is composed mainly of a descriptor module, equivariant graph layers, match similarity, and the final regression layers.
Experiments conducted on the 3DMatch and KITTI datasets exhibit the compelling and robust performance of our model compared to state-of-the-art approaches.
arXiv Detail & Related papers (2024-10-08T06:48:01Z) - PointDifformer: Robust Point Cloud Registration With Neural Diffusion and Transformer [31.02661827570958]
Point cloud registration is a fundamental technique in 3-D computer vision with applications in graphics, autonomous driving, and robotics.
We propose a robust point cloud registration approach that leverages graph neural partial differential equations (PDEs) and heat kernel signatures.
Empirical experiments on a 3-D point cloud dataset demonstrate that our approach not only achieves state-of-the-art performance for point cloud registration but also exhibits better robustness to additive noise or 3-D shape perturbations.
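The heat kernel signature mentioned above has a standard spectral definition, $\mathrm{HKS}(i,t) = \sum_k e^{-\lambda_k t}\,\phi_k(i)^2$, built from the eigenpairs of a Laplacian. A minimal sketch on a small graph Laplacian (the 4-node path graph is purely illustrative, not the paper's setup):

```python
import numpy as np

def heat_kernel_signature(L, times):
    """Heat kernel signature of each node: HKS(i, t) =
    sum_k exp(-lambda_k * t) * phi_k(i)^2, an isometry-invariant
    descriptor computed from the Laplacian spectrum."""
    lam, phi = np.linalg.eigh(L)   # eigenvalues ascending, eigenvectors as columns
    return np.stack(
        [(np.exp(-lam * t) * phi ** 2).sum(axis=1) for t in times], axis=1
    )

# Laplacian of a 4-node path graph: 0 - 1 - 2 - 3.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
hks = heat_kernel_signature(L, times=[0.1, 1.0, 10.0])  # shape (4, 3)
```

The path graph is symmetric under reversal, so the end nodes (0, 3) and the middle nodes (1, 2) receive identical signatures, which is exactly the invariance that makes the descriptor useful for robust matching.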
arXiv Detail & Related papers (2024-04-22T09:50:12Z) - Self-supervised Learning of LiDAR 3D Point Clouds via 2D-3D Neural Calibration [107.61458720202984]
This paper introduces a novel self-supervised learning framework for enhancing 3D perception in autonomous driving scenes. We propose the learnable transformation alignment to bridge the domain gap between image and point cloud data. We establish dense 2D-3D correspondences to estimate the rigid pose.
arXiv Detail & Related papers (2024-01-23T02:41:06Z) - Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike the current de-facto scene-wise training paradigm, our algorithm conducts within-class clustering on the point embedding space.
It shows notable improvements on well-known point cloud segmentation datasets.
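The within-class clustering idea can be sketched as a tiny per-class k-means over point embeddings. This is a minimal sketch under assumed inputs; the paper's actual supervised scheme is more involved:

```python
import numpy as np

def within_class_clusters(embeddings, labels, k=2, iters=10, seed=0):
    """Toy within-class clustering: run a small k-means separately on
    the embeddings of each semantic class, so each class is represented
    by several sub-class centroids rather than a single one."""
    rng = np.random.default_rng(seed)
    centroids = {}
    for c in np.unique(labels):
        E = embeddings[labels == c]
        centers = E[rng.choice(len(E), size=k, replace=False)]
        for _ in range(iters):
            # Assign each embedding to its nearest center, then update.
            assign = np.linalg.norm(E[:, None] - centers[None], axis=-1).argmin(axis=1)
            for j in range(k):
                if (assign == j).any():
                    centers[j] = E[assign == j].mean(axis=0)
        centroids[c] = centers
    return centroids

rng = np.random.default_rng(1)
# Class 0 contains two sub-populations; class 1 contains one.
emb = np.vstack([rng.normal(0, 0.1, (20, 4)),
                 rng.normal(3, 0.1, (20, 4)),
                 rng.normal(6, 0.1, (20, 4))])
labels = np.array([0] * 40 + [1] * 20)
cents = within_class_clusters(emb, labels, k=2)
```

Clustering inside each class lets sub-populations of a class (e.g. chairs of different styles) get their own centroids instead of being averaged together.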
arXiv Detail & Related papers (2023-07-27T03:42:12Z) - A Representation Separation Perspective to Correspondences-free Unsupervised 3D Point Cloud Registration [40.12490804387776]
3D point cloud registration in the remote sensing field has been greatly advanced by deep learning based methods.
We propose a correspondences-free unsupervised point cloud registration (UPCR) method from the representation separation perspective.
Our method not only filters out the disturbance in the pose-invariant representation but is also robust to partial-to-partial point clouds and noise.
arXiv Detail & Related papers (2022-03-24T17:50:19Z) - RIConv++: Effective Rotation Invariant Convolutions for 3D Point Clouds Deep Learning [32.18566879365623]
Deep learning on 3D point clouds is a promising field of research that allows a neural network to learn features of point clouds directly.
We propose a simple yet effective convolution operator that enhances feature distinction by designing powerful rotation invariant features from the local regions.
Our network architecture can capture both local and global context by simply tuning the neighborhood size in each convolution layer.
arXiv Detail & Related papers (2022-02-26T08:32:44Z) - Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.