Neural-IMLS: Self-supervised Implicit Moving Least-Squares Network for
Surface Reconstruction
- URL: http://arxiv.org/abs/2109.04398v4
- Date: Wed, 6 Sep 2023 06:47:49 GMT
- Title: Neural-IMLS: Self-supervised Implicit Moving Least-Squares Network for
Surface Reconstruction
- Authors: Zixiong Wang, Pengfei Wang, Pengshuai Wang, Qiujie Dong, Junjie Gao,
Shuangmin Chen, Shiqing Xin, Changhe Tu, Wenping Wang
- Abstract summary: We introduce Neural-IMLS, a novel approach that directly learns the noise-resistant signed distance function (SDF) from raw point clouds.
We also prove that, at convergence, our neural network, benefiting from the mutual learning mechanism between the MLP and the IMLS, produces a faithful SDF whose zero-level set approximates the underlying surface.
- Score: 42.00765652948473
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Surface reconstruction is very challenging when the input point clouds,
particularly real scans, are noisy and lack normals. Observing that the
Multilayer Perceptron (MLP) and the implicit moving least-square function
(IMLS) provide a dual representation of the underlying surface, we introduce
Neural-IMLS, a novel approach that directly learns the noise-resistant signed
distance function (SDF) from unoriented raw point clouds in a self-supervised
fashion. We use the IMLS to regularize the distance values reported by the MLP
while using the MLP to regularize the normals of the data points for running
the IMLS. We also prove that, at convergence, our neural network, benefiting
from the mutual learning mechanism between the MLP and the IMLS, produces a
faithful SDF whose zero-level set approximates the underlying surface. We
conducted extensive experiments on various benchmarks, including synthetic
scans and real scans. The experimental results show that {\em Neural-IMLS} can
reconstruct faithful shapes on various benchmarks with noise and missing parts.
The source code can be found at~\url{https://github.com/bearprin/Neural-IMLS}.
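As a rough illustration of the mutual-regularisation idea described in the abstract, the sketch below is a hypothetical PyTorch rendering, not the authors' released implementation. It assumes an MLP f_theta that maps 3D points to signed distances, takes point normals from the gradient of that MLP, evaluates an IMLS distance with Gaussian weights over a k-nearest-neighbour support, and penalises the gap between the MLP's prediction and the IMLS value. The helper names, the bandwidth sigma, and the neighbourhood size k are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def mlp_normals(f_theta, points):
        # Hypothetical helper: take per-point normals as the normalised gradient
        # of the MLP's predicted signed distance at the input points.
        points = points.detach().requires_grad_(True)
        sdf = f_theta(points)                                   # (N, 1) predicted distances
        grad, = torch.autograd.grad(sdf.sum(), points, create_graph=True)
        return F.normalize(grad, dim=-1)                        # (N, 3) unit normals

    def imls_consistency_loss(f_theta, queries, points, sigma=0.05, k=16):
        # Sketch of the mutual regularisation: the MLP's distance at each query
        # is pulled toward the IMLS distance, while the IMLS itself uses normals
        # taken from the MLP.
        normals = mlp_normals(f_theta, points)                  # (N, 3)
        dists = torch.cdist(queries, points)                    # (Q, N) pairwise distances
        knn_d, idx = dists.topk(k, dim=-1, largest=False)       # local IMLS support
        nn_pts, nn_nrm = points[idx], normals[idx]              # (Q, k, 3) each
        w = torch.exp(-(knn_d / sigma) ** 2)                    # Gaussian IMLS weights
        plane = ((queries.unsqueeze(1) - nn_pts) * nn_nrm).sum(-1)   # point-to-plane terms
        imls = (w * plane).sum(-1) / (w.sum(-1) + 1e-8)         # (Q,) IMLS distances
        return ((f_theta(queries).squeeze(-1) - imls) ** 2).mean()

A training loop built on this sketch would sample query points near the raw scan, minimise the loss over f_theta's parameters, and extract the zero-level set of the learned SDF (e.g. with marching cubes) to obtain the reconstructed surface; these training details are likewise assumptions rather than the paper's exact procedure.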
Related papers
- From MLP to NeoMLP: Leveraging Self-Attention for Neural Fields [26.659511924272962]
We develop a new type of connectionism based on hidden and scalable nodes, called NeoMLP. We demonstrate the effectiveness of our method by fitting high-resolution signals, including multi-modal audio-visual data.
arXiv Detail & Related papers (2024-12-11T19:01:38Z) - SL$^{2}$A-INR: Single-Layer Learnable Activation for Implicit Neural Representation [6.572456394600755]
Implicit Neural Representation (INR) leveraging a neural network to transform coordinate input into corresponding attributes has driven significant advances in vision-related domains.
We show that these challenges can be alleviated by introducing a novel approach in INR architecture.
Specifically, we propose SL$^{2}$A-INR, a hybrid network that combines a single-layer learnable activation function with an MLP that uses traditional ReLU activations.
arXiv Detail & Related papers (2024-09-17T02:02:15Z) - Coordinate-Aware Modulation for Neural Fields [11.844561374381575]
We propose a novel way of exploiting both MLPs and grid representations in neural fields.
We suggest a Neural Coordinate-Aware Modulation (CAM), which modulates the parameters using scale and shift features extracted from the grid representations.
arXiv Detail & Related papers (2023-11-25T10:42:51Z) - MLP-SRGAN: A Single-Dimension Super Resolution GAN using MLP-Mixer [0.05219568203653523]
We propose a novel architecture called MLP-SRGAN, which is a single-dimension Super Resolution Generative Adversarial Network (SRGAN).
MLP-SRGAN is trained and validated using high resolution (HR) FLAIR MRI from the MSSEG2 challenge dataset.
Results show that MLP-SRGAN produces sharper edges and less blurring, preserves more texture and fine anatomical detail, and has fewer parameters, faster training/evaluation time, and smaller model size than existing methods.
arXiv Detail & Related papers (2023-03-11T04:05:57Z) - Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z) - Sign Language Recognition via Skeleton-Aware Multi-Model Ensemble [71.97020373520922]
Sign language is commonly used by deaf or mute people to communicate.
We propose a novel Multi-modal Framework with a Global Ensemble Model (GEM) for isolated Sign Language Recognition (SLR).
Our proposed SAM-SLR-v2 framework is exceedingly effective and achieves state-of-the-art performance with significant margins.
arXiv Detail & Related papers (2021-10-12T16:57:18Z) - A journey in ESN and LSTM visualisations on a language task [77.34726150561087]
We trained ESNs and LSTMs on a Cross-Situational Learning (CSL) task.
The results are of three kinds: performance comparison, internal dynamics analyses and visualization of latent space.
arXiv Detail & Related papers (2020-12-03T08:32:01Z) - Object Tracking through Residual and Dense LSTMs [67.98948222599849]
Deep learning-based trackers built on LSTM (Long Short-Term Memory) recurrent neural networks have emerged as a powerful alternative.
DenseLSTMs outperform Residual and regular LSTM, and offer a higher resilience to nuisances.
Our case study supports the adoption of residual-based RNNs for enhancing the robustness of other trackers.
arXiv Detail & Related papers (2020-06-22T08:20:17Z) - Modal Regression based Structured Low-rank Matrix Recovery for
Multi-view Learning [70.57193072829288]
Low-rank Multi-view Subspace Learning (LMvSL) has shown great potential in cross-view classification in recent years.
Existing LMvSL-based methods cannot handle view discrepancy and discriminancy well simultaneously.
We propose Structured Low-rank Matrix Recovery (SLMR), a unique method of effectively removing view discrepancy and improving discriminancy.
arXiv Detail & Related papers (2020-03-22T03:57:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.