NoiseSDF2NoiseSDF: Learning Clean Neural Fields from Noisy Supervision
- URL: http://arxiv.org/abs/2507.13595v1
- Date: Fri, 18 Jul 2025 00:58:42 GMT
- Title: NoiseSDF2NoiseSDF: Learning Clean Neural Fields from Noisy Supervision
- Authors: Tengkai Wang, Weihao Li, Ruikai Cui, Shi Qiu, Nick Barnes
- Abstract summary: NoiseSDF2NoiseSDF is inspired by the Noise2Noise paradigm for 2D images. Our approach enables learning clean neural SDFs directly from noisy point clouds through noisy supervision. We evaluate the effectiveness of NoiseSDF2NoiseSDF on benchmarks including the ShapeNet, ABC, Famous, and Real datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reconstructing accurate implicit surface representations from point clouds remains a challenging task, particularly when data is captured using low-quality scanning devices. These point clouds often contain substantial noise, leading to inaccurate surface reconstructions. Inspired by the Noise2Noise paradigm for 2D images, we introduce NoiseSDF2NoiseSDF, a novel method designed to extend this concept to 3D neural fields. Our approach enables learning clean neural SDFs directly from noisy point clouds through noisy supervision by minimizing the MSE loss between noisy SDF representations, allowing the network to implicitly denoise and refine surface estimations. We evaluate the effectiveness of NoiseSDF2NoiseSDF on benchmarks, including the ShapeNet, ABC, Famous, and Real datasets. Experimental results demonstrate that our framework significantly improves surface reconstruction quality from noisy inputs.
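The statistical principle behind the paper's loss can be shown with a toy example (a sketch of the Noise2Noise idea the method builds on, not the authors' network or loss implementation): when supervision targets are clean values corrupted by zero-mean noise, the MSE-minimizing prediction converges to the clean value, so a network trained on noisy SDF targets can still converge to a clean SDF.

```python
import numpy as np

# Toy illustration of the Noise2Noise principle: the minimizer of
# mean((pred - target)^2) over noisy targets is their mean, which
# approaches the clean value as the noise averages out.

rng = np.random.default_rng(0)

clean_sdf = 0.25                      # true signed distance at one query point
noisy_targets = clean_sdf + rng.normal(0.0, 0.1, size=10_000)  # noisy supervision

# A constant prediction that minimizes the MSE is the sample mean.
pred = noisy_targets.mean()

print(abs(pred - clean_sdf))  # small: noisy MSE supervision recovers the clean SDF
```

The same argument is what licenses minimizing the MSE between two noisy SDF representations of the same surface: neither target is clean, but the noise is zero-mean, so the optimum is the clean field.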
Related papers
- Fast Learning of Signed Distance Functions from Noisy Point Clouds via Noise to Noise Mapping [54.38209327518066]
Learning signed distance functions from point clouds is an important task in 3D computer vision.
We propose to learn SDFs via a noise to noise mapping, which does not require any clean point cloud or ground truth supervision.
Our novelty lies in the noise to noise mapping which can infer a highly accurate SDF of a single object or scene from its multiple or even single noisy observations.
arXiv Detail & Related papers (2024-07-04T03:35:02Z)
- StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z)
- Learning Signed Distance Functions from Noisy 3D Point Clouds via Noise to Noise Mapping [52.25114448281418]
Learning signed distance functions (SDFs) from 3D point clouds is an important task in 3D computer vision.
We propose to learn SDFs via a noise to noise mapping, which does not require any clean point cloud or ground truth supervision for training.
Our novelty lies in the noise to noise mapping which can infer a highly accurate SDF of a single object or scene from its multiple or even single noisy point cloud observations.
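The leverage that multiple noisy observations provide can be sketched with a toy example (illustrative only, not the noise-to-noise mapping itself): several noisy scans of the same surface, here points on the unit circle, where aggregating corresponding observations pulls samples back toward the true surface.

```python
import numpy as np

# Toy multiple-noisy-observations setting: 20 noisy scans of the same
# points on the unit circle. The per-point average across scans lies far
# closer to the surface than any single scan, which is the statistical
# signal a noise-to-noise mapping can exploit.

rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 300)
clean = np.column_stack([np.cos(theta), np.sin(theta)])     # points on the circle

scans = [clean + rng.normal(0.0, 0.05, clean.shape) for _ in range(20)]

# Mean distance to the surface (|radius - 1|) for one scan vs. the aggregate.
err_single = np.abs(np.linalg.norm(scans[0], axis=1) - 1).mean()
err_avg = np.abs(np.linalg.norm(np.mean(scans, axis=0), axis=1) - 1).mean()
print(err_single, err_avg)
```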
arXiv Detail & Related papers (2023-06-02T09:52:04Z)
- Semantic Scene Completion with Cleaner Self [93.99441599791275]
Semantic Scene Completion (SSC) transforms single-view depth and/or RGB 2D pixels into 3D voxels, each with a predicted semantic label.
SSC is a well-known ill-posed problem, as the prediction model has to "imagine" what lies behind the visible surface, which is usually represented by a Truncated Signed Distance Function (TSDF).
We use the ground-truth 3D voxels to generate a perfect visible surface, called TSDF-CAD, and then train a "cleaner" SSC model.
As the model is noise-free, it is expected to …
arXiv Detail & Related papers (2023-03-17T13:50:18Z)
- CLONeR: Camera-Lidar Fusion for Occupancy Grid-aided Neural Representations [77.90883737693325]
This paper proposes CLONeR, which significantly improves upon NeRF by allowing it to model large outdoor driving scenes observed from sparse input sensor views.
This is achieved by decoupling occupancy and color learning within the NeRF framework into separate Multi-Layer Perceptrons (MLPs) trained using LiDAR and camera data, respectively.
In addition, this paper proposes a novel method to build differentiable 3D Occupancy Grid Maps (OGM) alongside the NeRF model, and leverage this occupancy grid for improved sampling of points along a ray for rendering in metric space.
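The decoupling described above can be sketched structurally (layer sizes and the plain-numpy MLPs are assumptions for illustration, not CLONeR's actual architecture): one MLP maps a 3D point to an occupancy value and would be supervised by LiDAR, while a separate MLP maps the point to RGB and would be supervised by camera images.

```python
import numpy as np

# Structural sketch of decoupled occupancy and color MLPs. Weights are
# random; in the real system each network is trained against its own
# sensor modality (LiDAR for occupancy, camera for color).

def make_mlp(sizes, seed):
    rng = np.random.default_rng(seed)
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes, sizes[1:])]

def forward(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)        # ReLU on hidden layers
    return x

occupancy_mlp = make_mlp([3, 64, 64, 1], seed=0)  # xyz -> occupancy (LiDAR-supervised)
color_mlp = make_mlp([3, 64, 64, 3], seed=1)      # xyz -> RGB (camera-supervised)

pts = np.random.default_rng(2).uniform(-1, 1, (8, 3))  # sample points along a ray
occ = forward(occupancy_mlp, pts)   # shape (8, 1)
rgb = forward(color_mlp, pts)       # shape (8, 3)
print(occ.shape, rgb.shape)
```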
arXiv Detail & Related papers (2022-09-02T17:44:50Z)
- Neural-IMLS: Self-supervised Implicit Moving Least-Squares Network for Surface Reconstruction [42.00765652948473]
We introduce Neural-IMLS, a novel approach that directly learns the noise-resistant signed distance function (SDF) from raw point clouds.
We also prove that at the convergence, our neural network, benefiting from the mutual learning mechanism between the IMLS and the SDF, produces a faithful SDF whose zero-level set approximates the underlying surface.
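The classic IMLS signed distance that this line of work builds on is sdf(x) = sum_i w_i(x) (x - p_i)·n_i / sum_i w_i(x) with Gaussian weights w_i(x) = exp(-||x - p_i||^2 / h^2). A minimal sketch (sample data and bandwidth h are assumptions; this is the textbook formula, not the Neural-IMLS network):

```python
import numpy as np

# IMLS signed distance to an oriented point set. For points on the plane
# z = 0 with normals +z, every term (x - p_i)·n_i equals the query's z
# coordinate, so the weighted average returns the exact plane distance.

rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-1, 1, (200, 2)), np.zeros(200)])  # plane z = 0
normals = np.tile([0.0, 0.0, 1.0], (200, 1))

def imls_sdf(x, pts, normals, h=0.3):
    diff = x - pts
    w = np.exp(-np.sum(diff**2, axis=1) / h**2)          # Gaussian weights
    return np.sum(w * np.sum(diff * normals, axis=1)) / np.sum(w)

q = np.array([0.1, -0.2, 0.35])
print(imls_sdf(q, pts, normals))   # 0.35, the signed distance to the plane
```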
arXiv Detail & Related papers (2021-09-09T16:37:01Z)
- Differentiable Manifold Reconstruction for Point Cloud Denoising [23.33652755967715]
3D point clouds are often perturbed by noise due to the inherent limitations of acquisition equipment.
We propose to learn the underlying manifold of a noisy point cloud from differentiably subsampled points.
We show that our method significantly outperforms state-of-the-art denoising methods under both synthetic and real-world noise.
arXiv Detail & Related papers (2020-07-27T13:31:41Z)
- Non-Local Part-Aware Point Cloud Denoising [55.50360085086123]
This paper presents a novel non-local part-aware deep neural network to denoise point clouds.
We design the non-local learning unit (NLU) customized with a graph attention module to adaptively capture non-local semantically-related features.
To enhance the denoising performance, we cascade a series of NLUs to progressively distill the noise features from the noisy inputs.
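The effect of cascading denoising stages can be shown with a toy cascade (a plain local-averaging stage, not the paper's NLU/graph-attention design): each stage moves every point toward the mean of its k nearest neighbors, so repeated stages progressively reduce noise for points sampled from a smooth surface, here the plane z = 0.

```python
import numpy as np

# Progressive denoising by cascaded smoothing stages. Each stage is a
# step toward the local neighborhood mean; for zero-mean noise around a
# plane, the mean |z| deviation shrinks stage by stage.

rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, (500, 2))
pts = np.column_stack([xy, rng.normal(0.0, 0.05, 500)])  # noisy z around plane z = 0

def stage(p, k=16, step=0.5):
    d = np.linalg.norm(p[:, None, :] - p[None, :, :], axis=-1)  # pairwise distances
    idx = np.argsort(d, axis=1)[:, 1:k + 1]                     # k nearest neighbors
    return p + step * (p[idx].mean(axis=1) - p)                 # move toward local mean

before = np.abs(pts[:, 2]).mean()
for _ in range(3):                 # cascade of three stages
    pts = stage(pts)
after = np.abs(pts[:, 2]).mean()
print(before, after)               # noise level drops across the cascade
```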
arXiv Detail & Related papers (2020-03-14T13:51:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.