uSF: Learning Neural Semantic Field with Uncertainty
- URL: http://arxiv.org/abs/2312.08012v2
- Date: Mon, 10 Jun 2024 00:22:46 GMT
- Title: uSF: Learning Neural Semantic Field with Uncertainty
- Authors: Vsevolod Skorokhodov, Darya Drozdova, Dmitry Yudin
- Abstract summary: We propose a new neural network model for the formation of extended vector representations, called uSF.
We show that with a small number of images available for training, a model quantifying uncertainty performs better than a model without such functionality.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, there has been an increased interest in NeRF methods which reconstruct a differentiable representation of three-dimensional scenes. One of the main limitations of such methods is their inability to assess the confidence of the model in its predictions. In this paper, we propose a new neural network model for the formation of extended vector representations, called uSF, which allows the model to predict not only the color and semantic label of each point, but also to estimate the corresponding values of uncertainty. We show that with a small number of images available for training, a model quantifying uncertainty performs better than a model without such functionality. Code of the uSF approach is publicly available at https://github.com/sevashasla/usf/.
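The abstract describes a per-point head that outputs color, a semantic label, and uncertainty values for both predictions. A minimal sketch of such an extended prediction head is shown below; all names, layer sizes, and the single-linear-layer structure are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM, NUM_CLASSES = 64, 5

# Toy weights for a single linear head (stand-ins for trained parameters).
W_color = rng.normal(size=(FEAT_DIM, 3))
W_sem = rng.normal(size=(FEAT_DIM, NUM_CLASSES))
W_unc = rng.normal(size=(FEAT_DIM, 2))  # one uncertainty value per task


def softplus(x):
    return np.log1p(np.exp(x))


def usf_head(feat):
    """Map a per-point feature vector to (rgb, class probs, uncertainties)."""
    rgb = 1.0 / (1.0 + np.exp(-feat @ W_color))  # sigmoid -> values in [0, 1]
    logits = feat @ W_sem
    probs = np.exp(logits - logits.max())        # stable softmax
    probs /= probs.sum()
    unc = softplus(feat @ W_unc)                 # positive uncertainty values
    return rgb, probs, unc


feat = rng.normal(size=FEAT_DIM)
rgb, probs, unc = usf_head(feat)
print(rgb.shape, probs.shape, unc.shape)  # (3,) (5,) (2,)
```

In a full NeRF-style pipeline these per-point outputs would be accumulated along rays by volume rendering; the sketch only covers the per-point mapping.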
Related papers
- Derivative-Free Diffusion Manifold-Constrained Gradient for Unified XAI [59.96044730204345]
We introduce Derivative-Free Diffusion Manifold-Constrained Gradients (FreeMCG)
FreeMCG serves as an improved basis for explainability of a given neural network.
We show that our method yields state-of-the-art results while preserving the essential properties expected of XAI tools.
arXiv Detail & Related papers (2024-11-22T11:15:14Z) - Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been believed to be a challenging property to encode for neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z) - ProvNeRF: Modeling per Point Provenance in NeRFs as a Stochastic Field [52.09661042881063]
We propose an approach that models the provenance for each point -- i.e., the locations where it is likely visible -- of NeRFs as a stochastic field.
We show that modeling per-point provenance during the NeRF optimization enriches the model with information, leading to improvements in novel view synthesis and uncertainty estimation.
arXiv Detail & Related papers (2024-01-16T06:19:18Z) - Manipulating Feature Visualizations with Gradient Slingshots [54.31109240020007]
We introduce a novel method for manipulating Feature Visualization (FV) without significantly impacting the model's decision-making process.
We evaluate the effectiveness of our method on several neural network models and demonstrate its capabilities to hide the functionality of arbitrarily chosen neurons.
arXiv Detail & Related papers (2024-01-11T18:57:17Z) - A Probabilistic Attention Model with Occlusion-aware Texture Regression
for 3D Hand Reconstruction from a Single RGB Image [5.725477071353354]
Deep learning approaches have shown promising results in 3D hand reconstruction from a single RGB image.
We propose a novel probabilistic model to achieve the robustness of model-based approaches.
We demonstrate the flexibility of the proposed probabilistic model to be trained in both supervised and weakly-supervised scenarios.
arXiv Detail & Related papers (2023-04-27T16:02:32Z) - Formalising the Robustness of Counterfactual Explanations for Neural
Networks [16.39168719476438]
We introduce an abstraction framework based on interval neural networks to verify the robustness of CFXs.
We show how embedding Delta-robustness within existing methods can provide CFXs which are provably robust.
arXiv Detail & Related papers (2022-08-31T14:11:23Z) - Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty
Quantification [44.598503284186336]
Conditional-Flow NeRF (CF-NeRF) is a novel probabilistic framework to incorporate uncertainty quantification into NeRF-based approaches.
CF-NeRF learns a distribution over all possible radiance fields modelling the scene, which is used to quantify the uncertainty associated with the modelled scene.
arXiv Detail & Related papers (2022-03-18T23:26:20Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
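The NUQ summary above refers to a Nadaraya-Watson nonparametric estimate of the conditional label distribution. The following is my own minimal illustration of that estimate (not the NUQ code): p(y = c | x) is approximated by the kernel-weighted fraction of training labels equal to c, with a Gaussian kernel; a small total kernel mass indicates the query lies far from the training data.

```python
import numpy as np


def nw_label_distribution(x, X_train, y_train, num_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x) with a Gaussian kernel."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))  # kernel weights per sample
    p = np.zeros(num_classes)
    for wi, yi in zip(w, y_train):
        p[yi] += wi
    total = w.sum()
    return p / total, total  # label distribution and kernel mass (density proxy)


# Two nearby class-0 points and one far-away class-1 point.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
y = np.array([0, 0, 1])
p, mass = nw_label_distribution(np.array([0.05, 0.0]), X, y, num_classes=2)
```

Here `p` concentrates almost entirely on class 0, since the query sits between the two class-0 samples; entropy of `p` or a low `mass` can then serve as an uncertainty score.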
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Stochastic Neural Radiance Fields: Quantifying Uncertainty in Implicit 3D
Representations [19.6329380710514]
Uncertainty quantification is a long-standing problem in Machine Learning.
We propose Stochastic Neural Radiance Fields (S-NeRF), a generalization of standard NeRF that learns a probability distribution over all the possible radiance fields modeling the scene.
S-NeRF is able to provide more reliable predictions and confidence values than generic approaches previously proposed for uncertainty estimation in other domains.
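The basic idea behind such distribution-based uncertainty can be sketched as follows (an illustrative toy, not the S-NeRF method itself): if each point's radiance is represented by a distribution rather than a single value, drawing several samples yields both a mean prediction and a per-channel spread usable as a confidence measure.

```python
import numpy as np

rng = np.random.default_rng(1)


def sample_radiance(mu, log_sigma, num_samples=100):
    """Draw samples from a per-point Gaussian radiance distribution."""
    sigma = np.exp(log_sigma)
    samples = rng.normal(mu, sigma, size=(num_samples,) + np.shape(mu))
    return samples.mean(axis=0), samples.std(axis=0)


mu = np.array([0.2, 0.5, 0.8])            # predicted mean RGB at a point
log_sigma = np.array([-3.0, -1.0, -3.0])  # channel 1 is more uncertain

mean, std = sample_radiance(mu, log_sigma, num_samples=5000)
```

Averaging such per-point statistics along a ray would give a per-pixel confidence map alongside the rendered color.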
arXiv Detail & Related papers (2021-09-05T16:56:43Z) - Probabilistic Modeling for Human Mesh Recovery [73.11532990173441]
This paper focuses on the problem of 3D human reconstruction from 2D evidence.
We recast the problem as learning a mapping from the input to a distribution of plausible 3D poses.
arXiv Detail & Related papers (2021-08-26T17:55:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.