SimNP: Learning Self-Similarity Priors Between Neural Points
- URL: http://arxiv.org/abs/2309.03809v2
- Date: Fri, 12 Jul 2024 19:45:41 GMT
- Title: SimNP: Learning Self-Similarity Priors Between Neural Points
- Authors: Christopher Wewer, Eddy Ilg, Bernt Schiele, Jan Eric Lenssen
- Abstract summary: SimNP is a method to learn category-level self-similarities.
We show that SimNP is able to outperform previous methods in reconstructing symmetric unseen object regions.
- Score: 52.4201466988562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing neural field representations for 3D object reconstruction either (1) utilize object-level representations, but suffer from low-quality details due to conditioning on a global latent code, or (2) are able to perfectly reconstruct the observations, but fail to utilize object-level prior knowledge to infer unobserved regions. We present SimNP, a method to learn category-level self-similarities, which combines the advantages of both worlds by connecting neural point radiance fields with a category-level self-similarity representation. Our contribution is twofold. (1) We design the first neural point representation on a category level by utilizing the concept of coherent point clouds. The resulting neural point radiance fields store a high level of detail for locally supported object regions. (2) We learn how information is shared between neural points in an unconstrained and unsupervised fashion, which allows us to derive unobserved regions of an object from the given observations during the reconstruction process. We show that SimNP outperforms previous methods in reconstructing symmetric unseen object regions, surpassing methods that build upon category-level or pixel-aligned radiance fields, while also providing semantic correspondences between instances.
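The information sharing between neural points described in contribution (2) can be illustrated with a minimal sketch. This is not the authors' implementation; the point count, embedding size, and the use of plain dot-product attention over per-point embeddings are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

N, D = 8, 16                             # hypothetical: 8 neural points, 16-dim embeddings
embeddings = rng.normal(size=(N, D))     # stand-in per-point latent codes

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Unconstrained similarity: every point attends to every other point, so
# information observed at one point can flow to, e.g., a symmetric unseen point.
scores = embeddings @ embeddings.T / np.sqrt(D)   # (N, N) self-similarity logits
attention = softmax(scores, axis=-1)              # each row is a distribution over points

shared = attention @ embeddings                   # (N, D) features after information sharing
```

Because the similarity pattern is learned rather than hand-constrained, symmetries emerge from data instead of being imposed, which matches the "unconstrained and unsupervised" phrasing of the abstract.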
Related papers
- Category-level Neural Field for Reconstruction of Partially Observed Objects in Indoor Environment [24.880495520422006]
We introduce category-level neural fields that learn meaningful common 3D information among objects belonging to the same category present in the scene.
Our key idea is to subcategorize objects based on their observed shape for better training of the category-level model.
Experiments on both simulation and real-world datasets demonstrate that our method improves the reconstruction of unobserved parts for several categories.
arXiv Detail & Related papers (2024-06-12T13:09:59Z)
- Canonical Fields: Self-Supervised Learning of Pose-Canonicalized Neural Fields [9.401281193955583]
CaFi-Net is a self-supervised method to canonicalize the 3D pose of instances from an object category represented as neural fields.
During inference, our method takes pre-trained neural radiance fields of novel object instances at arbitrary 3D pose.
Experiments on a new dataset of 1300 NeRF models across 13 object categories show that our method matches or exceeds the performance of 3D point cloud-based methods.
arXiv Detail & Related papers (2022-12-05T18:56:36Z)
- Sim2Real Object-Centric Keypoint Detection and Description [40.58367357980036]
Keypoint detection and description play a central role in computer vision.
We propose the object-centric formulation, which requires further identifying which object each interest point belongs to.
We develop a sim2real contrastive learning mechanism that can generalize the model trained in simulation to real-world applications.
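The sim2real contrastive mechanism summarized above can be sketched with a standard InfoNCE-style objective. The pairing scheme, temperature, and descriptor dimensions below are assumptions, not details from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def info_nce(sim_feats, real_feats, temperature=0.1):
    """Hypothetical InfoNCE loss pairing simulated and real keypoint descriptors.

    Row i of each matrix is assumed to describe the same keypoint, so pair
    (i, i) is the positive and all other rows serve as negatives.
    """
    # Cosine similarity between every sim/real descriptor pair.
    a = sim_feats / np.linalg.norm(sim_feats, axis=1, keepdims=True)
    b = real_feats / np.linalg.norm(real_feats, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature                  # (N, N)
    # Cross-entropy with the diagonal as the target class (numerically stable).
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

loss = info_nce(rng.normal(size=(5, 32)), rng.normal(size=(5, 32)))
```

Minimizing such a loss pulls matched sim/real descriptors together while pushing mismatched ones apart, which is the general idea behind generalizing from simulation to real images.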
arXiv Detail & Related papers (2022-02-01T15:00:20Z) - A singular Riemannian geometry approach to Deep Neural Networks II.
Reconstruction of 1-D equivalence classes [78.120734120667]
We build the preimage of a point in the output manifold in the input space.
We focus for simplicity on the case of neural networks maps from n-dimensional real spaces to (n - 1)-dimensional real spaces.
arXiv Detail & Related papers (2021-12-17T11:47:45Z) - Neural Points: Point Cloud Representation with Neural Fields [31.167929128314096]
We propose Neural Points, a novel point cloud representation.
Each point in Neural Points represents a local continuous geometric shape via neural fields.
We show that Neural Points has powerful representation ability and demonstrate excellent robustness and generalization ability.
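The idea that each point carries a local continuous field can be sketched as follows. The tiny random MLP, hidden width, and query scale are illustrative assumptions standing in for the learned local fields of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def make_local_field(hidden=8):
    """A tiny random ReLU MLP standing in for one point's learned local field."""
    W1 = rng.normal(size=(3, hidden))
    b1 = rng.normal(size=hidden)
    W2 = rng.normal(size=(hidden, 1))
    return lambda offsets: np.maximum(offsets @ W1 + b1, 0.0) @ W2

# One hypothetical "neural point": a 3D center plus its local field.
center = np.array([0.2, -0.1, 0.5])
field = make_local_field()

# The field is continuous: it can be queried at arbitrary offsets near the
# center, unlike a discrete point sample.
queries = rng.normal(scale=0.05, size=(4, 3))    # positions near the center
values = field(queries - center)                 # (4, 1) local field values
```

A full point cloud would then be a set of such (center, field) pairs, each responsible for a small neighborhood of the surface.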
arXiv Detail & Related papers (2021-12-08T07:34:17Z) - PC-RGNN: Point Cloud Completion and Graph Neural Network for 3D Object
Detection [57.49788100647103]
LiDAR-based 3D object detection is an important task for autonomous driving.
Current approaches suffer from sparse and partial point clouds of distant and occluded objects.
In this paper, we propose a novel two-stage approach, namely PC-RGNN, dealing with such challenges by two specific solutions.
arXiv Detail & Related papers (2020-12-18T18:06:43Z) - Fine-Grained 3D Shape Classification with Hierarchical Part-View
Attentions [70.0171362989609]
We propose a novel fine-grained 3D shape classification method named FG3D-Net to capture the fine-grained local details of 3D shapes from multiple rendered views.
Our results under the fine-grained 3D shape dataset show that our method outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-05-26T06:53:19Z) - Interpretable and Accurate Fine-grained Recognition via Region Grouping [14.28113520947247]
We present an interpretable deep model for fine-grained visual recognition.
At the core of our method lies the integration of region-based part discovery and attribution within a deep neural network.
Our results compare favorably to state-of-the-art methods on classification tasks.
arXiv Detail & Related papers (2020-05-21T01:18:26Z) - Ventral-Dorsal Neural Networks: Object Detection via Selective Attention [51.79577908317031]
Inspired by the structure of the human visual system, we propose a new framework called Ventral-Dorsal Networks (VDNets), which integrates a "Ventral Network" and a "Dorsal Network".
Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches.
arXiv Detail & Related papers (2020-05-15T23:57:36Z) - Global-Local Bidirectional Reasoning for Unsupervised Representation
Learning of 3D Point Clouds [109.0016923028653]
We learn point cloud representation by bidirectional reasoning between the local structures and the global shape without human supervision.
We show that our unsupervised model surpasses the state-of-the-art supervised methods on both synthetic and real-world 3D object classification datasets.
arXiv Detail & Related papers (2020-03-29T08:26:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.