Deep Learning on Implicit Neural Representations of Shapes
- URL: http://arxiv.org/abs/2302.05438v1
- Date: Fri, 10 Feb 2023 18:55:49 GMT
- Title: Deep Learning on Implicit Neural Representations of Shapes
- Authors: Luca De Luigi, Adriano Cardace, Riccardo Spezialetti, Pierluigi Zama
Ramirez, Samuele Salti, Luigi Di Stefano
- Abstract summary: Implicit Neural Representations (INRs) have emerged as a powerful tool to continuously encode a variety of different signals.
In this paper, we propose inr2vec, a framework that can compute a compact latent representation for an input INR in a single inference pass.
We verify that inr2vec can effectively embed the 3D shapes represented by the input INRs and show how the produced embeddings can be fed into deep learning pipelines.
- Score: 14.596732196310978
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Implicit Neural Representations (INRs) have emerged in the last few
years as a powerful tool to continuously encode a variety of signals, such as
images, videos, audio and 3D shapes. When applied to 3D shapes, INRs make it
possible to overcome the fragmentation and shortcomings of the popular discrete
representations used so far. Yet, since INRs consist of neural networks, it is
not clear whether and how they can be fed into deep learning pipelines aimed at
solving a downstream task. In this paper, we put forward this research problem
and propose inr2vec, a framework that can compute a compact latent
representation for an input INR in a single inference pass. We verify that
inr2vec can effectively embed the 3D shapes represented by the input INRs and
show how the produced embeddings can be fed into deep learning pipelines to
solve several tasks by processing INRs exclusively.
Related papers
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- Deep Learning on Object-centric 3D Neural Fields [19.781070751341154]
We introduce nf2vec, a framework capable of generating a compact latent representation for an input NF in a single inference pass.
We demonstrate that nf2vec effectively embeds 3D objects represented by the input NFs and showcase how the resulting embeddings can be employed in deep learning pipelines.
arXiv Detail & Related papers (2023-12-20T18:56:45Z)
- Registering Neural Radiance Fields as 3D Density Images [55.64859832225061]
We propose to use universal pre-trained neural networks that can be trained and tested on different scenes.
We demonstrate that our method, as a global approach, can effectively register NeRF models.
arXiv Detail & Related papers (2023-05-22T09:08:46Z)
- Revisiting Implicit Neural Representations in Low-Level Vision [20.3578908524788]
Implicit Neural Representation (INR) has emerged in computer vision in recent years.
We are interested in its effectiveness in low-level vision problems such as image restoration.
In this work, we revisit INR and investigate its application in low-level image restoration tasks.
arXiv Detail & Related papers (2023-04-20T12:19:27Z)
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which substantially increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer but more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- Signal Processing for Implicit Neural Representations [80.38097216996164]
Implicit Neural Representations (INRs) encode continuous multimedia data via multi-layer perceptrons.
Existing works manipulate such continuous representations by processing their discretized instances.
We propose an implicit neural signal processing network, dubbed INSP-Net, via differential operators on INR.
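The entry above hinges on applying differential operators directly to the continuous function encoded by an INR, rather than to a discretized sampling. A minimal illustration with a hand-built one-layer sine network in place of a learned INR (all weights here are made up for the example):

```python
import math

# A 1D "INR" f(x) = sum_i a_i * sin(w_i * x + b_i). Its derivative is
# available in closed form, so the operator d/dx acts on the continuous
# signal itself; INSP-Net builds on this idea with learned networks.
w = [1.0, 2.0]
b = [0.0, 0.5]
a = [0.3, -0.7]

def f(x):
    return sum(ai * math.sin(wi * x + bi) for ai, wi, bi in zip(a, w, b))

def df(x):
    # Exact derivative via the chain rule.
    return sum(ai * wi * math.cos(wi * x + bi) for ai, wi, bi in zip(a, w, b))

# Sanity check against a finite difference on discretized samples.
x0, h = 0.4, 1e-6
fd = (f(x0 + h) - f(x0 - h)) / (2 * h)
assert abs(df(x0) - fd) < 1e-5
```

In a deep-learning framework the same closed-form derivative would be obtained with automatic differentiation instead of manual chain-rule code.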
arXiv Detail & Related papers (2022-10-17T06:29:07Z)
- Sobolev Training for Implicit Neural Representations with Approximated Image Derivatives [12.71676484494428]
Implicit Neural Representations (INRs) parameterized by neural networks have emerged as a powerful tool to represent different kinds of signals.
We propose a training paradigm for INRs whose target output is image pixels, to encode image derivatives in addition to image values in the neural network.
We show how the training paradigm can be leveraged to solve typical INR problems, i.e., image regression and inverse rendering.
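A hedged sketch of what such a Sobolev-style objective might look like: a data term on pixel values plus a weighted term on approximated image derivatives. The function name, the weighting lam, and the finite-difference scheme are illustrative assumptions, not the paper's exact formulation.

```python
# Loss over a batch of predicted vs. target pixel values, plus a term on
# predicted vs. target derivatives (both as plain Python lists).
def sobolev_loss(pred, target, pred_dx, target_dx, lam=0.1):
    value_term = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    deriv_term = sum((p - t) ** 2 for p, t in zip(pred_dx, target_dx)) / len(pred_dx)
    return value_term + lam * deriv_term

# Target derivatives come from finite differences on the training image,
# since ground-truth image derivatives are not directly available.
img = [0.0, 0.2, 0.6, 0.7]
target_dx = [img[i + 1] - img[i] for i in range(len(img) - 1)]
loss = sobolev_loss([0.1, 0.2, 0.5, 0.7], img, [0.15, 0.4, 0.1], target_dx)
print(round(loss, 4))  # → 0.0051
```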
arXiv Detail & Related papers (2022-07-21T10:12:41Z)
- Neural Implicit Dictionary via Mixture-of-Expert Training [111.08941206369508]
We present a generic INR framework that achieves both data and training efficiency by learning a Neural Implicit Dictionary (NID).
Our NID assembles a group of coordinate-based implicit networks which are tuned to span the desired function space.
Our experiments show that NID can reconstruct 2D images or 3D scenes up to 2 orders of magnitude faster, with up to 98% less input data.
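The dictionary idea can be illustrated with fixed sinusoidal basis functions standing in for learned sub-networks (an assumption made purely for brevity): a new signal is represented only by its coefficients over a shared dictionary.

```python
import math

# The shared "dictionary": coordinate-based basis functions. In NID these
# are learned sub-networks; fixed sinusoids keep the sketch self-contained.
basis = [lambda x, k=k: math.sin(k * x) for k in (1, 2, 3)]

def reconstruct(coeffs, x):
    # Evaluating a signal = combining the shared basis with its coefficients.
    return sum(c * b(x) for c, b in zip(coeffs, basis))

# A new signal only needs 3 numbers; the dictionary is amortized across signals.
y = reconstruct([0.5, 0.0, -0.25], 1.0)
```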
arXiv Detail & Related papers (2022-07-08T05:07:19Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained on multi-channel data from the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
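For context, a minimal sketch of the classical delay-and-sum estimator that learned DoA methods are usually compared against, on a uniform linear array with half-wavelength spacing; the array size, grid, and noise-free snapshot are arbitrary choices for the example.

```python
import cmath
import math

def steering(theta_deg, n_sensors):
    # Array manifold vector for a uniform linear array with d/lambda = 0.5,
    # so the inter-sensor phase shift is pi * sin(theta).
    phase = math.pi * math.sin(math.radians(theta_deg))
    return [cmath.exp(-1j * phase * m) for m in range(n_sensors)]

def doa_estimate(snapshot, grid):
    # Delay-and-sum: pick the grid angle whose steering vector best
    # matches the received snapshot.
    def power(theta):
        a = steering(theta, len(snapshot))
        return abs(sum(ai.conjugate() * si for ai, si in zip(a, snapshot)))
    return max(grid, key=power)

true_doa = 20.0
snapshot = steering(true_doa, 8)            # noise-free single-source snapshot
grid = [g / 2 for g in range(-180, 181)]    # -90..90 degrees in 0.5 deg steps
print(doa_estimate(snapshot, grid))         # → 20.0 in the noise-free case
```

In the low-SNR regime targeted by the paper, this matched-filter scan degrades, which is precisely where a trained CNN is claimed to help.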
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.