Implicit Neural Representation of Tileable Material Textures
- URL: http://arxiv.org/abs/2402.02208v1
- Date: Sat, 3 Feb 2024 16:44:25 GMT
- Title: Implicit Neural Representation of Tileable Material Textures
- Authors: Hallison Paz, Tiago Novello, Luiz Velho
- Abstract summary: We explore sinusoidal neural networks to represent periodic tileable textures.
We prove that the compositions of sinusoidal layers generate only integer frequencies with period $P$.
Our proposed neural implicit representation is compact and enables efficient reconstruction of high-resolution textures.
- Score: 1.1203075575217447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore sinusoidal neural networks to represent periodic tileable
textures. Our approach leverages the Fourier series by initializing the first
layer of a sinusoidal neural network with integer frequencies with a period
$P$. We prove that the compositions of sinusoidal layers generate only integer
frequencies with period $P$. As a result, our network learns a continuous
representation of a periodic pattern, enabling direct evaluation at any spatial
coordinate without the need for interpolation. To enforce the resulting pattern
to be tileable, we add a regularization term, based on the Poisson equation, to
the loss function. Our proposed neural implicit representation is compact and
enables efficient reconstruction of high-resolution textures with high visual
fidelity and sharpness across multiple levels of detail. We present
applications of our approach in the domain of anti-aliased surfaces.
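The construction described in the abstract can be sketched in a few lines. This is a hedged illustration, not the authors' code: the 1D setting, layer sizes, and random weights are illustrative assumptions. What it demonstrates is the stated property: if the first layer uses only integer multiples of $2\pi/P$, every subsequent sinusoidal layer, and hence the network output, remains $P$-periodic, so the texture tiles by construction.

```python
import numpy as np

# Toy 1D sketch of a sinusoidal network whose first layer is fixed to
# integer frequencies with period P, as the abstract describes.
P = 1.0                                          # tile period (illustrative)
rng = np.random.default_rng(0)

# First layer: integer multiples of 2*pi/P, so each unit is P-periodic.
K = 8
freqs = (2 * np.pi / P) * np.arange(1, K + 1)

# Hidden sinusoidal layer and linear readout with random (untrained)
# weights -- composing sines of P-periodic signals stays P-periodic.
W1 = rng.normal(size=(16, K))
b1 = rng.normal(size=16)
W2 = rng.normal(size=16)

def texture_1d(x):
    """Evaluate the toy periodic network at any spatial coordinate x."""
    h0 = np.sin(np.outer(np.atleast_1d(x), freqs))   # (N, K), P-periodic
    h1 = np.sin(h0 @ W1.T + b1)                      # still P-periodic
    return h1 @ W2

# Periodicity check: shifting the input by P leaves the output unchanged,
# so the pattern can be evaluated continuously at any coordinate.
x = np.linspace(0.0, 1.0, 5)
assert np.allclose(texture_1d(x), texture_1d(x + P))
```

Training such a network on texture samples (and adding the Poisson-based regularization term from the paper) is out of scope for this sketch; the point is only that tileability holds for any choice of the later-layer weights.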
Related papers
- SpaceMesh: A Continuous Representation for Learning Manifold Surface Meshes [61.110517195874074]
We present a scheme to directly generate manifold, polygonal meshes of complex connectivity as the output of a neural network.
Our key innovation is to define a continuous latent connectivity space at each mesh, which implies the discrete mesh.
In applications, this approach not only yields high-quality outputs from generative models, but also enables directly learning challenging geometry processing tasks such as mesh repair.
arXiv Detail & Related papers (2024-09-30T17:59:03Z)
- Adaptive Shells for Efficient Neural Radiance Field Rendering [92.18962730460842]
We propose a neural radiance formulation that smoothly transitions between volumetric- and surface-based rendering.
Our approach enables efficient rendering at very high fidelity.
We also demonstrate that the extracted envelope enables downstream applications such as animation and simulation.
arXiv Detail & Related papers (2023-11-16T18:58:55Z)
- RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z)
- Explicit Neural Surfaces: Learning Continuous Geometry With Deformation Fields [33.38609930708073]
We introduce Explicit Neural Surfaces (ENS), an efficient smooth surface representation that encodes topology with a deformation field from a known base domain.
Compared to implicit surfaces, ENS trains faster and has several orders of magnitude faster inference times.
arXiv Detail & Related papers (2023-06-05T15:24:33Z)
- A Scalable Walsh-Hadamard Regularizer to Overcome the Low-degree Spectral Bias of Neural Networks [79.28094304325116]
Despite the capacity of neural nets to learn arbitrary functions, models trained through gradient descent often exhibit a bias towards "simpler" functions.
We show how this spectral bias towards low-degree frequencies can in fact hurt the neural network's generalization on real-world datasets.
We propose a new scalable functional regularization scheme that aids the neural network to learn higher degree frequencies.
arXiv Detail & Related papers (2023-05-16T20:06:01Z)
- Neural Fourier Filter Bank [18.52741992605852]
We present a novel method to provide efficient and highly detailed reconstructions.
Inspired by wavelets, we learn a neural field that decomposes the signal both spatially and frequency-wise.
arXiv Detail & Related papers (2022-12-04T03:45:08Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields [8.214695794896127]
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations.
However, rendering images with this new paradigm is slow because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray.
We propose a novel dual-network architecture that takes an orthogonal direction by learning how to best reduce the number of required sample points.
arXiv Detail & Related papers (2022-07-21T05:59:13Z)
- Deep Neural Networks are Surprisingly Reversible: A Baseline for Zero-Shot Inversion [90.65667807498086]
This paper presents a zero-shot direct model inversion framework that recovers the input to the trained model given only the internal representation.
We empirically show that modern classification models on ImageNet can, surprisingly, be inverted, allowing an approximate recovery of the original 224x224px images from a representation after more than 20 layers.
arXiv Detail & Related papers (2021-07-13T18:01:43Z)
- Neural Splines: Fitting 3D Surfaces with Infinitely-Wide Neural Networks [61.07202852469595]
We present Neural Splines, a technique for 3D surface reconstruction that is based on random feature kernels arising from infinitely-wide shallow ReLU networks.
Our method achieves state-of-the-art results, outperforming recent neural network-based techniques and widely used Poisson Surface Reconstruction.
arXiv Detail & Related papers (2020-06-24T14:54:59Z)
- Vanishing Point Detection with Direct and Transposed Fast Hough Transform inside the neural network [0.0]
In this paper, we suggest a new neural network architecture for vanishing point detection in images.
The key element is the use of the direct and transposed Fast Hough Transforms separated by convolutional layer blocks with standard activation functions.
arXiv Detail & Related papers (2020-02-04T09:10:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.