TT-NF: Tensor Train Neural Fields
- URL: http://arxiv.org/abs/2209.15529v1
- Date: Fri, 30 Sep 2022 15:17:39 GMT
- Title: TT-NF: Tensor Train Neural Fields
- Authors: Anton Obukhov, Mikhail Usvyatsov, Christos Sakaridis, Konrad
Schindler, Luc Van Gool
- Abstract summary: We introduce a novel low-rank representation termed Tensor Train Neural Fields (TT-NF) for learning neural fields on dense regular grids.
We analyze the effect of low-rank compression on the downstream task quality metrics.
- Score: 88.49847274083365
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Learning neural fields has been an active topic in deep learning research,
focusing, among other issues, on finding more compact and easy-to-fit
representations. In this paper, we introduce a novel low-rank representation
termed Tensor Train Neural Fields (TT-NF) for learning neural fields on dense
regular grids and efficient methods for sampling from them. Our representation
is a TT parameterization of the neural field, trained with backpropagation to
minimize a non-convex objective. We analyze the effect of low-rank compression
on the downstream task quality metrics in two settings. First, we demonstrate
the efficiency of our method in a sandbox task of tensor denoising, which
admits comparison with SVD-based schemes designed to minimize reconstruction
error. Furthermore, we apply the proposed approach to Neural Radiance Fields,
where the low-rank structure of the field corresponding to the best quality can
be discovered only through learning.
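As a rough illustration of the core idea, the sketch below parameterizes a small 3D grid field by tensor-train (TT) cores and fits them with backpropagation on a toy denoising objective. Shapes, ranks, and the objective are illustrative assumptions, not the authors' implementation.

```python
import torch

# Minimal tensor-train (TT) parameterization of a dense (n, n, n) grid
# field, trained with backpropagation. n, r, and the target are toy values.
n, r = 32, 8  # grid side length and TT-rank (assumed)

# TT cores; boundary ranks are 1.
cores = [
    torch.randn(1, n, r, requires_grad=True),
    torch.randn(r, n, r, requires_grad=True),
    torch.randn(r, n, 1, requires_grad=True),
]

def tt_sample(cores, idx):
    """Gather field values at integer grid coordinates idx of shape (B, 3).

    Each value is a product of one matrix slice per core, so a batch of
    samples costs O(B * d * r^2) without materializing the full grid.
    """
    out = cores[0][:, idx[:, 0], :].permute(1, 0, 2)         # (B, 1, r)
    for d in range(1, len(cores)):
        slices = cores[d][:, idx[:, d], :].permute(1, 0, 2)  # (B, r, r')
        out = torch.bmm(out, slices)
    return out.squeeze(-1).squeeze(-1)                       # (B,)

# Toy objective: fit a noisy reference grid from random minibatches of voxels.
target = torch.randn(n, n, n)
opt = torch.optim.Adam(cores, lr=1e-2)
for step in range(200):
    idx = torch.randint(0, n, (4096, 3))
    pred = tt_sample(cores, idx)
    loss = (pred - target[idx[:, 0], idx[:, 1], idx[:, 2]]).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```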
Related papers
- Reg-NF: Efficient Registration of Implicit Surfaces within Neural Fields [6.949522577812908]
We present Reg-NF, a neural-fields-based registration method that optimises the relative 6-DoF transformation between two arbitrary neural fields.
Key components of Reg-NF include a bidirectional registration loss, multi-view surface sampling, and utilisation of volumetric signed distance functions.
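A minimal sketch of the optimisation loop described above: a relative 6-DoF transform fitted with a bidirectional SDF-agreement loss. The SDFs are hypothetical analytic stand-ins (spheres); Reg-NF registers trained neural fields and uses multi-view surface sampling rather than random points.

```python
import torch

def sdf_a(x):  # unit sphere at the origin (stand-in for a neural SDF)
    return x.norm(dim=-1) - 1.0

def sdf_b(x):  # same sphere shifted by the ground-truth offset (0.3, 0, 0)
    return (x - torch.tensor([0.3, 0.0, 0.0])).norm(dim=-1) - 1.0

w = torch.zeros(3, requires_grad=True)  # axis-angle rotation
t = torch.zeros(3, requires_grad=True)  # translation

def rotation(w):
    """Rotation matrix exp([w]_x) from axis-angle parameters."""
    W = torch.zeros(3, 3)
    W[0, 1], W[0, 2], W[1, 2] = -w[2], w[1], -w[0]
    W = W - W.T  # antisymmetrize to get the full skew matrix
    return torch.matrix_exp(W)

opt = torch.optim.Adam([w, t], lr=1e-2)
for step in range(500):
    R = rotation(w)
    x = torch.randn(1024, 3)
    forward = (sdf_b(x @ R.T + t) - sdf_a(x)).pow(2).mean()
    y = torch.randn(1024, 3)
    backward = (sdf_a((y - t) @ R) - sdf_b(y)).pow(2).mean()
    loss = forward + backward  # bidirectional registration loss
    opt.zero_grad(); loss.backward(); opt.step()
# After convergence, t approaches the true offset and R stays near identity.
```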
arXiv Detail & Related papers (2024-02-15T05:31:03Z)
- Sparse Multitask Learning for Efficient Neural Representation of Motor Imagery and Execution [30.186917337606477]
We introduce a sparse multitask learning framework for motor imagery (MI) and motor execution (ME) tasks.
Given a dual-task CNN model for MI-ME classification, we apply a saliency-based sparsification approach to prune superfluous connections.
Our results indicate that this tailored sparsity can mitigate the overfitting problem and improve test performance with a small amount of data.
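A generic sketch of the saliency-based sparsification mentioned above: score each connection and zero out the least salient fraction. The |weight * gradient| score and layer sizes are common-practice assumptions, not necessarily the paper's exact criterion or dual-task CNN.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))

# One batch of (hypothetical) data to obtain gradients for the scores.
x, y = torch.randn(32, 64), torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()

prune_fraction = 0.5
for module in model.modules():
    if isinstance(module, nn.Linear):
        saliency = (module.weight * module.weight.grad).abs()
        k = int(prune_fraction * saliency.numel())
        threshold = saliency.flatten().kthvalue(k).values
        mask = (saliency > threshold).float()
        with torch.no_grad():
            module.weight.mul_(mask)  # zero out superfluous connections
# Training then continues on the sparsified network, with the mask
# re-applied each step to keep pruned weights at zero.
```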
arXiv Detail & Related papers (2023-12-10T09:06:16Z)
- Hopfield-Enhanced Deep Neural Networks for Artifact-Resilient Brain State Decoding [0.0]
We propose a two-stage computational framework combining Hopfield Networks for artifact data preprocessing with Convolutional Neural Networks (CNNs) for classification of brain states in rat neural recordings under different levels of anesthesia.
Performance across various levels of data compression and noise intensities showed that our framework can effectively mitigate artifacts, allowing the model to reach parity with the clean-data CNN at lower noise levels.
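A toy sketch of the first stage described above: a classical Hopfield network used as a denoising front end, where corrupted inputs are cleaned by attractor dynamics before classification. Patterns and noise levels are stand-ins for the neural recordings in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(3, 100))  # stored prototypes

# Hebbian weights with zeroed self-connections.
W = patterns.T @ patterns / patterns.shape[1]
np.fill_diagonal(W, 0.0)

def recall(x, steps=20):
    """Iterate the attractor dynamics until the state settles."""
    for _ in range(steps):
        x = np.sign(W @ x)
    return x

# Corrupt a stored pattern with artifacts (random sign flips), then recall.
noisy = patterns[0].copy()
flips = rng.choice(100, size=15, replace=False)
noisy[flips] *= -1
cleaned = recall(noisy)
print("overlap with original:", cleaned @ patterns[0] / 100)  # near 1.0
```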
arXiv Detail & Related papers (2023-11-06T15:08:13Z)
- Neural Modulation Fields for Conditional Cone Beam Neural Tomography [18.721488634071193]
We propose a novel conditioning method where local modulations are modeled per patient as a field over the input domain through a Neural Modulation Field (NMF).
The resulting Conditional Cone Beam Neural Tomography (CondCBNT) shows improved performance for both high and low numbers of available projections on noise-free and noisy data.
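A minimal sketch of this conditioning idea: a shared base network whose hidden features are modulated by a small per-patient modulation field that outputs local scale/shift values over the input domain. The FiLM-style modulation and layer sizes are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class ModulatedField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.base_in = nn.Linear(2, hidden)   # shared across patients
        self.base_out = nn.Linear(hidden, 1)
        self.nmf = nn.Sequential(             # per-patient modulation field
            nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2 * hidden)
        )

    def forward(self, coords):
        h = torch.relu(self.base_in(coords))
        scale, shift = self.nmf(coords).chunk(2, dim=-1)
        h = h * (1 + scale) + shift           # local modulation of features
        return self.base_out(h)

model = ModulatedField()
pred = model(torch.rand(4096, 2))  # densities at sampled 2D coordinates
# For a new patient, only model.nmf would be (re-)fit, keeping the shared
# base network fixed.
```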
arXiv Detail & Related papers (2023-07-17T09:41:01Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Training neural network ensembles via trajectory sampling [0.0]
In machine learning, there is renewed interest in neural network ensembles (NNEs).
We show how to define and train an NNE using techniques from the study of rare trajectories in stochastic systems.
We demonstrate the viability of this technique on a range of simple supervised learning tasks.
arXiv Detail & Related papers (2022-09-22T15:59:33Z)
- Random Features for the Neural Tangent Kernel [57.132634274795066]
We propose an efficient feature map construction for the Neural Tangent Kernel (NTK) of a fully-connected ReLU network.
We show that the dimension of the resulting features is much smaller than that of other baseline feature map constructions achieving comparable error bounds, both in theory and in practice.
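For context, the sketch below shows the naive exact feature map of the empirical NTK: the parameter gradient of the network output, whose dimension equals the parameter count. This is the baseline that efficient constructions such as the paper's aim to shrink; it is not the proposed method.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 1))
params = list(net.parameters())

def ntk_features(x):
    """phi(x): flattened gradient of the scalar output w.r.t. parameters."""
    grads = torch.autograd.grad(net(x.unsqueeze(0)).squeeze(), params)
    return torch.cat([g.flatten() for g in grads])

x1, x2 = torch.randn(8), torch.randn(8)
# The inner product of the features is the empirical NTK entry k(x1, x2).
k12 = ntk_features(x1) @ ntk_features(x2)
print(k12.item())
```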
arXiv Detail & Related papers (2021-04-03T09:08:12Z)
- Attribute-Guided Adversarial Training for Robustness to Natural Perturbations [64.35805267250682]
We propose an adversarial training approach which learns to generate new samples so as to maximize the classifier's exposure to the attribute space.
Our approach enables deep neural networks to be robust against a wide range of naturally occurring perturbations.
arXiv Detail & Related papers (2020-12-03T10:17:30Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (ResNet) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.