Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation
- URL: http://arxiv.org/abs/2105.14250v1
- Date: Sat, 29 May 2021 08:39:57 GMT
- Title: Cherry-Picking Gradients: Learning Low-Rank Embeddings of Visual Data via Differentiable Cross-Approximation
- Authors: Mikhail Usvyatsov, Anastasia Makarova, Rafael Ballester-Ripoll, Maxim Rakhuba, Andreas Krause, Konrad Schindler
- Abstract summary: We propose an end-to-end trainable framework that processes large-scale visual data tensors by looking at a fraction of their entries only.
The proposed approach is particularly useful for large-scale multidimensional grid data, and for tasks that require context over a large receptive field.
- Score: 53.95297550117153
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose an end-to-end trainable framework that processes large-scale
visual data tensors by looking \emph{at a fraction of their entries only}. Our
method combines a neural network encoder with a \emph{tensor train
decomposition} to learn a low-rank latent encoding, coupled with
cross-approximation (CA) to learn the representation through a subset of the
original samples. CA is an adaptive sampling algorithm that is native to tensor
decompositions and avoids working with the full high-resolution data
explicitly. Instead, it actively selects local representative samples that we
fetch out-of-core and on-demand. The required number of samples grows only
logarithmically with the size of the input. Our implicit representation of
the tensor in the network enables processing large grids that would
otherwise be intractable in their uncompressed form. The proposed approach is
particularly useful for large-scale multidimensional grid data (e.g., 3D
tomography), and for tasks that require context over a large receptive field
(e.g., predicting the medical condition of entire organs). The code will be
available at https://github.com/aelphy/c-pic
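Cross-approximation is easiest to see in the matrix case. Below is a minimal NumPy sketch of greedy, ACA-style cross-approximation, assuming only an entry-wise fetch(i, j) oracle; the paper couples the tensor-train analogue of this sampling with a trainable encoder, so the pivoting scheme here is purely illustrative:

```python
import numpy as np

def cross_approx(fetch, n_rows, n_cols, rank):
    """Greedy (ACA-style) cross-approximation A ~ sum_k u_k v_k^T.

    Only `fetch(i, j)` touches the data, so entries are read on demand
    and the full matrix is never materialized.
    """
    us, vs = [], []
    i = 0
    for _ in range(rank):
        row = np.array([fetch(i, j) for j in range(n_cols)], dtype=float)
        for u, v in zip(us, vs):
            row -= u[i] * v                    # residual of pivot row
        j = int(np.argmax(np.abs(row)))        # pivot column
        if abs(row[j]) < 1e-12:
            break                              # row already explained
        col = np.array([fetch(k, j) for k in range(n_rows)], dtype=float)
        for u, v in zip(us, vs):
            col -= u * v[j]                    # residual of pivot column
        us.append(col / row[j])
        vs.append(row)
        nxt = np.abs(us[-1])
        nxt[i] = 0.0                           # pick the next pivot row
        i = int(np.argmax(nxt))
    if not us:
        return np.zeros((n_rows, n_cols))
    return np.array(us).T @ np.array(vs)       # dense only for checking

# exact recovery of a rank-2 matrix A[i, j] = i + j from 2 rows + 2 columns
A = np.add.outer(np.arange(6.0), np.arange(5.0))
A_hat = cross_approx(lambda i, j: A[i, j], 6, 5, rank=2)
print(np.allclose(A, A_hat))  # True
```

Note that only a few full rows and columns are ever read through the oracle, which is what allows the data itself to stay out-of-core.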
Related papers
- Sampling weights of deep neural networks [1.2370077627846041]
We introduce a probability distribution, combined with an efficient sampling algorithm, for weights and biases of fully-connected neural networks.
In a supervised learning context, no iterative optimization or gradient computations of internal network parameters are needed.
We prove that sampled networks are universal approximators.
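As a rough illustration of such a training-free recipe, the NumPy sketch below samples hidden weights and biases from a plain Gaussian (the paper constructs a specific data-dependent distribution) and fits only the linear readout in closed form, with no gradient computations on internal parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy regression problem: y = sin(3x)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])

# hidden weights and biases are sampled, never trained
# (a generic Gaussian here; the paper derives a better distribution)
W = rng.normal(0, 2, size=(1, 64))
b = rng.normal(0, 2, size=64)
H = np.tanh(X @ W + b)                  # fixed random features

# only the linear readout is fit, in closed form -- no gradients
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
print(np.mean((H @ coef - y) ** 2))     # small training MSE
```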
arXiv Detail & Related papers (2023-06-29T10:13:36Z)
- A Self-Encoder for Learning Nearest Neighbors [5.297261090056809]
The self-encoder learns to distribute the data samples in the embedding space so that they are linearly separable from one another.
Unlike regular nearest neighbors, the predictions resulting from this encoding of data are invariant to any scaling of features.
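Reading the summary literally, one toy realization is to train an encoder to classify every training sample as its own class, so that samples become linearly separable from one another; the PyTorch sketch below makes that concrete (the labels-as-indices construction is an assumption for illustration, not necessarily the paper's exact objective):

```python
import torch

torch.manual_seed(0)
X = torch.randn(10, 2)               # 10 samples, each its own class
labels = torch.arange(10)            # class label = sample index

enc = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(),
                          torch.nn.Linear(32, 10))
opt = torch.optim.Adam(enc.parameters(), lr=1e-2)
for _ in range(300):                 # separate every sample from the rest
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(enc(X), labels)
    loss.backward()
    opt.step()

# the "nearest neighbor" of a query is the training sample whose
# class score is highest
q = X[3] + 0.05 * torch.randn(2)
print(enc(q).argmax().item())        # typically 3
```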
arXiv Detail & Related papers (2023-06-25T14:30:31Z)
- Sampling Enclosing Subgraphs for Link Prediction [2.1270496914042996]
Link prediction is a fundamental problem for graph-structured data computation.
This paper presents a scalable solution, called ScaLed, which utilizes sparse enclosing subgraphs to make predictions.
By leveraging the smaller sampled subgraph, ScaLed can scale to larger graphs with much less overhead while maintaining high accuracy.
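A sketch of the sampling step, assuming short random walks from both endpoints of the candidate link (the walk lengths, counts, and the plain-dict graph representation are illustrative, not ScaLed's actual implementation):

```python
import random

def sample_enclosing_subgraph(adj, u, v, walk_len=3, n_walks=5, seed=0):
    """Sparse enclosing subgraph for a candidate link (u, v).

    Instead of the full k-hop neighborhood, keep only the nodes hit by a
    few short random walks started from both endpoints.
    """
    rnd = random.Random(seed)
    nodes = {u, v}
    for start in (u, v):
        for _ in range(n_walks):
            cur = start
            for _ in range(walk_len):
                cur = rnd.choice(adj[cur])   # one random-walk step
                nodes.add(cur)
    # induced subgraph on the sampled node set
    return {n: [m for m in adj[n] if m in nodes] for n in nodes}

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 4], 3: [1, 4], 4: [2, 3]}
print(sample_enclosing_subgraph(adj, 0, 4))
```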
arXiv Detail & Related papers (2022-06-23T22:48:44Z)
- Large-Margin Representation Learning for Texture Classification [67.94823375350433]
This paper presents a novel approach combining convolutional layers (CLs) and large-margin metric learning for training supervised models on small datasets for texture classification.
The experimental results on texture and histopathologic image datasets have shown that the proposed approach achieves competitive accuracy with lower computational cost and faster convergence when compared to equivalent CNNs.
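One standard way to combine convolutional features with a large-margin metric objective is a triplet margin loss, sketched below in PyTorch; the tiny encoder and the choice of triplet loss are assumptions for illustration, not the paper's exact architecture or objective:

```python
import torch
import torch.nn as nn

# small convolutional encoder for texture patches (illustrative)
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

# large-margin objective: pull same-texture patches together, push
# different textures at least `margin` apart in embedding space
loss_fn = nn.TripletMarginLoss(margin=1.0)
anchor, positive, negative = (torch.randn(8, 1, 32, 32) for _ in range(3))
loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
loss.backward()
print(loss.item())
```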
arXiv Detail & Related papers (2022-06-17T04:07:45Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
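A minimal sketch of the idea, assuming a softmax-relaxed codebook lookup so the discrete per-cell indices stay differentiable (the paper's exact vector-quantization scheme and decoder differ; all sizes below are made up):

```python
import torch
import torch.nn as nn

# compressed feature grid: each of G*G cells ultimately stores only an
# index into a small codebook, instead of a full feature vector
G, K, D = 16, 32, 8                            # grid size, codebook size, code dim
codebook = nn.Parameter(torch.randn(K, D))
logits = nn.Parameter(torch.zeros(G * G, K))   # soft per-cell assignments
decoder = nn.Sequential(nn.Linear(D, 64), nn.ReLU(), nn.Linear(64, 1))

def decode(cell_idx):
    # soft codebook lookup lets gradients reach the discrete choice;
    # after training, only the argmax index per cell needs to be stored
    weights = torch.softmax(logits[cell_idx], dim=-1)
    return decoder(weights @ codebook)

target = torch.randn(G * G, 1)                 # field values to fit
opt = torch.optim.Adam([codebook, logits, *decoder.parameters()], lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = ((decode(torch.arange(G * G)) - target) ** 2).mean()
    loss.backward()
    opt.step()
print(loss.item())
```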
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
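A hedged two-stream sketch: each raw attribute (coordinates vs. normals) gets its own graph-convolution stream, and features are fused only after message passing; layer sizes, normalization, and the fusion point are illustrative assumptions:

```python
import torch
import torch.nn as nn

def gcn_layer(A_hat, X, W):
    # one graph-convolution step: aggregate neighbors, then transform
    return torch.relu(A_hat @ X @ W)

N = 6
A = (torch.rand(N, N) > 0.5).float()
A = ((A + A.T) > 0).float()                    # symmetric toy mesh graph
A_hat = A + torch.eye(N)
A_hat = A_hat / A_hat.sum(1, keepdim=True)     # row-normalized adjacency

coords, normals = torch.randn(N, 3), torch.randn(N, 3)
W_c = nn.Parameter(torch.randn(3, 16))
W_n = nn.Parameter(torch.randn(3, 16))
fuse = nn.Linear(32, 2)                        # e.g. tooth vs. gum per node

# one stream per raw attribute, fused only after message passing so the
# two attribute types do not get confused early
h = torch.cat([gcn_layer(A_hat, coords, W_c),
               gcn_layer(A_hat, normals, W_n)], dim=-1)
print(fuse(h).shape)                           # (6, 2) per-node class scores
```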
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structured data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- OSLNet: Deep Small-Sample Classification with an Orthogonal Softmax Layer [77.90012156266324]
This paper aims to find a subspace of neural networks that can facilitate a large decision margin.
We propose the Orthogonal Softmax Layer (OSL), which makes the weight vectors in the classification layer remain orthogonal during both the training and test processes.
Experimental results demonstrate that the proposed OSL outperforms the compared methods on four small-sample benchmark datasets.
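One simple way to keep classification weight vectors mutually orthogonal throughout training and testing is a fixed block-diagonal mask, sketched below; this construction is in the spirit of the summary, though the paper's exact layer may differ:

```python
import torch
import torch.nn as nn

class OrthogonalSoftmaxLayer(nn.Module):
    """Classifier whose class weight vectors stay mutually orthogonal.

    A fixed block-diagonal mask gives each class its own disjoint slice
    of the feature vector, so the class weight vectors are orthogonal by
    construction at every training step.
    """
    def __init__(self, in_dim, n_classes):
        super().__init__()
        assert in_dim % n_classes == 0
        self.weight = nn.Parameter(torch.randn(n_classes, in_dim) * 0.01)
        mask = torch.zeros(n_classes, in_dim)
        block = in_dim // n_classes
        for c in range(n_classes):
            mask[c, c * block:(c + 1) * block] = 1.0   # disjoint supports
        self.register_buffer("mask", mask)

    def forward(self, x):
        return x @ (self.weight * self.mask).T

layer = OrthogonalSoftmaxLayer(in_dim=16, n_classes=4)
W = layer.weight * layer.mask
# off-diagonal inner products vanish: the rows are orthogonal
print(torch.allclose(W @ W.T, torch.diag((W ** 2).sum(1))))  # True
```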
arXiv Detail & Related papers (2020-04-20T02:41:01Z)