Shape-Tailored Deep Neural Networks
- URL: http://arxiv.org/abs/2102.08497v1
- Date: Tue, 16 Feb 2021 23:32:14 GMT
- Title: Shape-Tailored Deep Neural Networks
- Authors: Naeemullah Khan, Angira Sharma, Ganesh Sundaramoorthi, Philip H. S.
Torr
- Abstract summary: We present Shape-Tailored Deep Neural Networks (ST-DNN).
ST-DNN extend convolutional networks (CNN), which aggregate data from fixed shape (square) neighborhoods, to compute descriptors defined on arbitrarily shaped regions.
We show that ST-DNN are 3-4 orders of magnitude smaller than CNNs used for segmentation.
- Score: 87.55487474723994
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present Shape-Tailored Deep Neural Networks (ST-DNN). ST-DNN extend
convolutional networks (CNN), which aggregate data from fixed shape (square)
neighborhoods, to compute descriptors defined on arbitrarily shaped regions.
This is natural for segmentation, where descriptors should describe regions
(e.g., of objects) that have diverse shape. We formulate these descriptors
through the Poisson partial differential equation (PDE), which can be used to
generalize convolution to arbitrary regions. We stack multiple PDE layers to
generalize a deep CNN to arbitrary regions, and apply it to segmentation. We
show that ST-DNN are covariant to translations and rotations and robust to
domain deformations, properties that are natural for segmentation but that
existing CNN-based methods lack. ST-DNN are 3-4 orders of magnitude smaller
than CNNs used for segmentation. We show that they exceed the segmentation
performance of state-of-the-art CNN-based descriptors on the texture
segmentation problem while using training sets 2-3 orders of magnitude
smaller.
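The summary names the Poisson PDE as the mechanism but gives no formulation. As a rough, hedged illustration of the idea, the sketch below aggregates data only within an arbitrarily shaped region by Jacobi iteration on a masked screened-Poisson equation, u - alpha * laplace(u) = f; the function name, the screened form, and the solver choice are our assumptions, not the paper's reference implementation.

```python
import numpy as np

def poisson_layer(f, mask, alpha=10.0, iters=200):
    """Approximately solve u - alpha * laplace(u) = f inside `mask`
    (a 0/1 region indicator) by Jacobi iteration. Out-of-region
    neighbours are masked out, so data is aggregated only within the
    arbitrarily shaped region, never across its boundary.
    Illustrative sketch, not the ST-DNN reference code."""
    f = f * mask
    u = f.copy()
    for _ in range(iters):
        acc = np.zeros_like(u)   # sum of in-region neighbour values
        cnt = np.zeros_like(u)   # number of in-region neighbours
        # np.roll wraps at the image border; harmless for a sketch as
        # long as the region does not touch opposite borders.
        for shift, axis in [(1, 0), (-1, 0), (1, 1), (-1, 1)]:
            nbr_mask = np.roll(mask, shift, axis=axis)
            acc += np.roll(u, shift, axis=axis) * nbr_mask
            cnt += nbr_mask
        # Jacobi update of (1 + alpha * cnt) * u = f + alpha * acc
        u = mask * (f + alpha * acc) / (1.0 + alpha * cnt)
    return u
```

Stacking several such region-respecting smoothing steps, interleaved with learned pointwise transforms, captures the spirit of replacing fixed square-support convolutions with PDE layers.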
Related papers
- Model Parallel Training and Transfer Learning for Convolutional Neural Networks by Domain Decomposition [0.0]
Deep convolutional neural networks (CNNs) have been shown to be very successful in a wide range of image processing applications.
Due to the increasing number of model parameters and the increasing availability of large amounts of training data, parallelization strategies for efficiently training complex CNNs are necessary.
arXiv Detail & Related papers (2024-08-26T17:35:01Z) - What Can Be Learnt With Wide Convolutional Neural Networks? [69.55323565255631]
We study infinitely-wide deep CNNs in the kernel regime.
We prove that deep CNNs adapt to the spatial scale of the target function.
We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN.
arXiv Detail & Related papers (2022-08-01T17:19:32Z) - Towards a General Purpose CNN for Long Range Dependencies in
$\mathrm{N}$D [49.57261544331683]
We propose a single CNN architecture equipped with continuous convolutional kernels for tasks on arbitrary resolution, dimensionality and length without structural changes.
We show the generality of our approach by applying the same CCNN to a wide set of tasks on sequential ($1\mathrm{D}$) and visual ($2\mathrm{D}$) data.
Our CCNN performs competitively and often outperforms the current state-of-the-art across all tasks considered.
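The summary does not say how the continuous kernels are parameterized; a common realization, and the assumption behind this sketch, is a small MLP that maps continuous relative coordinates to kernel weights, so one learned kernel can be sampled at any resolution or length.

```python
import numpy as np

rng = np.random.default_rng(0)
# Tiny MLP that maps a continuous relative coordinate in [-1, 1]
# to a kernel weight; its parameters are what would be learned.
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)

def kernel_fn(coords):
    """coords: (k, 1) relative positions -> (k,) kernel weights."""
    h = np.tanh(coords @ W1 + b1)
    return (h @ W2 + b2).ravel()

def continuous_conv1d(x, size):
    """Sample the continuous kernel at `size` points, then convolve.
    The sampling density can change at test time without retraining."""
    coords = np.linspace(-1.0, 1.0, size).reshape(-1, 1)
    return np.convolve(x, kernel_fn(coords), mode="same")

x = np.sin(np.linspace(0.0, 6.0 * np.pi, 128))
y_coarse = continuous_conv1d(x, size=5)   # same kernel, coarse sampling
y_fine = continuous_conv1d(x, size=21)    # same kernel, finer sampling
```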
arXiv Detail & Related papers (2022-06-07T15:48:02Z) - LatticeNet: Fast Spatio-Temporal Point Cloud Segmentation Using
Permutohedral Lattices [27.048998326468688]
Deep convolutional neural networks (CNNs) have shown outstanding performance in the task of semantically segmenting images.
Here, we propose LatticeNet, a novel approach for 3D semantic segmentation, which takes raw point clouds as input.
We present results of 3D segmentation on multiple datasets where our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-08-09T10:17:27Z) - LGNN: A Context-aware Line Segment Detector [53.424521592941936]
We present a novel real-time line segment detection scheme called Line Graph Neural Network (LGNN).
Our LGNN employs a deep convolutional neural network (DCNN) to propose line segments directly, with a graph neural network (GNN) module for reasoning about their connectivities.
Compared with the state-of-the-art, LGNN achieves near real-time performance without compromising accuracy.
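The two-stage design (a DCNN proposes segments, a GNN reasons about connectivity) can be made concrete with a toy message-passing step; everything below (names, mean aggregation, ReLU) is a generic GNN sketch of ours, not LGNN's actual module.

```python
import numpy as np

def gnn_step(seg_feats, adj, W_self, W_nbr):
    """One round of message passing over proposed line segments (nodes).
    adj[i, j] = 1 if segments i and j are candidate connections.
    A toy stand-in for a connectivity-reasoning module."""
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1.0)
    msgs = (adj @ seg_feats) / deg              # mean over neighbours
    return np.maximum(seg_feats @ W_self + msgs @ W_nbr, 0.0)  # ReLU
```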
arXiv Detail & Related papers (2020-08-13T13:23:18Z) - CyCNN: A Rotation Invariant CNN using Polar Mapping and Cylindrical
Convolution Layers [2.4316550366482357]
This paper proposes a deep CNN model, called CyCNN, which exploits polar mapping of input images to convert rotation to translation.
A CyConv layer exploits the cylindrically sliding windows (CSW) mechanism that vertically extends the input-image receptive fields of boundary units in a convolutional layer.
We show that if there is no data augmentation during training, CyCNN significantly improves classification accuracies when compared to conventional CNN models.
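The polar-mapping idea is concrete enough to sketch: after remapping to (angle, radius) coordinates, a rotation about the image centre becomes a circular shift along the angle axis, which wrap-around padding lets an ordinary convolution treat as a translation. Function names and the nearest-neighbour sampling below are illustrative choices, not the CyCNN code.

```python
import numpy as np

def to_polar(img, n_r=64, n_theta=128):
    """Resample a square grayscale image onto a (theta, r) grid, so a
    rotation about the centre becomes a circular shift along theta."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r = np.linspace(0.0, min(cy, cx), n_r)
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    tt, rr = np.meshgrid(theta, r, indexing="ij")
    ys = np.round(cy + rr * np.sin(tt)).astype(int)
    xs = np.round(cx + rr * np.cos(tt)).astype(int)
    return img[np.clip(ys, 0, h - 1), np.clip(xs, 0, w - 1)]

def cylindrical_pad(polar, pad=1):
    """Wrap-around padding along the angle axis (the cylindrical part),
    zero padding along the radius axis, before a standard convolution."""
    wrapped = np.concatenate([polar[-pad:], polar, polar[:pad]], axis=0)
    return np.pad(wrapped, ((0, 0), (pad, pad)))
```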
arXiv Detail & Related papers (2020-07-21T04:05:35Z) - Implicit Convex Regularizers of CNN Architectures: Convex Optimization
of Two- and Three-Layer Networks in Polynomial Time [70.15611146583068]
We study training of Convolutional Neural Networks (CNNs) with ReLU activations.
We introduce exact convex optimization formulations with polynomial complexity with respect to the number of data samples, the number of neurons, and the data dimension.
arXiv Detail & Related papers (2020-06-26T04:47:20Z) - On the Number of Linear Regions of Convolutional Neural Networks [0.6206641883102021]
Our results suggest that deeper CNNs have more powerful expressivity than their shallow counterparts, while CNNs have more expressivity than fully-connected NNs per parameter.
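For a one-hidden-layer ReLU network on a 1D input, linear regions can literally be counted: each distinct on/off pattern of the hidden units along the input line is one affine piece. The minimal sketch below illustrates the quantity the paper bounds, not its proof technique; the network and its weights are arbitrary examples.

```python
import numpy as np

# A one-hidden-layer ReLU net on a 1D input: each distinct pattern of
# active/inactive units along the input line is one linear region.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(1, 8)), rng.normal(size=8)

def pattern(x):
    return tuple((x * W1[0] + b1 > 0).astype(int))

xs = np.linspace(-5.0, 5.0, 10001)
regions = {pattern(x) for x in xs}
print(len(regions))  # at most 8 + 1 = 9 regions for 8 hidden units
```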
arXiv Detail & Related papers (2020-06-01T14:38:05Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional
Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.