Regression Networks For Calculating Englacial Layer Thickness
- URL: http://arxiv.org/abs/2104.04654v1
- Date: Sat, 10 Apr 2021 00:31:32 GMT
- Title: Regression Networks For Calculating Englacial Layer Thickness
- Authors: Debvrat Varshney, Maryam Rahnemoonfar, Masoud Yari, and John Paden
- Abstract summary: We use convolutional neural networks with multiple output nodes to regress and learn the thickness of internal ice layers in Snow Radar images.
With the residual connections of ResNet50, we could achieve a mean absolute error of 1.251 pixels over the test set.
- Score: 1.0499611180329802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ice thickness estimation is an important aspect of ice sheet studies. In this
work, we use convolutional neural networks with multiple output nodes to
regress and learn the thickness of internal ice layers in Snow Radar images
collected in northwest Greenland. We experiment with some state-of-the-art
networks and find that with the residual connections of ResNet50, we could
achieve a mean absolute error of 1.251 pixels over the test set. Such
regression-based networks can further be improved by embedding domain knowledge
and radar information in the neural network in order to reduce the requirement
of manual annotations.
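Below is a minimal PyTorch sketch of the kind of multi-output regression network the abstract describes: a ResNet50 backbone whose classifier is replaced by one regression output per internal ice layer, trained with an L1 (mean absolute error) objective to match the pixel-MAE metric reported. The single-channel input adaptation and the `NUM_LAYERS` constant are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): ResNet50 backbone with a
# multi-output regression head, one output node per internal ice layer.
import torch
import torch.nn as nn
from torchvision.models import resnet50

NUM_LAYERS = 5  # assumed number of ice layers to regress; not specified in the abstract

class ThicknessRegressor(nn.Module):
    def __init__(self, num_layers: int = NUM_LAYERS):
        super().__init__()
        backbone = resnet50(weights=None)
        # Snow Radar echograms are single-channel; adapt the first conv (assumption).
        backbone.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        # Replace the 1000-way classifier with `num_layers` regression outputs.
        backbone.fc = nn.Linear(backbone.fc.in_features, num_layers)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(x)  # predicted thickness of each layer, in pixels

model = ThicknessRegressor()
criterion = nn.L1Loss()  # mean absolute error, matching the pixel-MAE metric reported
images = torch.randn(4, 1, 224, 224)        # dummy radar echograms
targets = torch.rand(4, NUM_LAYERS) * 30.0  # dummy per-layer thicknesses (pixels)
loss = criterion(model(images), targets)
loss.backward()
```

With each output node regressing one layer's thickness directly in pixels, the test error can be reported as a pixel MAE, as in the abstract.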
Related papers
- Opening the Black Box: predicting the trainability of deep neural networks with reconstruction entropy [0.0]
We present a method for predicting the trainable regime in parameter space for deep feedforward neural networks.
For both MNIST and CIFAR10, we show that a single epoch of training is sufficient to predict the trainability of the deep feedforward network.
arXiv Detail & Related papers (2024-06-13T18:00:05Z) - Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs [63.768739279562105]
We show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data.
A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set.
arXiv Detail & Related papers (2023-09-26T17:42:52Z) - Neural Network Pruning as Spectrum Preserving Process [7.386663473785839]
We identify the close connection between matrix spectrum learning and neural network training for dense and convolutional layers.
We propose a matrix sparsification algorithm tailored for neural network pruning that yields better pruning results (see the sketch after this list).
arXiv Detail & Related papers (2023-07-18T05:39:32Z) - A Dimensionality Reduction Approach for Convolutional Neural Networks [0.0]
We propose a generic methodology to reduce the number of layers of a pre-trained network by combining the aforementioned techniques for dimensionality reduction with input-output mappings.
Our experiments show that the reduced networks can achieve a level of accuracy similar to the original Convolutional Neural Network under examination, while using less memory.
arXiv Detail & Related papers (2021-10-18T10:31:12Z) - Backward Gradient Normalization in Deep Neural Networks [68.8204255655161]
We introduce a new technique for gradient normalization during neural network training.
The gradients are rescaled during the backward pass using normalization layers introduced at certain points within the network architecture.
Results on tests with very deep neural networks show that the new technique can effectively control the gradient norm (see the sketch after this list).
arXiv Detail & Related papers (2021-06-17T13:24:43Z) - Adder Neural Networks [75.54239599016535]
We present adder networks (AdderNets) to trade the massive multiplications in deep neural networks for much cheaper additions.
In AdderNets, we take the $\ell_p$-norm distance between the filters and the input feature as the output response (see the sketch after this list).
We show that the proposed AdderNets can achieve 75.7% Top-1 accuracy and 92.3% Top-5 accuracy with ResNet-50 on the ImageNet dataset.
arXiv Detail & Related papers (2021-05-29T04:02:51Z) - Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z) - Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z) - Lattice Fusion Networks for Image Denoising [4.010371060637209]
A novel method for feature fusion in convolutional neural networks is proposed in this paper.
Several existing fusion techniques, as well as the proposed network, can be considered a type of Directed Acyclic Graph (DAG) network.
The proposed network is able to achieve better results with far fewer learnable parameters.
arXiv Detail & Related papers (2020-11-28T18:57:54Z) - Compressive sensing with un-trained neural networks: Gradient descent finds the smoothest approximation [60.80172153614544]
Un-trained convolutional neural networks have emerged as highly successful tools for image recovery and restoration.
We show that an un-trained convolutional neural network can approximately reconstruct signals and images that are sufficiently structured from a near-minimal number of random measurements (see the sketch after this list).
arXiv Detail & Related papers (2020-05-07T15:57:25Z) - Lifted Regression/Reconstruction Networks [17.89437720094451]
We propose lifted regression/reconstruction networks (LRRNs).
LRRNs combine lifted neural networks with a guaranteed Lipschitz continuity property for the output layer.
We analyse and numerically demonstrate applications for unsupervised and supervised learning.
arXiv Detail & Related papers (2020-05-07T13:24:46Z)
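For the spectrum-preserving pruning entry above, here is a hedged sketch of the criterion that work motivates: compare the singular values of a layer's weight matrix before and after pruning. The magnitude-based pruning rule and the 80% sparsity level are placeholders; the cited paper proposes its own sparsification algorithm.

```python
# Sketch: how far does pruning move the spectrum of a weight matrix?
# Magnitude pruning is used only as a baseline for illustration.
import torch

W = torch.randn(256, 512)
threshold = W.abs().quantile(0.8)  # prune the smallest 80% of weights (arbitrary level)
W_pruned = torch.where(W.abs() >= threshold, W, torch.zeros_like(W))

s_dense = torch.linalg.svdvals(W)
s_pruned = torch.linalg.svdvals(W_pruned)
# A spectrum-preserving pruning keeps this gap small for the leading singular values.
print((s_dense[:10] - s_pruned[:10]).abs())
```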
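For the backward gradient normalization entry, a minimal sketch of the general mechanism: a layer that is the identity in the forward pass and rescales the incoming gradient in the backward pass. The unit-L2 rescaling rule here is an assumption; the paper's exact normalization may differ.

```python
# Sketch of a backward-pass gradient normalization layer: a no-op in the
# forward pass that rescales the incoming gradient during backpropagation.
import torch

class GradNorm(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x  # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # Rescale the gradient to unit L2 norm (assumed rule, for illustration).
        return grad_output / (grad_output.norm() + 1e-12)

def grad_norm(x):
    return GradNorm.apply(x)
```

Such layers can be inserted between blocks of a very deep network (e.g. `x = grad_norm(block(x))`) so that gradient magnitudes stay controlled as they propagate backward.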
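For the Adder Neural Networks entry, a sketch of the core idea: the layer response is the negative $\ell_1$ distance between the weights and the input, replacing multiply-accumulate with additions. A fully connected variant is shown for brevity (the paper applies the idea to convolutions), and the special gradient treatment AdderNets use in training is omitted.

```python
# Sketch of an AdderNet-style layer: the response is the negative L1
# distance between each filter and the input, computed with additions
# and absolute values instead of multiplications.
import torch
import torch.nn as nn

class AdderLinear(nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, 1, in) - (1, out, in) -> (batch, out, in); sum |diff| over inputs
        diff = x.unsqueeze(1) - self.weight.unsqueeze(0)
        return -diff.abs().sum(dim=-1)

layer = AdderLinear(128, 64)
out = layer(torch.randn(8, 128))  # shape (8, 64)
```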
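For the compressive-sensing entry, a sketch of recovery with an un-trained network: fit the weights of a small convolutional generator by gradient descent so that its output matches random linear measurements, then read off the reconstruction. The architecture and problem sizes are illustrative only, not those used in the paper.

```python
# Sketch of compressive sensing with an un-trained network: optimize the
# weights of a small convolutional generator G so that A @ G(z) matches
# the random measurements y, then take G(z) as the reconstruction.
import torch
import torch.nn as nn

n = 32 * 32                       # signal dimension
m = n // 4                        # number of random measurements
A = torch.randn(m, n) / m ** 0.5  # random measurement matrix
x_true = torch.zeros(1, 1, 32, 32)
x_true[..., 8:24, 8:24] = 1.0     # a simple, highly structured image
y = A @ x_true.reshape(n, 1)      # measurements of the unknown signal

G = nn.Sequential(                # small un-trained generator (illustrative)
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
z = torch.randn(1, 1, 32, 32)     # fixed random input to the generator
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    loss = ((A @ G(z).reshape(n, 1) - y) ** 2).mean()
    loss.backward()
    opt.step()
x_hat = G(z).detach()             # reconstruction of x_true
```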
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.