Fully reversible neural networks for large-scale surface and sub-surface characterization via remote sensing
- URL: http://arxiv.org/abs/2003.07474v1
- Date: Mon, 16 Mar 2020 23:54:22 GMT
- Title: Fully reversible neural networks for large-scale surface and sub-surface characterization via remote sensing
- Authors: Bas Peters, Eldad Haber, Keegan Lensink
- Abstract summary: The large spatial/frequency scale of hyperspectral and airborne magnetic and gravitational data causes memory issues when using convolutional neural networks for (sub-) surface characterization.
We show examples from land-use change detection from hyperspectral time-lapse data, and regional aquifer mapping from airborne geophysical and geological data.
- Score: 4.383011485317949
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The large spatial/frequency scale of hyperspectral and airborne magnetic and
gravitational data causes memory issues when using convolutional neural
networks for (sub-) surface characterization. Recently developed fully
reversible networks can mostly avoid memory limitations by virtue of having a
low and fixed memory requirement for storing network states, as opposed to the
typical linear memory growth with depth. Fully reversible networks enable the
training of deep neural networks that take in entire data volumes, and create
semantic segmentations in one go. This approach avoids the need to work in
small patches or map a data patch to the class of just the central pixel. The
cross-entropy loss function requires small modifications to work in conjunction
with a fully reversible network and learn from sparsely sampled labels without
ever seeing fully labeled ground truth. We show examples from land-use change
detection from hyperspectral time-lapse data, and regional aquifer mapping from
airborne geophysical and geological data.
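The loss modification described above amounts to evaluating the cross-entropy only at the sparsely labeled pixels, so the gradient never depends on unlabeled ground truth. A minimal PyTorch sketch of that idea, using the standard ignore_index mechanism rather than the authors' exact implementation:

```python
import torch
import torch.nn.functional as F

def sparse_cross_entropy(logits, labels, ignore_index=-1):
    """Cross-entropy evaluated only at labeled pixels.

    logits: (batch, classes, H, W) network output for the full volume
    labels: (batch, H, W) integer labels; unlabeled pixels hold ignore_index
    """
    # F.cross_entropy skips pixels equal to ignore_index, so the loss
    # (and hence the gradient) uses only the sparsely sampled labels.
    return F.cross_entropy(logits, labels, ignore_index=ignore_index)

# toy usage: a 2-class map with a handful of labeled pixels
logits = torch.randn(1, 2, 64, 64, requires_grad=True)
labels = torch.full((1, 64, 64), -1, dtype=torch.long)  # -1 = unlabeled
idx = torch.randint(0, 64, (2, 40))                     # 40 random labeled pixels
labels[0, idx[0], idx[1]] = torch.randint(0, 2, (40,))
loss = sparse_cross_entropy(logits, labels)
loss.backward()
```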
Related papers
- Fully invertible hyperbolic neural networks for segmenting large-scale surface and sub-surface data [4.1579007112499315]
This paper focuses on a fully invertible network based on the telegraph equation.
We address the explosion of convolutional kernels by combining fully invertible networks with layers that contain the convolutional kernels directly in compressed form.
Examples in hyperspectral land-use classification, airborne geophysical surveying, and seismic imaging illustrate that we can input large data volumes in one chunk and do not need to work on small patches.
arXiv Detail & Related papers (2024-06-30T05:35:12Z)
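Networks of this family follow a two-step (leapfrog) discretization of a hyperbolic PDE, so any state can be recomputed exactly from the two states ahead of it instead of being stored. A NumPy sketch of that recursion and its inversion; the step size h and the stand-in nonlinearity f are hypothetical choices, not the paper's:

```python
import numpy as np

def f(y):
    # placeholder nonlinearity standing in for sigma(K * y); any
    # fixed function works for illustrating reversibility
    return np.tanh(0.1 * y)

def forward(y0, y1, steps, h=0.5):
    """Leapfrog recursion y_{j+1} = 2 y_j - y_{j-1} + h^2 f(y_j)."""
    states = [y0, y1]
    for _ in range(steps):
        y0, y1 = y1, 2.0 * y1 - y0 + h**2 * f(y1)
        states.append(y1)
    return states

def backward(y_second_last, y_last, steps, h=0.5):
    """Invert the recursion: y_{j-1} = 2 y_j - y_{j+1} + h^2 f(y_j)."""
    states = [y_last, y_second_last]
    for _ in range(steps):
        y_last, y_second_last = (y_second_last,
                                 2.0 * y_second_last - y_last
                                 + h**2 * f(y_second_last))
        states.append(y_second_last)
    return states[::-1]

y0, y1 = np.random.randn(8), np.random.randn(8)
fwd = forward(y0, y1, steps=10)
rec = backward(fwd[-2], fwd[-1], steps=10)
assert np.allclose(fwd[0], rec[0])  # the first state is recovered
```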
- Topology-aware Embedding Memory for Continual Learning on Expanding Networks [63.35819388164267]
We present a framework to tackle the memory explosion problem using memory replay techniques.
PDGNNs with Topology-aware Embedding Memory (TEM) significantly outperform state-of-the-art techniques.
arXiv Detail & Related papers (2024-01-24T03:03:17Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Semi-signed neural fitting for surface reconstruction from unoriented point clouds [53.379712818791894]
We propose SSN-Fitting to reconstruct a better signed distance field.
SSN-Fitting consists of a semi-signed supervision and a loss-based region sampling strategy.
We conduct experiments to demonstrate that SSN-Fitting achieves state-of-the-art performance under different settings.
arXiv Detail & Related papers (2022-06-14T09:40:17Z)
- Recurrent neural networks that generalize from examples and optimize by dreaming [0.0]
We introduce a generalized Hopfield network where pairwise couplings between neurons are built according to Hebb's prescription for on-line learning.
We let the network experience only a dataset made of noisy examples of each pattern.
Remarkably, the sleeping mechanisms always significantly reduce the dataset size required to correctly generalize.
arXiv Detail & Related papers (2022-04-17T08:40:54Z)
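Hebb's prescription builds the coupling matrix from outer products of the (noisy) training configurations. A toy NumPy sketch of that construction for a single stored pattern; the sizes, flip probability, and normalization are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 64, 200                        # neurons, noisy examples per pattern
pattern = rng.choice([-1, 1], size=N)

# noisy examples: each bit of the pattern flips with probability p
p = 0.2
flips = rng.random((M, N)) < p
examples = np.where(flips, -pattern, pattern)

# Hebb's prescription: couplings are the normalized sum of outer
# products of the examples; the diagonal is conventionally zeroed
J = examples.T @ examples / (N * M)
np.fill_diagonal(J, 0.0)

# one step of zero-temperature dynamics from a corrupted probe
probe = np.where(rng.random(N) < 0.3, -pattern, pattern)
update = np.sign(J @ probe)
print("overlap with stored pattern:", update @ pattern / N)
```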
- FuNNscope: Visual microscope for interactively exploring the loss landscape of fully connected neural networks [77.34726150561087]
We show how to explore high-dimensional landscape characteristics of neural networks.
We generalize observations on small neural networks to more complex systems.
An interactive dashboard opens up a number of possible applications.
arXiv Detail & Related papers (2022-04-09T16:41:53Z)
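A standard way to expose such landscape characteristics is to evaluate the loss on a low-dimensional slice through the trained weights along random directions. A generic NumPy sketch of a 2-D slice on a toy linear model, not FuNNscope's actual interface:

```python
import numpy as np

# toy regression problem standing in for any network loss
rng = np.random.default_rng(0)
X, y = rng.standard_normal((100, 5)), rng.standard_normal(100)

def loss(w):
    # mean squared error of a linear model
    return np.mean((X @ w - y) ** 2)

w0 = rng.standard_normal(5)                  # "trained" weights
d1 = rng.standard_normal(5); d1 /= np.linalg.norm(d1)
d2 = rng.standard_normal(5); d2 /= np.linalg.norm(d2)

# evaluate the loss on the slice w0 + a*d1 + b*d2
alphas = np.linspace(-1, 1, 25)
surface = np.array([[loss(w0 + a * d1 + b * d2) for b in alphas]
                    for a in alphas])
print(surface.shape, surface.min())          # 25x25 grid, ready to plot
```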
- Deep Surface Reconstruction from Point Clouds with Visibility Information [66.05024551590812]
We present two simple ways to augment raw point clouds with visibility information so that it can be directly leveraged by surface reconstruction networks with minimal adaptation.
Our proposed modifications consistently improve the accuracy of generated surfaces as well as the generalization ability of the networks to unseen shape domains.
arXiv Detail & Related papers (2022-02-03T19:33:47Z)
- Slope and generalization properties of neural networks [0.0]
We show that the distribution of the slope of a well-trained neural network classifier is generally independent of the width of the layers in a fully connected network.
The slope is of similar size throughout the relevant volume, and varies smoothly. It also behaves as predicted in rescaling examples.
We discuss possible applications of the slope concept, such as using it as a part of the loss function or stopping criterion during network training, or ranking data sets in terms of their complexity.
arXiv Detail & Related papers (2021-07-03T17:54:27Z)
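One concrete reading of the slope at a point is the norm of the network's input gradient, which for piecewise-linear (ReLU) networks measures the local linear piece. A PyTorch sketch of that quantity; this is our interpretation and may differ in detail from the paper's exact definition:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, 1))

def slope_at(x):
    """Norm of the input gradient of the scalar output at x; for a
    ReLU network this is the size of the local linear map."""
    x = x.clone().requires_grad_(True)
    out = net(x).sum()
    (grad,) = torch.autograd.grad(out, x)
    return grad.norm(dim=-1)

xs = torch.randn(1000, 10)
slopes = slope_at(xs)
print(slopes.mean().item(), slopes.std().item())  # slope distribution stats
```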
- Invertible Residual Network with Regularization for Effective Medical Image Segmentation [2.76240219662896]
Invertible neural networks have been applied to significantly reduce activation memory footprint when training neural networks with backpropagation.
We propose two versions of the invertible Residual Network, namely Partially Invertible Residual Network (Partially-InvRes) and Fully Invertible Residual Network (Fully-InvRes)
Our results indicate that by using partially/fully invertible networks as the central workhorse in volumetric segmentation, we not only reduce memory overhead but also achieve compatible segmentation performance compared against the non-invertible 3D Unet.
arXiv Detail & Related papers (2021-03-16T13:19:59Z)
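Invertible residual networks of this kind typically rely on additive channel coupling, whose exact inverse lets training recompute activations instead of storing them. A minimal PyTorch sketch with generic 3D convolutions; the F/G sub-networks are placeholder choices, not the Partially-/Fully-InvRes architecture:

```python
import torch
import torch.nn as nn

class InvertibleBlock(nn.Module):
    """Additive coupling: split channels, y1 = x1 + F(x2), y2 = x2 + G(y1).
    The exact inverse lets the backward pass recompute inputs from
    outputs, so activations need not be kept in memory."""
    def __init__(self, channels):
        super().__init__()
        half = channels // 2
        self.F = nn.Sequential(nn.Conv3d(half, half, 3, padding=1), nn.ReLU(),
                               nn.Conv3d(half, half, 3, padding=1))
        self.G = nn.Sequential(nn.Conv3d(half, half, 3, padding=1), nn.ReLU(),
                               nn.Conv3d(half, half, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        y1 = x1 + self.F(x2)
        y2 = x2 + self.G(y1)
        return torch.cat([y1, y2], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        x2 = y2 - self.G(y1)
        x1 = y1 - self.F(x2)
        return torch.cat([x1, x2], dim=1)

block = InvertibleBlock(8)
x = torch.randn(1, 8, 4, 16, 16)          # a small volumetric input
with torch.no_grad():
    assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)
```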
- Topological obstructions in neural networks learning [67.8848058842671]
We study global properties of the loss gradient function flow.
We use topological data analysis of the loss function and its Morse complex to relate local behavior along gradient trajectories with global properties of the loss surface.
arXiv Detail & Related papers (2020-12-31T18:53:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.