Translating Diffusion, Wavelets, and Regularisation into Residual
Networks
- URL: http://arxiv.org/abs/2002.02753v3
- Date: Sun, 7 Jun 2020 08:51:13 GMT
- Title: Translating Diffusion, Wavelets, and Regularisation into Residual
Networks
- Authors: Tobias Alt, Joachim Weickert, Pascal Peter
- Abstract summary: Convolutional neural networks (CNNs) often perform well, but their stability is poorly understood.
We consider the simple problem of signal denoising, where classical approaches offer provable stability guarantees.
We interpret numerical approximations of these classical methods as a specific residual network architecture.
This leads to a dictionary that allows us to translate diffusivities, shrinkage functions, and regularisers into activation functions.
- Score: 15.104201344012347
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional neural networks (CNNs) often perform well, but their stability
is poorly understood. To address this problem, we consider the simple
prototypical problem of signal denoising, where classical approaches such as
nonlinear diffusion, wavelet-based methods and regularisation offer provable
stability guarantees. To transfer such guarantees to CNNs, we interpret
numerical approximations of these classical methods as a specific residual
network (ResNet) architecture. This leads to a dictionary that allows us to
translate diffusivities, shrinkage functions, and regularisers into activation
functions, and enables direct communication between the four research
communities. On the CNN side, it not only inspires new families of
nonmonotone activation functions, but also introduces intrinsically stable
architectures for an arbitrary number of layers.
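To make the dictionary concrete, the sketch below (our own illustration, not code from the paper) shows how one explicit step of 1D nonlinear diffusion already has the shape of a residual block: a skip connection around a convolution, an activation, and a transposed convolution. The flux function induced by a Perona-Malik diffusivity plays the role of the activation and is nonmonotone, in line with the activation families mentioned in the abstract. All function names and the parameters tau and lam are illustrative assumptions.

```python
import numpy as np

def perona_malik_flux(v, lam=1.0):
    """Flux Phi(v) = g(v^2) * v with the Perona-Malik diffusivity
    g(s^2) = 1 / (1 + s^2 / lam^2). In the ResNet reading, this
    nonmonotone function plays the role of the activation."""
    return v / (1.0 + (v / lam) ** 2)

def forward_diff(u):
    """Discrete derivative D u (forward differences), a 1D convolution."""
    return u[1:] - u[:-1]

def transposed_diff(v):
    """Transposed convolution D^T v (negative discrete divergence)."""
    out = np.zeros(v.size + 1)
    out[:-1] -= v
    out[1:] += v
    return out

def diffusion_residual_block(u, tau=0.4, lam=1.0):
    """One explicit step  u^{k+1} = u^k - tau * D^T Phi(D u^k):
    a skip connection plus (convolution -> activation -> transposed convolution)."""
    return u - tau * transposed_diff(perona_malik_flux(forward_diff(u), lam))

# Denoise a toy 1D signal by stacking the same block many times ("layers").
rng = np.random.default_rng(0)
signal = np.repeat([0.0, 1.0, 0.0], 50) + 0.1 * rng.standard_normal(150)
for _ in range(100):
    signal = diffusion_residual_block(signal)
```

Since the diffusivity is bounded by 1, choosing tau <= 0.5 satisfies the explicit stability limit, so in this sketch the same block can be stacked for an arbitrary number of layers without amplifying the signal; this mirrors the kind of stability guarantee the abstract alludes to.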
Related papers
- Leveraging Low-Rank and Sparse Recurrent Connectivity for Robust Closed-Loop Control [63.310780486820796]
We show how a parameterization of recurrent connectivity influences robustness in closed-loop settings.
We find that closed-form continuous-time neural networks (CfCs) with fewer parameters can outperform their full-rank, fully-connected counterparts.
arXiv Detail & Related papers (2023-10-05T21:44:18Z)
- Expressive Monotonic Neural Networks [1.0128808054306184]
The monotonic dependence of the outputs of a neural network on some of its inputs is a crucial inductive bias in many scenarios where domain knowledge dictates such behavior.
We propose a weight-constrained architecture with a single residual connection to achieve exact monotonic dependence in any subset of the inputs.
We show how the algorithm is used to train powerful, robust, and interpretable discriminators that achieve competitive performance.
arXiv Detail & Related papers (2023-07-14T17:59:53Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
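As a rough illustration of such a network (our sketch, not the parametrization from the paper), each layer applies a sine nonlinearity and the first layer carries a frequency scale; larger scales admit higher input frequencies, which is loosely the "adjustable bandwidth" the summary refers to. The name w0 and the forward pass below are assumptions made for illustration.

```python
import numpy as np

def sinusoidal_mlp(x, weights, biases, w0=6.0):
    """Sketch of a sinusoidal network: the first layer computes
    sin(w0 * (W x + b)), later layers compute sin(W h + b).
    w0 is the frequency scale that loosely controls the bandwidth."""
    h = np.sin(w0 * (weights[0] @ x + biases[0]))
    for W, b in zip(weights[1:], biases[1:]):
        h = np.sin(W @ h + b)
    return h

# Example: a 2-layer network mapping a scalar coordinate to a scalar value.
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((32, 1)), rng.standard_normal((1, 32)) / np.sqrt(32)]
bs = [np.zeros(32), np.zeros(1)]
y = sinusoidal_mlp(np.array([0.5]), Ws, bs, w0=6.0)
```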
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- Interference Cancellation GAN Framework for Dynamic Channels [74.22393885274728]
We introduce an online training framework that can adapt to any changes in the channel.
Our framework significantly outperforms recent neural network models on highly dynamic channels.
arXiv Detail & Related papers (2022-08-17T02:01:18Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- A scalable multi-step least squares method for network identification with unknown disturbance topology [0.0]
We present an identification method for dynamic networks with known network topology.
We use a multi-step Sequential and Null Space Fitting method to deal with reduced rank noise.
We provide a consistency proof that includes explicit informativity conditions for the Box model structure.
arXiv Detail & Related papers (2021-06-14T16:12:49Z)
- Translating Numerical Concepts for PDEs into Neural Architectures [9.460896836770534]
We investigate what can be learned from translating numerical algorithms into neural networks.
On the numerical side, we consider explicit, accelerated explicit, and implicit schemes for a general higher order nonlinear diffusion equation in 1D.
On the neural network side, we identify corresponding concepts in terms of residual networks (ResNets), recurrent networks, and U-nets.
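Loosely connecting this entry back to the main abstract: an accelerated explicit step extrapolates over the two previous iterates, which in network terms adds a second skip connection to the residual block. The sketch below is our illustration, not the authors' code; it reuses the hypothetical diffusion_residual_block from the example after the abstract, and the constant weight alpha is a placeholder rather than the index-dependent weights such schemes actually use.

```python
def accelerated_diffusion_step(u_k, u_km1, alpha=1.0, tau=0.4, lam=1.0):
    """Extrapolated explicit step
        u^{k+1} = alpha * (u^k - tau * D^T Phi(D u^k)) + (1 - alpha) * u^{k-1},
    i.e. the plain residual block plus an extra skip to u^{k-1}.
    alpha = 1 recovers the plain explicit step; accelerated schemes choose
    index-dependent weights (not reproduced here)."""
    return alpha * diffusion_residual_block(u_k, tau, lam) + (1.0 - alpha) * u_km1
```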
arXiv Detail & Related papers (2021-03-29T08:31:51Z)
- F-FADE: Frequency Factorization for Anomaly Detection in Edge Streams [53.70940420595329]
We propose F-FADE, a new approach for detection of anomalies in edge streams.
It uses a novel frequency-factorization technique to efficiently model the time-evolving distributions of frequencies of interactions between node-pairs.
In an online streaming setting, F-FADE can handle a broad variety of anomalies with temporal and structural changes while requiring only constant memory.
arXiv Detail & Related papers (2020-11-09T19:55:40Z)
- DiffRNN: Differential Verification of Recurrent Neural Networks [3.4423518864863154]
Recurrent neural networks (RNNs) have become popular in a variety of applications such as image processing, data classification, speech recognition, and as controllers in autonomous systems.
We propose DIFFRNN, the first differential verification method for RNNs to certify the equivalence of two structurally similar neural networks.
We demonstrate the practical efficacy of our technique on a variety of benchmarks and show that DIFFRNN outperforms state-of-the-art verification tools such as POPQORN.
arXiv Detail & Related papers (2020-07-20T14:14:35Z)
- Stability of Internal States in Recurrent Neural Networks Trained on Regular Languages [0.0]
We study the stability of neural networks trained to recognize regular languages.
In this saturated regime, analysis of the network activation shows a set of clusters that resemble discrete states in a finite state machine.
We show that transitions between these states in response to input symbols are deterministic and stable.
arXiv Detail & Related papers (2020-06-18T19:50:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.