A Robust Backpropagation-Free Framework for Images
- URL: http://arxiv.org/abs/2206.01820v2
- Date: Sun, 5 Nov 2023 18:59:55 GMT
- Title: A Robust Backpropagation-Free Framework for Images
- Authors: Timothy Zee, Alexander G. Ororbia, Ankur Mali, Ifeoma Nwogu
- Abstract summary: We present an error kernel driven activation alignment algorithm for image data.
EKDAA accomplishes through the introduction of locally derived error transmission kernels and error maps.
Results are presented for an EKDAA trained CNN that employs a non-differentiable activation function.
- Score: 47.97322346441165
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While current deep learning algorithms have been successful for a wide
variety of artificial intelligence (AI) tasks, including those involving
structured image data, they present deep neurophysiological conceptual issues
due to their reliance on the gradients that are computed by backpropagation of
errors (backprop). Gradients are required to obtain synaptic weight adjustments
but require knowledge of feed-forward activities in order to conduct backward
propagation, a biologically implausible process. This is known as the "weight
transport problem". Therefore, in this work, we present a more biologically
plausible approach towards solving the weight transport problem for image data.
This approach, which we name the error kernel driven activation alignment
(EKDAA) algorithm, accomplishes this through the introduction of locally derived
error transmission kernels and error maps. Like standard deep learning
networks, EKDAA performs the standard forward process via weights and
activation functions; however, its backward error computation involves adaptive
error kernels that propagate local error signals through the network. The
efficacy of EKDAA is demonstrated by performing visual-recognition tasks on the
Fashion MNIST, CIFAR-10 and SVHN benchmarks, along with demonstrating its
ability to extract visual features from natural color images. Furthermore, in
order to demonstrate its non-reliance on gradient computations, results are
presented for an EKDAA trained CNN that employs a non-differentiable activation
function.
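
To make the backward computation concrete, here is a minimal NumPy sketch of the error-kernel idea, under assumptions: the sign activation, padding scheme, learning rate, and update rules below are illustrative stand-ins, not the paper's exact formulation. The forward pass is an ordinary convolution plus activation; the backward pass transmits error through a separate, locally learned kernel instead of through derivatives of the forward path.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel map x with kernel k."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def sign_act(z):
    """A non-differentiable activation; this scheme never needs its derivative."""
    return np.sign(z)

w = rng.normal(scale=0.1, size=(3, 3))         # forward convolution kernel
e_kernel = rng.normal(scale=0.1, size=(3, 3))  # locally learned error kernel

x = rng.normal(size=(8, 8))                    # input map to this layer
a = sign_act(conv2d(x, w))                     # forward pass (6x6 output)

# Error map handed down by the layer above (same shape as this layer's output).
err_above = rng.normal(size=a.shape)

# Backward pass: a 'full' convolution with the error kernel turns the 6x6
# error map into an 8x8 local error map for the layer below -- no transpose
# of the forward weights (the weight transport problem) is involved.
pad = e_kernel.shape[0] - 1
local_err = conv2d(np.pad(err_above, pad), e_kernel)  # passed to layer below

# Local update: correlate the layer input with the transmitted error map, a
# Hebbian-style rule that never differentiates sign_act.  (In EKDAA the error
# kernels themselves are also adapted; that step is omitted in this sketch.)
lr = 0.01
w -= lr * conv2d(x, err_above)
```

Because the update uses only locally available activities and the transmitted error map, the sketch trains even though np.sign has a zero or undefined derivative everywhere, mirroring the non-differentiable-activation experiment above.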
Related papers
- DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi).
We learn the surrogate model for the residual operator mapping based on existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferred for PDE solving.
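As a loose illustration of the residual-learning idea (a sketch under assumptions, not DeltaPhi's actual architecture), the network below predicts a correction to an auxiliary solution rather than the solution itself; the MLP stands in for a neural operator, and all shapes are placeholders.

```python
import torch
import torch.nn as nn

# Toy stand-in for a neural operator: maps (input field, auxiliary solution)
# pairs to a residual correction.  Sizes are arbitrary placeholders.
net = nn.Sequential(nn.Linear(2 * 64, 128), nn.GELU(), nn.Linear(128, 64))

def predict(a, u_aux):
    """Model the target solution as auxiliary trajectory + learned residual."""
    return u_aux + net(torch.cat([a, u_aux], dim=-1))

# Training fits the residual operator on (input, auxiliary, target) triples:
a, u_aux, u = torch.randn(32, 64), torch.randn(32, 64), torch.randn(32, 64)
loss = ((predict(a, u_aux) - u) ** 2).mean()
loss.backward()
```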
arXiv Detail & Related papers (2024-06-14T07:45:07Z) - ENN: A Neural Network with DCT Adaptive Activation Functions [2.2713084727838115]
We present the Expressive Neural Network (ENN), a novel model in which the non-linear activation functions are modeled using the Discrete Cosine Transform (DCT).
This parametrization keeps the number of trainable parameters low, is appropriate for gradient-based schemes, and adapts to different learning tasks.
ENN outperforms state-of-the-art benchmarks, with an accuracy gap of more than 40% in some scenarios.
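A hedged sketch of what a DCT-parameterized activation might look like (the sigmoid squashing, basis size, and initialization below are assumptions, not ENN's exact design):

```python
import math
import torch
import torch.nn as nn

class DCTActivation(nn.Module):
    """Activation shaped by a few trainable DCT coefficients (illustrative)."""

    def __init__(self, num_coeffs: int = 8):
        super().__init__()
        self.coeffs = nn.Parameter(torch.zeros(num_coeffs))
        with torch.no_grad():
            self.coeffs[1] = 1.0  # start near a smooth, monotone shape

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        t = torch.sigmoid(z)  # squash to [0, 1] so the basis has fixed support
        k = torch.arange(self.coeffs.numel(), device=z.device, dtype=z.dtype)
        basis = torch.cos(math.pi * k * t.unsqueeze(-1))  # cosine basis terms
        return basis @ self.coeffs  # few parameters, fully gradient-friendly
```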
arXiv Detail & Related papers (2023-07-02T21:46:30Z) - ASU-CNN: An Efficient Deep Architecture for Image Classification and
Feature Visualizations [0.0]
Activation functions play a decisive role in determining the capacity of Deep Neural Networks.
In this paper, a Convolutional Neural Network model named ASU-CNN is proposed.
The network achieved promising results on both training and testing data for the classification of CIFAR-10.
arXiv Detail & Related papers (2023-05-28T16:52:25Z) - Implicit Stochastic Gradient Descent for Training Physics-informed
Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
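On a toy quadratic, the stabilizing effect of an implicit step is easy to see (a sketch under assumptions, not the paper's PINN training loop): the implicit update evaluates the gradient at the *new* iterate, theta_new = theta - lr * grad(theta_new), which for a quadratic loss has a closed-form solve and stays stable at step sizes where the explicit update diverges.

```python
import numpy as np

A = np.diag([3.0, 100.0])   # stiff curvature of the kind that traps explicit GD
theta = np.array([1.0, 1.0])
lr = 0.1                    # explicit GD diverges here: |1 - lr * 100| = 9 > 1

for _ in range(50):
    # Implicit step: solve theta_new = theta - lr * A @ theta_new, i.e.
    # (I + lr * A) theta_new = theta.  General losses need an inner solver.
    theta = np.linalg.solve(np.eye(2) + lr * A, theta)

print(theta)  # contracts toward the minimum at the origin at every step
```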
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach that learns discriminative shrinkage functions to implicitly model the data and regularization terms of the deconvolution objective.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
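For intuition, shrinkage functions of this kind generalize the soft-thresholding operator that arises from an L1 prior; a minimal learnable version might look like the sketch below (the per-channel threshold and its parameterization are assumptions, far simpler than the paper's learned discriminative functions).

```python
import torch
import torch.nn as nn

class LearnableShrinkage(nn.Module):
    """Per-channel soft-thresholding with a trainable threshold."""

    def __init__(self, channels: int):
        super().__init__()
        self.tau = nn.Parameter(torch.full((channels, 1, 1), 0.1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        tau = torch.relu(self.tau)  # keep the threshold non-negative
        return torch.sign(x) * torch.relu(x.abs() - tau)

# Placeholder usage on a feature map of shape (batch, channels, H, W):
feats = torch.randn(1, 16, 32, 32)
print(LearnableShrinkage(16)(feats).shape)
```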
arXiv Detail & Related papers (2021-11-27T12:12:57Z) - A simple normative network approximates local non-Hebbian learning in
the cortex [12.940770779756482]
Neuroscience experiments demonstrate that the processing of sensory inputs by cortical neurons is modulated by instructive signals.
Here, adopting a normative approach, we model these instructive signals as supervisory inputs guiding the projection of the feedforward data.
Online algorithms can be implemented by neural networks whose synaptic learning rules resemble calcium plateau potential dependent plasticity observed in the cortex.
arXiv Detail & Related papers (2020-10-23T20:49:44Z) - Self-Organized Operational Neural Networks for Severe Image Restoration
Problems [25.838282412957675]
Discriminative learning based on convolutional neural networks (CNNs) aims to perform image restoration by learning from training examples of noisy-clean image pairs.
We claim that this is due to the inherent linear nature of convolution-based transformation, which is inadequate for handling severe restoration problems.
We propose a self-organizing variant of operational neural networks (ONNs), Self-ONNs, for image restoration, which synthesizes novel nodal transformations on-the-fly.
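A rough sketch of the generative-neuron idea behind Self-ONNs, which replace the fixed linear nodal operator with a learnable truncated (Maclaurin-style) power series per connection; the layer below is a simplified illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn

class SelfONNLinear(nn.Module):
    """Fully-connected layer whose 'weights' are per-connection polynomials."""

    def __init__(self, in_features: int, out_features: int, q: int = 3):
        super().__init__()
        # One weight tensor per power term x^1 ... x^q of the series.
        self.weight = nn.Parameter(0.1 * torch.randn(q, out_features, in_features))
        self.q = q

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Stack x, x^2, ..., x^q and mix each power with its own weights,
        # letting training synthesize the nodal transformation on the fly.
        powers = torch.stack([x ** (k + 1) for k in range(self.q)], dim=0)
        return torch.einsum('qoi,qbi->bo', self.weight, powers)
```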
arXiv Detail & Related papers (2020-08-29T02:19:41Z) - Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
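A hedged sketch of one standard way to approximate such a Hessian norm, via power iteration on Hessian-vector products (the model, loss, and iteration count are placeholders, and this is a generic recipe rather than the paper's exact estimator):

```python
import torch

def hessian_spectral_norm(loss, params, iters=20):
    """Estimate the largest Hessian eigenvalue magnitude by power iteration."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    v = torch.randn_like(flat)
    v /= v.norm()
    est = flat.new_zeros(())
    for _ in range(iters):
        # Hessian-vector product: differentiate (grad . v) w.r.t. the params.
        hv = torch.autograd.grad(flat @ v, params, retain_graph=True)
        hv = torch.cat([h.reshape(-1) for h in hv])
        est = hv.norm()
        v = hv / (est + 1e-12)
    return est.item()

# Placeholder usage on a tiny model and batch:
model = torch.nn.Linear(10, 1)
x, y = torch.randn(32, 10), torch.randn(32, 1)
loss = ((model(x) - y) ** 2).mean()
print(hessian_spectral_norm(loss, list(model.parameters())))
```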
arXiv Detail & Related papers (2020-04-20T18:12:56Z)