Forward-Forward Algorithm for Hyperspectral Image Classification: A
Preliminary Study
- URL: http://arxiv.org/abs/2307.00231v1
- Date: Sat, 1 Jul 2023 05:39:28 GMT
- Title: Forward-Forward Algorithm for Hyperspectral Image Classification: A
Preliminary Study
- Authors: Sidike Paheding and Abel A. Reyes-Angulo
- Abstract summary: The forward-forward algorithm (FFA) computes local goodness functions to optimize network parameters.
This study investigates the application of FFA for hyperspectral image classification.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The back-propagation algorithm has long been the de facto standard
for optimizing weights and biases in neural networks, particularly in cutting-edge
deep learning models. Its widespread adoption in fields like natural language
processing, computer vision, and remote sensing has revolutionized automation
in various tasks. The popularity of back-propagation stems from its ability to
achieve outstanding performance in tasks such as classification, detection, and
segmentation. Nevertheless, back-propagation is not without its limitations,
encompassing sensitivity to initial conditions, vanishing gradients,
overfitting, and computational complexity. The recent introduction of a
forward-forward algorithm (FFA), which computes local goodness functions to
optimize network parameters, alleviates the dependence on substantial
computational resources and the constant need for architectural scaling. This
study investigates the application of FFA for hyperspectral image
classification. Experimental results and a comparative analysis against the
traditional back-propagation algorithm are provided. Preliminary results show
the potential of FFA and its promise.
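To make the FFA mechanic concrete, below is a minimal sketch of a single
forward-forward layer in the spirit of Hinton's formulation: each layer is
trained locally so that a "goodness" score (here, the sum of squared
activations) is high for positive samples and low for negative ones, with no
backward pass crossing layer boundaries. The layer sizes, threshold, and
optimizer are illustrative assumptions, not the paper's settings; for
classification, positive/negative samples are typically constructed by
embedding correct vs. incorrect labels into the input.

```python
# Minimal forward-forward layer sketch (assumed formulation, not the
# paper's exact setup). Requires PyTorch.
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    def __init__(self, d_in, d_out, threshold=2.0, lr=0.03):
        super().__init__()
        self.linear = nn.Linear(d_in, d_out)
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=lr)

    def goodness(self, x):
        # Goodness = sum of squared ReLU activations.
        return torch.relu(self.linear(x)).pow(2).sum(dim=1)

    def train_step(self, x_pos, x_neg):
        # Local objective: push goodness(pos) above the threshold and
        # goodness(neg) below it; no gradient leaves this layer.
        g_pos, g_neg = self.goodness(x_pos), self.goodness(x_neg)
        loss = torch.nn.functional.softplus(
            torch.cat([self.threshold - g_pos, g_neg - self.threshold])
        ).mean()
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        # Normalize and detach outputs before feeding the next layer, so
        # later layers cannot simply reuse the raw goodness information.
        with torch.no_grad():
            h = torch.relu(self.linear(torch.cat([x_pos, x_neg])))
            return loss.item(), h / (h.norm(dim=1, keepdim=True) + 1e-8)
```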
Related papers
- Optimal feature rescaling in machine learning based on neural networks [0.0]
An optimal rescaling of input features (OFR) is carried out by a Genetic Algorithm (GA).
The OFR reshapes the input space, improving the conditioning of the gradient-based algorithm used for training.
The approach has been tested on an FFNN modeling the outcome of a real industrial process.
arXiv Detail & Related papers (2024-02-13T21:57:31Z)
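As a heavily simplified illustration of the OFR idea above, the sketch below
evolves a vector of per-feature scale factors with a toy genetic algorithm.
The ridge-regression fitness proxy, population size, and mutation scheme are
assumptions for illustration, not the paper's actual GA or FFNN setup.

```python
# Toy GA for optimal feature rescaling (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

def fitness(scales, X, y):
    # Proxy fitness: ridge-regression fit quality on rescaled features.
    Xs = X * scales
    w = np.linalg.solve(Xs.T @ Xs + 1e-3 * np.eye(X.shape[1]), Xs.T @ y)
    return -np.mean((Xs @ w - y) ** 2)  # higher is better

def ga_rescale(X, y, pop=20, gens=30):
    population = rng.uniform(0.1, 2.0, size=(pop, X.shape[1]))
    for _ in range(gens):
        scores = np.array([fitness(p, X, y) for p in population])
        parents = population[np.argsort(scores)[-pop // 2:]]        # selection
        children = parents[rng.integers(len(parents), size=pop - len(parents))]
        children = children * rng.normal(1.0, 0.1, children.shape)  # mutation
        population = np.vstack([parents, children])
    return max(population, key=lambda p: fitness(p, X, y))

X, y = rng.normal(size=(200, 8)), rng.normal(size=200)
best_scales = ga_rescale(X, y)
```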
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like FF, does not rely on BP optimization.
Unlike FF, our framework directly outputs label distributions at each cascaded block and does not require the generation of additional negative samples.
In our framework each block can be trained independently, so it can be easily deployed on parallel acceleration systems.
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
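The CaFo entry above admits a compact sketch: each block owns a small
prediction head trained with a local cross-entropy loss, and activations are
detached between blocks so no end-to-end backpropagation occurs. The block
widths, optimizer, and toy data below are assumptions for illustration.

```python
# Cascaded-Forward-style blocks with local label predictors (a sketch,
# not the paper's architecture). Requires PyTorch.
import torch
import torch.nn as nn

class CaFoBlock(nn.Module):
    def __init__(self, d_in, d_hidden, n_classes):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.head = nn.Linear(d_hidden, n_classes)  # local label predictor
        self.opt = torch.optim.Adam(self.parameters(), lr=1e-3)

    def train_step(self, x, y):
        h = self.body(x)
        loss = nn.functional.cross_entropy(self.head(h), y)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()
        return h.detach()  # detach: the next block trains independently

blocks = [CaFoBlock(64, 128, 10), CaFoBlock(128, 128, 10)]
x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
for block in blocks:
    x = block.train_step(x, y)  # no backprop across block boundaries
```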
- SA-CNN: Application to text categorization issues using simulated annealing-based convolutional neural network optimization [0.0]
Convolutional neural networks (CNNs) are a representative class of deep learning algorithms.
We introduce SA-CNN, a simulated annealing-optimized network based on Text-CNN, for text classification tasks.
arXiv Detail & Related papers (2023-03-13T14:27:34Z)
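To make the simulated-annealing component concrete, here is a generic SA loop
over two hypothetical Text-CNN hyperparameters (filter count and kernel size).
The evaluate() stub stands in for training and validating the actual network,
and the schedule constants are illustrative.

```python
# Generic simulated annealing over hyperparameters (illustrative stub).
import math
import random

def evaluate(cfg):
    # Stand-in for "train Text-CNN with cfg, return validation accuracy".
    return -((cfg["filters"] - 128) ** 2) / 1e4 - (cfg["kernel"] - 4) ** 2 / 10

def neighbor(cfg):
    cand = dict(cfg)
    cand["filters"] = max(16, cand["filters"] + random.choice([-16, 16]))
    cand["kernel"] = min(7, max(2, cand["kernel"] + random.choice([-1, 1])))
    return cand

def anneal(cfg, t0=1.0, cooling=0.95, steps=200):
    score, t = evaluate(cfg), t0
    for _ in range(steps):
        cand = neighbor(cfg)
        cand_score = evaluate(cand)
        # Accept worse configurations with probability exp(delta / t).
        if cand_score > score or random.random() < math.exp((cand_score - score) / t):
            cfg, score = cand, cand_score
        t *= cooling
    return cfg, score

best_cfg, best_score = anneal({"filters": 64, "kernel": 3})
```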
- Towards Theoretically Inspired Neural Initialization Optimization [66.04735385415427]
We propose a differentiable quantity, named GradCosine, with theoretical insights to evaluate the initial state of a neural network.
We show that both the training and test performance of a network can be improved by maximizing GradCosine under a norm constraint.
Generalized from the sample-wise analysis to the real batch setting, the resulting Neural Initialization Optimization (NIO) algorithm automatically finds a better initialization at negligible cost.
arXiv Detail & Related papers (2022-10-12T06:49:16Z)
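A GradCosine-like quantity can be approximated in a few lines: compute
per-sample gradients at initialization and average their pairwise cosine
similarities. This is a hedged reading of the entry above; the actual NIO
objective also involves a norm constraint that is omitted here.

```python
# Mean pairwise cosine similarity of per-sample gradients (a sketch of a
# GradCosine-style criterion). Requires PyTorch.
import torch
import torch.nn as nn

def grad_vector(model, loss):
    # Flatten one sample's loss gradient over all parameters.
    grads = torch.autograd.grad(loss, list(model.parameters()))
    return torch.cat([g.flatten() for g in grads])

def grad_cosine(model, xs, ys, loss_fn):
    gs = torch.stack([
        grad_vector(model, loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)))
        for x, y in zip(xs, ys)
    ])
    gs = gs / gs.norm(dim=1, keepdim=True)
    sim = gs @ gs.T
    n = len(gs)
    return (sim.sum() - n) / (n * (n - 1))  # mean off-diagonal cosine

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
xs, ys = torch.randn(8, 10), torch.randint(0, 3, (8,))
print(grad_cosine(model, xs, ys, nn.functional.cross_entropy))
```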
- Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z)
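The activity-perturbation trick in the entry above can be sketched for a
single linear layer: sample a random tangent on the activations, take a
forward-mode directional derivative of a local loss (no backward pass through
the network), and scale the tangent by it to obtain an unbiased gradient
estimate. The local squared-error loss and the shapes are invented for
illustration; torch.func.jvp requires PyTorch 2.x.

```python
# Activity-perturbed forward gradient for one linear layer (a sketch).
import torch

def forward_gradient_step(W, x, y, lr=0.1):
    def local_loss(h):
        # Stand-in local objective attached to this layer's activations.
        return ((h - y) ** 2).mean()

    h = x @ W.T                    # activations of one linear layer
    v = torch.randn_like(h)        # random perturbation on activations
    _, dd = torch.func.jvp(local_loss, (h,), (v,))  # directional derivative
    g_h = dd * v                   # unbiased estimate of dL/dh (E[v v^T] = I)
    g_W = g_h.T @ x                # chain rule from activations to weights
    return W - lr * g_W

W = torch.randn(4, 8) * 0.1
x, y = torch.randn(32, 8), torch.randn(32, 4)
for _ in range(200):
    W = forward_gradient_step(W, x, y)
```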
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms in the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are better suited to active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Analytically Tractable Inference in Deep Neural Networks [0.0]
The Tractable Approximate Gaussian Inference (TAGI) algorithm was shown to be a viable and scalable alternative to backpropagation for shallow fully-connected neural networks.
We demonstrate that TAGI matches or exceeds the performance of backpropagation for training classic deep neural network architectures.
arXiv Detail & Related papers (2021-03-09T14:51:34Z)
- Phase Retrieval using Expectation Consistent Signal Recovery Algorithm based on Hypernetwork [73.94896986868146]
Phase retrieval (PR) is an important component in modern computational imaging systems.
Recent advances in deep learning have opened up new possibilities for robust and fast PR.
We develop a novel framework for deep unfolding to overcome the existing limitations.
arXiv Detail & Related papers (2021-01-12T08:36:23Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
However, the use of gradient-based methods combined with nonconvexity renders learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are trained with random initializations.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
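As a minimal illustration of why layer fusion is possible at all, the snippet
below collapses two neighboring affine layers into one. The paper's actual
contribution, deriving an MSE-optimal initialization from fused randomly
initialized layers, goes well beyond this basic identity.

```python
# Composition of two affine maps is itself affine: W2(W1 x + b1) + b2.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(64, 32)), rng.normal(size=64)
W2, b2 = rng.normal(size=(16, 64)), rng.normal(size=16)

W_fused = W2 @ W1
b_fused = W2 @ b1 + b2

x = rng.normal(size=32)
assert np.allclose(W2 @ (W1 @ x + b1) + b2, W_fused @ x + b_fused)
```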