Grad-Instructor: Universal Backpropagation with Explainable Evaluation Neural Networks for Meta-learning and AutoML
- URL: http://arxiv.org/abs/2406.10559v1
- Date: Sat, 15 Jun 2024 08:37:51 GMT
- Title: Grad-Instructor: Universal Backpropagation with Explainable Evaluation Neural Networks for Meta-learning and AutoML
- Authors: Ryohei Ino
- Abstract summary: An Evaluation Neural Network (ENN) is trained via deep reinforcement learning to predict the performance of the target network.
The ENN then works as an additional evaluation function during backpropagation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a novel method for autonomously enhancing deep neural network training. My approach employs an Evaluation Neural Network (ENN) trained via deep reinforcement learning to predict the performance of the target network. The ENN then works as an additional evaluation function during backpropagation. Computational experiments with Multi-Layer Perceptrons (MLPs) demonstrate the method's effectiveness. By processing input data at 0.15^2 times its original resolution, the ENNs facilitate efficient inference. Results indicate that MLPs trained with the proposed method achieved a mean test accuracy of 93.02%, which is 2.8% higher than those trained solely with conventional backpropagation or with L1 regularization. The proposed method's test accuracy is comparable to that of networks initialized with He initialization, while reducing the difference between test and training errors. These improvements are achieved without increasing the number of epochs, thus avoiding the risk of overfitting. Additionally, the proposed method dynamically adjusts gradient magnitudes according to the training stage. The optimal ENN for enhancing MLPs can be predicted, reducing the time spent exploring optimal training methodologies. The explainability of ENNs is also analyzed using Grad-CAM, demonstrating their ability to visualize evaluation bases and supporting the Strong Lottery Ticket hypothesis.
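The abstract leaves the exact interface between the ENN and the target network unspecified. Below is a minimal PyTorch sketch of the general idea, assuming 28x28 grayscale inputs (the dataset is not stated), that the ENN is pre-trained and frozen, that it scores a 4x4 downscaled copy of the input (roughly 0.15 times the original side length, i.e. about 0.15^2 of the pixels) together with the target MLP's logits, and that its score is simply added to the task loss with a weight `lam`. All of these choices are illustrative assumptions, not the paper's actual design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Target MLP for 28x28 grayscale inputs and 10 classes (assumed setup).
target_net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(), nn.Linear(256, 10))

# Hypothetical ENN: scores a 4x4 downscaled image together with the target's logits.
enn = nn.Sequential(nn.Linear(16 + 10, 64), nn.ReLU(), nn.Linear(64, 1))
for p in enn.parameters():
    p.requires_grad_(False)  # ENN assumed pre-trained (e.g. via deep RL) and frozen

opt = torch.optim.SGD(target_net.parameters(), lr=0.1)
lam = 0.1  # weight of the ENN term; an assumed hyperparameter

def train_step(x, y):
    """One training step where the frozen ENN acts as an extra evaluation term."""
    logits = target_net(x)
    task_loss = F.cross_entropy(logits, y)
    # Downscale the input to roughly 0.15x per side before feeding the ENN.
    x_small = F.interpolate(x, size=(4, 4), mode="bilinear", align_corners=False)
    enn_score = enn(torch.cat([x_small.flatten(1), logits], dim=1)).mean()
    loss = task_loss - lam * enn_score  # higher ENN score = better predicted performance
    opt.zero_grad()
    loss.backward()  # the ENN's gradient augments ordinary backpropagation
    opt.step()
    return task_loss.item()
```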
Related papers
- A Contrastive Symmetric Forward-Forward Algorithm (SFFA) for Continual Learning Tasks [7.345136916791223]
The Forward-Forward Algorithm (FFA) has recently gained momentum as an alternative to the conventional backpropagation algorithm for neural network learning.
This work proposes the Symmetric Forward-Forward Algorithm (SFFA), a novel modification of the original FFA which partitions each layer into positive and negative neurons.
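As background for this entry, here is a minimal layer-local training step in the style of the original Forward-Forward Algorithm (Hinton's goodness objective); the SFFA-specific partition of each layer into positive and negative neurons is not reproduced, and the threshold and layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# One layer trained locally with a Forward-Forward "goodness" objective;
# no gradient ever flows between layers, unlike backpropagation.
layer = nn.Linear(784, 500)
opt = torch.optim.Adam(layer.parameters(), lr=1e-3)
theta = 2.0  # goodness threshold (assumed)

def ffa_step(x_pos, x_neg):
    # Goodness = mean squared activation; push it above theta for positive
    # (real) data and below theta for negative (corrupted) data.
    g_pos = F.relu(layer(x_pos)).pow(2).mean(dim=1)
    g_neg = F.relu(layer(x_neg)).pow(2).mean(dim=1)
    loss = F.softplus(theta - g_pos).mean() + F.softplus(g_neg - theta).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```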
arXiv Detail & Related papers (2024-09-11T16:21:44Z)
- Enhancing Deep Neural Network Training Efficiency and Performance through Linear Prediction [0.0]
Deep neural networks (DNNs) have achieved remarkable success in various fields, including computer vision and natural language processing.
This paper proposes a method to optimize DNN training effectiveness, with the goal of improving model performance.
arXiv Detail & Related papers (2023-10-17T03:11:30Z)
- KAKURENBO: Adaptively Hiding Samples in Deep Neural Network Training [2.8804804517897935]
We propose a method for hiding the least-important samples during the training of deep neural networks.
We adaptively find samples to exclude in a given epoch based on their contribution to the overall learning process.
Our method can reduce total training time by up to 22% while impacting accuracy by only 0.4% compared to the baseline.
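A rough sketch of the idea under a simplifying assumption: per-sample loss from the previous epoch stands in for a sample's "contribution to learning", and the lowest-loss fraction is hidden for the next epoch. KAKURENBO's actual importance measure and adaptive schedule may differ.

```python
import numpy as np

def select_visible_indices(per_sample_loss, hide_fraction=0.3):
    """Return indices of samples to keep for the next epoch, hiding the
    lowest-loss (assumed least-important) fraction of the dataset."""
    n_hide = int(hide_fraction * len(per_sample_loss))
    order = np.argsort(per_sample_loss)   # ascending: smallest loss first
    return np.sort(order[n_hide:])        # train only on the remaining samples

# Example: per-sample losses measured on the previous epoch for 8 samples.
losses = np.array([0.02, 1.3, 0.01, 0.8, 0.05, 2.1, 0.4, 0.03])
print(select_visible_indices(losses, hide_fraction=0.25))  # hides samples 2 and 0
```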
arXiv Detail & Related papers (2023-10-16T06:19:29Z)
- Improving Music Performance Assessment with Contrastive Learning [78.8942067357231]
This study investigates contrastive learning as a potential method to improve existing Music Performance Assessment (MPA) systems.
We introduce a weighted contrastive loss suitable for regression tasks applied to a convolutional neural network.
Our results show that contrastive-learning-based methods are able to match and exceed state-of-the-art performance for MPA regression tasks.
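A generic weighted contrastive loss for regression targets might look like the following sketch; the exact weighting and formulation used in the paper are not given in this summary, so the form below is an assumption.

```python
import torch
import torch.nn.functional as F

def weighted_contrastive_loss(embeddings, scores, margin=1.0):
    """Pairs with similar performance scores are pulled together, dissimilar
    pairs are pushed at least `margin` apart, and each pair is weighted by
    how far apart its scores are (an assumed weighting, for illustration)."""
    dist = torch.cdist(embeddings, embeddings)        # pairwise embedding distances
    gap = (scores[:, None] - scores[None, :]).abs()   # pairwise score differences
    w = gap / (gap.max() + 1e-8)                      # 0 = identical scores, 1 = most different
    pull = (1.0 - w) * dist.pow(2)                    # similar pairs: shrink distance
    push = w * F.relu(margin - dist).pow(2)           # dissimilar pairs: enforce a margin
    return (pull + push).mean()

# Example: 4 recordings embedded in 8 dimensions with continuous quality scores.
emb, y = torch.randn(4, 8), torch.tensor([0.2, 0.9, 0.4, 0.7])
loss = weighted_contrastive_loss(emb, y)
```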
arXiv Detail & Related papers (2021-08-03T19:24:25Z)
- Gradient-Free Neural Network Training via Synaptic-Level Reinforcement Learning [0.0]
It is widely believed that there is a consistent, synaptic-level learning mechanism in specific brain regions that actualizes learning.
Here we propose an algorithm based on reinforcement learning to generate and apply a simple synaptic-level learning policy.
The robustness and lack of reliance on gradients open the door to new techniques for training difficult-to-differentiate neural networks.
arXiv Detail & Related papers (2021-05-29T22:26:18Z)
- A Simple Fine-tuning Is All You Need: Towards Robust Deep Learning Via Adversarial Fine-tuning [90.44219200633286]
We propose a simple yet very effective adversarial fine-tuning approach based on a 'slow start, fast decay' learning rate scheduling strategy.
Experimental results show that the proposed adversarial fine-tuning approach outperforms the state-of-the-art methods on CIFAR-10, CIFAR-100 and ImageNet datasets.
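The summary does not give the exact schedule, but a 'slow start, fast decay' learning-rate rule can be sketched as a short linear warm-up followed by a rapid exponential decay; the warm-up fraction and decay constant below are assumed values, not the authors' settings.

```python
import math

def slow_start_fast_decay_lr(step, total_steps, lr_max=0.01, warmup_frac=0.1, decay=5.0):
    """Illustrative schedule: linear warm-up over the first `warmup_frac` of
    training (slow start), then exponential decay (fast decay)."""
    warmup_steps = max(1, int(warmup_frac * total_steps))
    if step < warmup_steps:
        return lr_max * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return lr_max * math.exp(-decay * progress)

# Example: learning rates for a 100-step adversarial fine-tuning run.
lrs = [slow_start_fast_decay_lr(s, 100) for s in range(100)]
```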
arXiv Detail & Related papers (2020-12-25T20:50:15Z)
- Adaptive Gradient Method with Resilience and Momentum [120.83046824742455]
We propose an Adaptive Gradient Method with Resilience and Momentum (AdaRem)
AdaRem adjusts the parameter-wise learning rate according to whether the direction in which a parameter changed in the past is aligned with the direction of the current gradient.
Our method outperforms previous adaptive learning-rate-based algorithms in terms of training speed and test error.
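A toy sketch of the underlying mechanism, not the published AdaRem update rule: track an exponential moving average of past parameter changes and scale each parameter's step up or down depending on whether the current gradient points the same way.

```python
import numpy as np

def adarem_like_step(w, grad, change_ema, lr=0.01, beta=0.9, strength=0.5):
    """Scale each parameter's step by the agreement between its past change
    direction (an exponential moving average) and the proposed new step."""
    agreement = np.sign(change_ema) * np.sign(-grad)   # +1 aligned, -1 opposed, 0 no history
    per_param_lr = lr * (1.0 + strength * agreement)   # larger step when directions agree
    step = -per_param_lr * grad
    change_ema = beta * change_ema + (1.0 - beta) * step
    return w + step, change_ema

# Toy usage on a 3-parameter model over two gradient evaluations.
w, ema = np.zeros(3), np.zeros(3)
for g in (np.array([0.2, -0.1, 0.3]), np.array([0.1, 0.2, 0.3])):
    w, ema = adarem_like_step(w, g, ema)
```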
arXiv Detail & Related papers (2020-10-21T14:49:00Z)
- Cross Learning in Deep Q-Networks [82.20059754270302]
We propose a novel cross Q-learning algorithm aimed at alleviating the well-known overestimation problem in value-based reinforcement learning methods.
Our algorithm builds on double Q-learning by maintaining a set of parallel models and estimating the Q-value based on a randomly selected network.
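A minimal sketch of the target computation with a set of parallel Q-networks, where a randomly selected member evaluates the next state; the network sizes and the exact target rule are illustrative assumptions rather than the paper's specification.

```python
import random
import torch
import torch.nn as nn

K, obs_dim, n_actions, gamma = 4, 8, 3, 0.99
# A set of K parallel Q-networks; each update evaluates the next state with a
# randomly selected member, in the spirit of double Q-learning's decoupling.
q_nets = [nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
          for _ in range(K)]

def td_target(reward, next_obs, done):
    """Bootstrapped target computed from one randomly chosen Q-network."""
    with torch.no_grad():
        next_q = random.choice(q_nets)(next_obs).max(dim=1).values
    return reward + gamma * (1.0 - done) * next_q

# Example: a batch of 5 transitions.
target = td_target(torch.zeros(5), torch.randn(5, obs_dim), torch.zeros(5))
```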
arXiv Detail & Related papers (2020-09-29T04:58:17Z)
- RIFLE: Backpropagation in Depth for Deep Transfer Learning through Re-Initializing the Fully-connected LayEr [60.07531696857743]
Fine-tuning a deep convolutional neural network (CNN) using a pre-trained model helps transfer knowledge learned from larger datasets to the target task.
We propose RIFLE, a strategy that deepens backpropagation in transfer-learning settings by re-initializing the fully-connected layer.
RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning.
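A skeleton of the idea, assuming the re-initialization happens periodically during fine-tuning; the period, initializer, and backbone below are placeholders, not the paper's settings.

```python
import torch.nn as nn

# `backbone` stands in for a pre-trained feature extractor; `fc` is the
# classification head being fine-tuned for the target task.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
fc = nn.Linear(16, 10)

def maybe_reinit_fc(epoch, period=10):
    """Re-initialize the fully-connected head every `period` epochs so that
    deeper backbone layers keep receiving substantial gradient."""
    if epoch > 0 and epoch % period == 0:
        nn.init.kaiming_normal_(fc.weight)
        nn.init.zeros_(fc.bias)
```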
arXiv Detail & Related papers (2020-07-07T11:27:43Z)
- TRP: Trained Rank Pruning for Efficient Deep Neural Networks [69.06699632822514]
We propose Trained Rank Pruning (TRP), which alternates between low rank approximation and training.
Nuclear-norm regularization optimized by sub-gradient descent is utilized to further promote low rank in TRP.
The TRP-trained network inherently has a low-rank structure and can be approximated with negligible performance loss.
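A sketch of the two ingredients on a single weight matrix: a truncated-SVD projection for the low-rank step and the nuclear-norm sub-gradient for the regularization step. The step sizes, target rank, and alternation schedule are assumptions.

```python
import torch

def truncate_rank(weight, rank):
    """Project a weight matrix onto its best rank-`rank` approximation
    (truncated SVD), i.e. the low-rank step alternated with training."""
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U[:, :rank] @ torch.diag(S[:rank]) @ Vh[:rank, :]

def nuclear_norm_subgradient(weight):
    """A sub-gradient of the nuclear norm ||W||_* (sum of singular values)."""
    U, _, Vh = torch.linalg.svd(weight, full_matrices=False)
    return U @ Vh

# Illustrative alternating update on a single weight matrix.
W = torch.randn(64, 32)
task_grad = torch.randn_like(W)                     # stand-in for the task-loss gradient
W = W - 0.01 * (task_grad + 1e-3 * nuclear_norm_subgradient(W))
W = truncate_rank(W, rank=8)
```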
arXiv Detail & Related papers (2020-04-30T03:37:36Z)
- A Hybrid Method for Training Convolutional Neural Networks [3.172761915061083]
We propose a hybrid method that uses both backpropagation and evolutionary strategies to train Convolutional Neural Networks.
We show that the proposed hybrid method is capable of improving upon regular training in the task of image classification.
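One way such a hybrid can be arranged is to alternate ordinary backpropagation steps with a simple evolutionary step that perturbs the weights and keeps the best candidate; the population size, noise scale, and schedule below are assumptions, not the paper's configuration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.05)

def backprop_step(x, y):
    """Ordinary gradient-based step."""
    loss = F.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

@torch.no_grad()
def evolution_step(x, y, population=8, sigma=0.01):
    """Gradient-free step: sample perturbed copies of the weights, keep the best."""
    best_loss = F.cross_entropy(model(x), y)
    best_state = copy.deepcopy(model.state_dict())
    for _ in range(population):
        model.load_state_dict(best_state)
        for p in model.parameters():
            p.add_(sigma * torch.randn_like(p))     # Gaussian perturbation
        loss = F.cross_entropy(model(x), y)
        if loss < best_loss:
            best_loss = loss
            best_state = copy.deepcopy(model.state_dict())
    model.load_state_dict(best_state)
```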
arXiv Detail & Related papers (2020-04-15T17:52:48Z)