Biologically-Motivated Deep Learning Method using Hierarchical
Competitive Learning
- URL: http://arxiv.org/abs/2001.01121v1
- Date: Sat, 4 Jan 2020 20:07:36 GMT
- Title: Biologically-Motivated Deep Learning Method using Hierarchical
Competitive Learning
- Authors: Takashi Shinozaki
- Abstract summary: I propose to introduce unsupervised competitive learning, which requires only forward-propagating signals, as a pre-training method for CNNs.
The proposed method could be useful for a variety of poorly labeled data, for example, time series or medical data.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study proposes a novel biologically-motivated learning method for deep
convolutional neural networks (CNNs). The combination of CNNs and
backpropagation (BP) learning is the most powerful method in recent machine
learning regimes. However, it requires a large amount of labeled data for
training, and this requirement can become a barrier for real-world applications.
To address this problem and utilize unlabeled data, I propose to introduce
unsupervised competitive learning, which requires only forward-propagating
signals, as a pre-training method for CNNs. The method was evaluated by image
discrimination tasks using the MNIST, CIFAR-10, and ImageNet datasets, and it
achieved state-of-the-art performance among biologically-motivated methods in
the ImageNet experiment. The results suggest that the method enables the
learning of higher-level representations solely from forward-propagating
signals, without a backward error signal, in the convolutional layers. The
proposed method could be useful for a variety of poorly labeled data, for
example, time series or medical data.
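The pre-training idea described in the abstract can be illustrated with a generic winner-take-all competitive learning rule, a minimal sketch of the general technique rather than the paper's exact algorithm; the function name `competitive_pretrain` and all hyperparameters here are hypothetical:

```python
import numpy as np

def competitive_pretrain(patches, n_filters=8, lr=0.05, epochs=5, seed=0):
    """Winner-take-all competitive learning on flattened image patches.

    Illustrative sketch only (not the paper's exact rule): each patch is
    assigned to the filter with the largest response, and only the winning
    filter's weights move toward the patch. Learning uses forward
    responses only; no backward error signal is required.
    """
    rng = np.random.default_rng(seed)
    dim = patches.shape[1]
    w = rng.normal(size=(n_filters, dim))            # random initial filters
    w /= np.linalg.norm(w, axis=1, keepdims=True)    # unit-norm rows
    for _ in range(epochs):
        for x in patches:
            responses = w @ x                        # forward pass only
            k = int(np.argmax(responses))            # winner takes all
            w[k] += lr * (x - w[k])                  # move winner toward input
            w[k] /= np.linalg.norm(w[k])             # renormalize winner
    return w

# Toy usage with random unit-norm "patches" (e.g. flattened 5x5 windows).
rng = np.random.default_rng(1)
patches = rng.normal(size=(200, 25))
patches /= np.linalg.norm(patches, axis=1, keepdims=True)
filters = competitive_pretrain(patches, n_filters=4)
print(filters.shape)  # (4, 25)
```

In a CNN setting, the learned filter bank would initialize the kernels of a convolutional layer before supervised fine-tuning.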
Related papers
- Gradient-Free Supervised Learning using Spike-Timing-Dependent Plasticity for Image Recognition [3.087000217989688]
An approach to supervised learning in spiking neural networks is presented using a gradient-free method combined with spike-timing-dependent plasticity for image recognition.
The proposed network architecture is scalable to multiple layers, enabling the development of more complex and deeper SNN models.
arXiv Detail & Related papers (2024-10-21T21:32:17Z)
- Data Efficient Contrastive Learning in Histopathology using Active Sampling [0.0]
Deep learning algorithms can provide robust quantitative analysis in digital pathology.
These algorithms require large amounts of annotated training data.
Self-supervised methods have been proposed to learn features using ad-hoc pretext tasks.
We propose a new method for actively sampling informative members from the training set using a small proxy network.
arXiv Detail & Related papers (2023-03-28T18:51:22Z)
- Learning from Data with Noisy Labels Using Temporal Self-Ensemble [11.245833546360386]
Deep neural networks (DNNs) have an enormous capacity to memorize noisy labels.
Current state-of-the-art methods present a co-training scheme that trains dual networks using samples associated with small losses.
We propose a simple yet effective robust training scheme that operates by training only a single network.
arXiv Detail & Related papers (2022-07-21T08:16:31Z)
- Adaptive Convolutional Dictionary Network for CT Metal Artifact Reduction [62.691996239590125]
We propose an adaptive convolutional dictionary network (ACDNet) for metal artifact reduction.
Our ACDNet can automatically learn the prior for artifact-free CT images via training data and adaptively adjust the representation kernels for each input CT image.
Our method inherits the clear interpretability of model-based methods and maintains the powerful representation ability of learning-based methods.
arXiv Detail & Related papers (2022-05-16T06:49:36Z)
- Neural Maximum A Posteriori Estimation on Unpaired Data for Motion Deblurring [87.97330195531029]
We propose a Neural Maximum A Posteriori (NeurMAP) estimation framework for training neural networks to recover blind motion information and sharp content from unpaired data.
The proposed NeurMAP can be applied to existing deblurring neural networks, and is the first framework that enables training image deblurring networks on unpaired datasets.
arXiv Detail & Related papers (2022-04-26T08:09:47Z)
- FF-NSL: Feed-Forward Neural-Symbolic Learner [70.978007919101]
This paper introduces a neural-symbolic learning framework, called Feed-Forward Neural-Symbolic Learner (FF-NSL).
FF-NSL integrates state-of-the-art ILP systems based on the Answer Set semantics, with neural networks, in order to learn interpretable hypotheses from labelled unstructured data.
arXiv Detail & Related papers (2021-06-24T15:38:34Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Knowledge Distillation By Sparse Representation Matching [107.87219371697063]
We propose Sparse Representation Matching (SRM) to transfer intermediate knowledge from one Convolutional Network (CNN) to another by utilizing sparse representation.
We formulate SRM as a neural processing block, which can be efficiently optimized using gradient descent and integrated into any CNN in a plug-and-play manner.
Our experiments demonstrate that SRM is robust to architectural differences between the teacher and student networks, and outperforms other KD techniques across several datasets.
arXiv Detail & Related papers (2021-03-31T11:47:47Z)
- STDP enhances learning by backpropagation in a spiking neural network [0.0]
The proposed method improves the accuracy without additional labeling when a small amount of labeled data is used.
It is possible to implement the proposed learning method for event-driven systems.
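The STDP rule referenced above can be illustrated with the textbook pair-based form, a generic sketch rather than that paper's exact formulation; the function name `stdp_update` and all parameter values are hypothetical:

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based spike-timing-dependent plasticity (generic textbook rule).

    If the presynaptic spike precedes the postsynaptic spike (dt > 0),
    the weight is potentiated; otherwise it is depressed. The magnitude
    decays exponentially with the spike-time difference.
    """
    dt = t_post - t_pre
    if dt > 0:
        w += a_plus * np.exp(-dt / tau)    # pre before post: strengthen
    else:
        w -= a_minus * np.exp(dt / tau)    # post before pre: weaken
    return float(np.clip(w, 0.0, 1.0))    # keep weight in [0, 1]

# A causal spike pair (pre at 10 ms, post at 15 ms) increases the weight.
w = stdp_update(0.5, t_pre=10.0, t_post=15.0)
print(w > 0.5)  # True
```

Because the rule depends only on local spike times, it suits event-driven hardware, which is the setting the summary above mentions.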
arXiv Detail & Related papers (2021-02-21T06:55:02Z)
- An Online Learning Algorithm for a Neuro-Fuzzy Classifier with Mixed-Attribute Data [9.061408029414455]
General fuzzy min-max neural network (GFMMNN) is one of the efficient neuro-fuzzy systems for data classification.
This paper proposes an extended online learning algorithm for the GFMMNN.
The proposed method can handle the datasets with both continuous and categorical features.
arXiv Detail & Related papers (2020-09-30T13:45:36Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs)
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.