Deep Features for CBIR with Scarce Data using Hebbian Learning
- URL: http://arxiv.org/abs/2205.08935v1
- Date: Wed, 18 May 2022 14:00:54 GMT
- Title: Deep Features for CBIR with Scarce Data using Hebbian Learning
- Authors: Gabriele Lagani, Davide Bacciu, Claudio Gallicchio, Fabrizio Falchi,
Claudio Gennaro, Giuseppe Amato
- Abstract summary: We study the performance of biologically inspired Hebbian learning algorithms in the development of feature extractors for Content Based Image Retrieval (CBIR) tasks.
Specifically, we consider a semi-supervised learning strategy in two steps: first, an unsupervised pre-training stage is performed with Hebbian learning on the image dataset; second, the network is fine-tuned with supervised Stochastic Gradient Descent (SGD) training.
- Score: 17.57322804741561
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Features extracted from Deep Neural Networks (DNNs) have proven to be very
effective in the context of Content Based Image Retrieval (CBIR). In recent
work, biologically inspired \textit{Hebbian} learning algorithms have shown
promise for DNN training. In this contribution, we study the performance of
such algorithms in the development of feature extractors for CBIR tasks.
Specifically, we consider a semi-supervised learning strategy in two steps:
first, an unsupervised pre-training stage is performed using Hebbian learning
on the image dataset; second, the network is fine-tuned using supervised
Stochastic Gradient Descent (SGD) training. For the unsupervised pre-training
stage, we explore the nonlinear Hebbian Principal Component Analysis (HPCA)
learning rule. For the supervised fine-tuning stage, we assume sample
efficiency scenarios, in which the amount of labeled samples is just a small
fraction of the whole dataset. Our experimental analysis, conducted on the
CIFAR10 and CIFAR100 datasets, shows that, when few labeled samples are
available, our Hebbian approach provides relevant improvements compared to
various alternative methods.
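To ground the unsupervised stage described above, the following is a minimal NumPy sketch of a nonlinear Hebbian PCA update in the style of Sanger's rule; the function name, learning rate, and the choice of tanh as the nonlinearity are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def hpca_update(W, x, lr=0.01, f=np.tanh):
    """One nonlinear Hebbian-PCA (Sanger-style) weight update.

    W : (num_neurons, input_dim) weight matrix of one layer
    x : (input_dim,) input sample (e.g., a flattened image patch)
    f : nonlinearity applied to the neuron activations

    Update: delta w_i = lr * y_i * (x - sum_{j<=i} y_j * w_j)
    """
    y = f(W @ x)                                  # neuron outputs
    recon = np.cumsum(y[:, None] * W, axis=0)     # sum_{j<=i} y_j * w_j
    W += lr * y[:, None] * (x[None, :] - recon)   # Sanger-style correction
    return W
```

With f set to the identity, this reduces to Sanger's Generalized Hebbian Algorithm, whose weight vectors converge to the principal components of the input distribution; in a convolutional network the same update would be applied patch-wise to each layer's inputs.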
Related papers
- Spanning Training Progress: Temporal Dual-Depth Scoring (TDDS) for Enhanced Dataset Pruning [50.809769498312434]
We propose a novel dataset pruning method termed as Temporal Dual-Depth Scoring (TDDS)
Our method achieves 54.51% accuracy with only 10% training data, surpassing random selection by 7.83% and other comparison methods by at least 12.69%.
arXiv Detail & Related papers (2023-11-22T03:45:30Z)
- Learning Deep Representations via Contrastive Learning for Instance Retrieval [11.736450745549792]
This paper makes the first attempt to tackle the problem using instance-discrimination-based contrastive learning (CL).
In this work, we approach this problem by exploring the capability of deriving discriminative representations from pre-trained and fine-tuned CL models.
arXiv Detail & Related papers (2022-09-28T04:36:34Z)
- Activation to Saliency: Forming High-Quality Labels for Unsupervised Salient Object Detection [54.92703325989853]
We propose a two-stage Activation-to-Saliency (A2S) framework that effectively generates high-quality saliency cues.
No human annotations are involved in our framework during the whole training process.
Our framework achieves significant performance gains over existing USOD methods.
arXiv Detail & Related papers (2021-12-07T11:54:06Z)
- Trash to Treasure: Harvesting OOD Data with Cross-Modal Matching for Open-Set Semi-Supervised Learning [101.28281124670647]
Open-set semi-supervised learning (open-set SSL) investigates a challenging but practical scenario where out-of-distribution (OOD) samples are contained in the unlabeled data.
We propose a novel training mechanism that could effectively exploit the presence of OOD data for enhanced feature learning.
Our approach substantially lifts the performance on open-set SSL and outperforms the state-of-the-art by a large margin.
arXiv Detail & Related papers (2021-08-12T09:14:44Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- Hebbian Semi-Supervised Learning in a Sample Efficiency Setting [10.026753669198108]
We propose a semi-supervised training strategy for Deep Convolutional Neural Networks (DCNNs).
All internal layers (both convolutional and fully connected) are pre-trained using an unsupervised approach based on Hebbian learning; a minimal sketch of this two-step pipeline follows this entry.
arXiv Detail & Related papers (2021-03-16T11:57:52Z)
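Since both this entry and the main paper rely on the same two-step recipe, here is a minimal PyTorch sketch of it; the loader names, epoch counts, optimizer settings, and the `hebbian_update` callback (which would implement a rule such as HPCA) are illustrative assumptions, not details from either paper.

```python
import torch
import torch.nn as nn

def train_semi_supervised(model, unlabeled_loader, labeled_loader,
                          hebbian_update, epochs_pre=10, epochs_ft=50):
    """Two-step scheme: (1) unsupervised Hebbian pre-training on all
    images, (2) supervised SGD fine-tuning on the few labeled ones."""
    # Step 1: local Hebbian updates need no gradient tracking.
    with torch.no_grad():
        for _ in range(epochs_pre):
            for x, _ in unlabeled_loader:
                hebbian_update(model, x)   # layer-wise local updates

    # Step 2: standard supervised fine-tuning with SGD.
    opt = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs_ft):
        for x, y in labeled_loader:
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
    return model
```

In the sample-efficiency setting studied by these papers, `labeled_loader` would draw from only a small fraction of the dataset, while `unlabeled_loader` covers all of it.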
- Training Convolutional Neural Networks With Hebbian Principal Component Analysis [10.026753669198108]
Hebbian learning can be used for training the lower or the higher layers of a neural network.
We use a nonlinear Hebbian Principal Component Analysis (HPCA) learning rule in place of the Hebbian Winner Takes All (HWTA) strategy.
In particular, the HPCA rule is used to train Convolutional Neural Networks in order to extract relevant features from the CIFAR-10 image dataset.
arXiv Detail & Related papers (2020-12-22T18:17:46Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
To keep training on the enlarged dataset tractable, we propose to apply a dataset distillation strategy that compresses the created dataset into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first-stage Matching-FCOS network and a second-stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
- A Deep Unsupervised Feature Learning Spiking Neural Network with Binarized Classification Layers for EMNIST Classification using SpykeFlow [0.0]
The unsupervised learning technique of spike-timing-dependent plasticity (STDP) with binary activations is used to extract features from spiking input data; a simplified STDP update is sketched after this entry.
The accuracies obtained for the balanced EMNIST data set compare favorably with other approaches.
arXiv Detail & Related papers (2020-02-26T23:47:35Z)
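As a companion to the STDP entry above, the following is a sketch of the simplified, trace-free STDP update commonly used in this line of spiking feature-learning work; the parameter values and function name are illustrative assumptions, and the paper's exact rule may differ.

```python
import numpy as np

def simplified_stdp(w, t_pre, t_post, a_plus=0.004, a_minus=0.003):
    """Simplified STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one, depress otherwise. The w * (1 - w) factor
    softly keeps each weight inside (0, 1)."""
    if t_pre <= t_post:
        dw = a_plus * w * (1.0 - w)    # causal pair: strengthen
    else:
        dw = -a_minus * w * (1.0 - w)  # anti-causal pair: weaken
    return np.clip(w + dw, 0.0, 1.0)
```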