A CNN-based Feature Space for Semi-supervised Incremental Learning in
Assisted Living Applications
- URL: http://arxiv.org/abs/2011.05734v1
- Date: Wed, 11 Nov 2020 12:31:48 GMT
- Title: A CNN-based Feature Space for Semi-supervised Incremental Learning in
Assisted Living Applications
- Authors: Tobias Scheck, Ana Perez Grassi, Gangolf Hirtz
- Abstract summary: We propose using the feature space that results from the training dataset to automatically label problematic images.
The resulting semi-supervised incremental learning process improves the classification accuracy on new instances by 40%.
- Score: 2.1485350418225244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A Convolutional Neural Network (CNN) is sometimes confronted with objects of
changing appearance (new instances) that exceed its generalization capability.
This requires the CNN to incorporate new knowledge, i.e., to learn
incrementally. In this paper, we are concerned with this problem in the context
of assisted living. We propose using the feature space that results from the
training dataset to automatically label problematic images that could not be
properly recognized by the CNN. The idea is to exploit the extra information in
the feature space for a semi-supervised labeling and to employ problematic
images to improve the CNN's classification model. Among other benefits, the
resulting semi-supervised incremental learning process improves the
classification accuracy on new instances by 40%, as illustrated by extensive
experiments.
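To make the idea concrete, the following is a minimal sketch of one way such feature-space labeling could look: features of the labelled training set are extracted from the CNN, problematic images are pseudo-labelled by a k-nearest-neighbour vote in that space, and the pseudo-labelled pairs become available for incremental fine-tuning. The network, the k-NN rule, and all names (FeatureCNN, pseudo_label) are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: pseudo-label "problematic" images by a k-NN vote in the
# CNN feature space built from the labelled training set, then reuse them for
# incremental fine-tuning. All names and shapes are illustrative.
import torch
import torch.nn as nn

class FeatureCNN(nn.Module):
    """Tiny CNN whose forward() exposes both the feature vector and the logits."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.fc = nn.Linear(16 * 4 * 4, n_classes)

    def forward(self, x):
        feats = self.conv(x)
        return feats, self.fc(feats)

@torch.no_grad()
def pseudo_label(model, train_x, train_y, problem_x, k=5):
    """Label unrecognized images by a majority vote of their k nearest training features."""
    model.eval()
    ref, _ = model(train_x)                                    # features of labelled data
    qry, _ = model(problem_x)                                  # features of problematic images
    dist = torch.cdist(qry, ref)                               # pairwise Euclidean distances
    nn_labels = train_y[dist.topk(k, largest=False).indices]   # labels of the k nearest refs
    return nn_labels.mode(dim=1).values                        # majority vote per image

model = FeatureCNN()
train_x, train_y = torch.randn(100, 3, 32, 32), torch.randint(0, 4, (100,))
problem_x = torch.randn(10, 3, 32, 32)
pseudo_y = pseudo_label(model, train_x, train_y, problem_x)
# (problem_x, pseudo_y) would then be added to the training set for an
# incremental fine-tuning step of the classifier.
```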
Related papers
- Understanding and Improving CNNs with Complex Structure Tensor: A Biometrics Study [47.03015281370405]
We show that the use of the Complex Structure Tensor, which contains compact orientation features with certainties, improves identification accuracy compared to using grayscale inputs alone.
This suggests that the upfront use of orientation features in CNNs, a strategy seen in mammalian vision, not only mitigates their limitations but also enhances their explainability and relevance to thin clients.
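As a point of reference, the following is a minimal sketch of the classic complex (double-angle) structure tensor, which packs local orientation and a certainty into one complex value per pixel; it is assumed here only as an illustration of orientation-plus-certainty inputs and is not necessarily the paper's exact pipeline.

```python
# Illustrative sketch: complex structure-tensor orientation features in the
# classic double-angle form (orientation + certainty). Standard formulation,
# not necessarily the paper's exact feature extraction.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def complex_structure_tensor(img, sigma=2.0):
    """Per-pixel complex response: angle = 2 * local orientation, magnitude = certainty."""
    gx = sobel(img, axis=1)                    # horizontal gradient
    gy = sobel(img, axis=0)                    # vertical gradient
    z = (gx + 1j * gy) ** 2                    # squaring doubles the angle, so opposite
                                               # gradient directions reinforce each other
    return gaussian_filter(z.real, sigma) + 1j * gaussian_filter(z.imag, sigma)

img = np.random.rand(64, 64)
z = complex_structure_tensor(img)
orientation = 0.5 * np.angle(z)                # dominant local orientation
certainty = np.abs(z)                          # confidence of that orientation
# Stacking (cos, sin, certainty) channels would give a CNN an orientation-aware input.
```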
arXiv Detail & Related papers (2024-04-24T02:51:13Z) - Transferability of Convolutional Neural Networks in Stationary Learning
Tasks [96.00428692404354]
We introduce a novel framework for efficient training of convolutional neural networks (CNNs) for large-scale spatial problems.
We show that a CNN trained on small windows of such signals achieves nearly the same performance on much larger windows without retraining.
Our results show that the CNN is able to tackle problems with many hundreds of agents after being trained with fewer than ten.
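The property that makes this possible is that a fully convolutional network has no size-dependent layers, so a model fixed to a small training window can be evaluated on arbitrarily large windows. A minimal illustration follows; the architecture is a stand-in, not the paper's model.

```python
# Sketch of the transferability property: a purely convolutional model trained
# on a small window runs unchanged on much larger inputs, because its weights
# are shared across spatial positions. Architecture is illustrative.
import torch
import torch.nn as nn

fcn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),             # per-position output, no flatten / FC layer
)

small = torch.randn(1, 1, 16, 16)    # window size used during training
large = torch.randn(1, 1, 256, 256)  # much larger deployment window

print(fcn(small).shape)   # torch.Size([1, 1, 16, 16])
print(fcn(large).shape)   # torch.Size([1, 1, 256, 256]) -- same weights, no retraining
```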
arXiv Detail & Related papers (2023-07-21T13:51:45Z) - A novel feature-scrambling approach reveals the capacity of
convolutional neural networks to learn spatial relations [0.0]
Convolutional neural networks (CNNs) are among the most successful computer vision systems for object recognition.
Yet it remains poorly understood how CNNs actually make their decisions, what the nature of their internal representations is, and how their recognition strategies differ from humans.
arXiv Detail & Related papers (2022-12-12T16:40:29Z) - Understanding CNN Fragility When Learning With Imbalanced Data [1.1444576186559485]
Convolutional neural networks (CNNs) have achieved impressive results on imbalanced image data, but they still have difficulty generalizing to minority classes.
We focus on their latent features to demystify CNN decisions on imbalanced data.
We show that important information regarding the ability of a neural network to generalize to minority classes resides in the class top-K CE and FE.
arXiv Detail & Related papers (2022-10-17T22:40:06Z) - Self-supervised Feature Enhancement: Applying Internal Pretext Task to
Supervised Learning [6.508466234920147]
We show that feature transformations within CNNs can also be regarded as supervisory signals to construct the self-supervised task.
Specifically, we first transform the internal feature maps by discarding different channels, and then define an additional internal pretext task to identify the discarded channels.
CNNs are trained to predict the joint labels generated by the combination of self-supervised labels and original labels.
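A simplified sketch of this kind of internal pretext task is given below: one of several fixed channel groups of an intermediate feature map is zeroed out, and an auxiliary head is trained to recognize which group was discarded, alongside the original classification loss. The grouping scheme and the separate auxiliary head are simplifying assumptions rather than the paper's joint-label formulation.

```python
# Simplified sketch of the internal pretext task: zero out one of several fixed
# channel groups in an intermediate feature map and train an auxiliary head to
# recognize which group was discarded, next to the original classification loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PretextCNN(nn.Module):
    def __init__(self, n_classes=10, n_groups=4, channels=32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, channels, 3, padding=1), nn.ReLU())
        self.pool = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.cls_head = nn.Linear(channels, n_classes)   # original supervised task
        self.aux_head = nn.Linear(channels, n_groups)    # which group was discarded?
        self.n_groups = n_groups

    def forward(self, x, group):
        f = self.backbone(x)
        g = f.shape[1] // self.n_groups
        mask = torch.ones_like(f)
        mask[:, group * g:(group + 1) * g] = 0           # discard one channel group
        h = self.pool(f * mask)
        return self.cls_head(h), self.aux_head(h)

model = PretextCNN()
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
group = torch.randint(0, 4, ()).item()                   # one group per batch, for brevity
logits, aux_logits = model(x, group)
loss = F.cross_entropy(logits, y) \
     + F.cross_entropy(aux_logits, torch.full((8,), group, dtype=torch.long))
loss.backward()
```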
arXiv Detail & Related papers (2021-06-09T08:59:35Z) - BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by
Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
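For orientation, the sketch below shows the fast gradient sign method (FGSM) as the simplest example of a white-box attack of the kind such robustness studies apply; it is not claimed to be the exact attack set used in the paper.

```python
# Illustrative white-box attack (FGSM): perturb the input in the direction that
# increases the loss, then measure accuracy on the perturbed inputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """Return adversarial examples crafted with a single signed gradient step."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

# Toy model and data just to exercise the function
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.rand(4, 3, 32, 32), torch.randint(0, 10, (4,))
x_adv = fgsm(model, x, y)
print((model(x_adv).argmax(1) == y).float().mean())  # accuracy under attack
```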
arXiv Detail & Related papers (2021-03-14T20:43:19Z) - The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
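A rough, generator-free sketch of this style of visualization is shown below: an input image is optimized directly under an activation term plus a feature-distance term toward a reference set. Both terms are generic stand-ins for the paper's dual-objective loss, not its exact formulation.

```python
# Rough sketch of generator-free visualization by direct input optimization.
# The activation and feature-distance terms are generic stand-ins, not the
# paper's exact dual-objective loss.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
backbone.requires_grad_(False)                    # the original model is left untouched

reference = torch.randn(8, 3, 64, 64)             # stand-in for the given image set
with torch.no_grad():
    ref_feats = backbone(reference).mean(dim=(0, 2, 3))   # average reference features

img = torch.randn(1, 3, 64, 64, requires_grad=True)
opt = torch.optim.Adam([img], lr=0.05)
for _ in range(200):
    feats = backbone(img)
    activation = feats.mean()                                        # drive the layer's response up
    distance = (feats.mean(dim=(0, 2, 3)) - ref_feats).pow(2).sum()  # stay close to the references
    loss = -activation + 0.1 * distance
    opt.zero_grad(); loss.backward(); opt.step()
# `img` now depicts features of the chosen layer shared by the reference images.
```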
arXiv Detail & Related papers (2021-01-29T07:46:39Z) - Decoding CNN based Object Classifier Using Visualization [6.666597301197889]
We visualize what types of features are extracted in different convolutional layers of a CNN.
Visualizing the heat map of activations helps us understand how the CNN classifies and localizes different objects in an image.
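A minimal sketch of such an activation heat map, assuming a hook-based extraction and simple channel averaging (one common way to implement it, not necessarily the paper's):

```python
# Minimal activation heat map: average a conv layer's feature maps and upsample
# them to image size to see which regions the CNN responds to.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
acts = {}
model[3].register_forward_hook(lambda m, i, o: acts.update(last=o))  # grab layer output

img = torch.rand(1, 3, 224, 224)
model(img)
heat = acts["last"].mean(dim=1, keepdim=True)            # channel-averaged activation
heat = F.interpolate(heat, size=img.shape[-2:], mode="bilinear", align_corners=False)
heat = (heat - heat.min()) / (heat.max() - heat.min())   # normalize to [0, 1] for overlay
```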
arXiv Detail & Related papers (2020-07-15T05:01:27Z) - RIFLE: Backpropagation in Depth for Deep Transfer Learning through
Re-Initializing the Fully-connected LayEr [60.07531696857743]
Fine-tuning a deep convolutional neural network (CNN) from a pre-trained model helps transfer knowledge learned from larger datasets to the target task.
We propose RIFLE, a strategy that deepens backpropagation in transfer learning settings by periodically re-initializing the fully-connected layer during fine-tuning.
RIFLE brings meaningful updates to the weights of deep CNN layers and improves low-level feature learning.
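A minimal sketch of the re-initialization schedule, with illustrative hyperparameters and a placeholder training loop:

```python
# Sketch of the RIFLE idea: during fine-tuning, periodically re-initialize the
# fully-connected layer so that later epochs push stronger gradients back into
# the convolutional layers. Model choice and schedule are illustrative.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=None)            # in practice, load pre-trained weights
model.fc = nn.Linear(model.fc.in_features, 5)    # new head for the target task
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

num_epochs, reinit_every = 30, 10
for epoch in range(num_epochs):
    if epoch > 0 and epoch % reinit_every == 0:
        model.fc.reset_parameters()              # RIFLE: re-initialize the FC layer
    # ... one standard fine-tuning epoch over the target dataset goes here ...
```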
arXiv Detail & Related papers (2020-07-07T11:27:43Z) - Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embeddings of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
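A minimal sketch of this idea, assuming a Gaussian blur applied to a conv layer's output with a linearly annealed standard deviation; the schedule and kernel size are illustrative choices.

```python
# Curriculum-by-smoothing sketch: low-pass filter a conv layer's output with a
# Gaussian kernel whose strength is annealed toward zero as training progresses.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(sigma, size=5):
    """2-D Gaussian kernel normalized to sum to one."""
    ax = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-ax**2 / (2 * sigma**2))
    k = torch.outer(g, g)
    return k / k.sum()

class SmoothedConv(nn.Module):
    """Conv block whose output is blurred with an externally annealed sigma."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.sigma = 1.0                                 # annealed during training

    def forward(self, x):
        x = torch.relu(self.conv(x))
        if self.sigma > 0.05:                            # skip once the blur is negligible
            k = gaussian_kernel(self.sigma).repeat(x.shape[1], 1, 1, 1).to(x.device)
            x = F.conv2d(x, k, padding=2, groups=x.shape[1])   # per-channel blur
        return x

layer = SmoothedConv(3, 16)
for epoch in range(10):
    layer.sigma = 1.0 * (1 - epoch / 10)                 # linear annealing toward zero
    out = layer(torch.randn(2, 3, 32, 32))
```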
arXiv Detail & Related papers (2020-03-03T07:27:44Z)