Semi-supervised deep learning based on label propagation in a 2D embedded space
- URL: http://arxiv.org/abs/2008.00558v2
- Date: Fri, 15 Jan 2021 14:30:27 GMT
- Title: Semi-supervised deep learning based on label propagation in a 2D embedded space
- Authors: Barbara Caroline Benato and Jancarlo Ferreira Gomes and Alexandru Cristian Telea and Alexandre Xavier Falcão
- Abstract summary: Proposed solutions propagate labels from a small set of supervised images to a large set of unsupervised ones to train a deep neural network model.
We present a loop in which a deep neural network (VGG-16) is retrained at each iteration on a set containing progressively more correctly labeled samples.
As the labeled set improves across iterations, so do the features learned by the network.
- Score: 117.9296191012968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While convolutional neural networks need large labeled image sets for
training, expert human annotation of such datasets is very laborious. Proposed
solutions propagate labels from a small set of supervised images to a large set
of unsupervised ones, obtaining enough truly-and-artificially labeled samples to
train a deep neural network model. Yet, such solutions still need many supervised
images for validation. We present a loop in which a deep neural network (VGG-16)
is retrained at each iteration on a set with progressively more correctly labeled
samples. This set is created by using t-SNE to project the features of the
network's last max-pooling layer into a 2D embedded space, in which labels are
propagated using the Optimum-Path Forest semi-supervised classifier. As the
labeled set improves across iterations, so do the features learned by the
network. We show that this can significantly improve classification results on
test data (using only 1% to 5% of supervised samples) on three challenging
private datasets and two public ones.
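To make the training loop concrete, here is a minimal Python sketch of one iteration, under stated assumptions: the semi-supervised Optimum-Path Forest classifier (OPFSemi) has no scikit-learn implementation, so LabelSpreading stands in for the propagation step, and the VGG-16 feature-extraction and fine-tuning helpers are hypothetical placeholders, not the authors' code.

```python
# Minimal sketch of one loop iteration, assuming:
#   features : (n_samples, d) activations of VGG-16's last max-pooling layer
#   y        : integer labels, with -1 marking unsupervised samples
# NOTE: the paper propagates labels with OPFSemi; LabelSpreading is a stand-in.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.semi_supervised import LabelSpreading

def propagate_in_2d(features: np.ndarray, y: np.ndarray, seed: int = 0) -> np.ndarray:
    """Project deep features to 2D with t-SNE, then spread the few known
    labels to the unsupervised points in that embedded space."""
    emb2d = TSNE(n_components=2, random_state=seed).fit_transform(features)
    prop = LabelSpreading(kernel="knn", n_neighbors=7)   # stand-in for OPFSemi
    prop.fit(emb2d, y)                                   # -1 entries get filled in
    return prop.transduction_                            # labels for every sample

# Outer loop (hypothetical helpers for the network side):
# for it in range(n_iterations):
#     features = extract_vgg16_features(images)   # last max-pooling activations
#     y_art = propagate_in_2d(features, y)         # truly + artificially labeled
#     fine_tune_vgg16(images, y_art)               # retrain on the improved set
```

The distinctive design choice is that propagation happens in the 2D embedded space rather than in the original high-dimensional feature space.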
Related papers
- Bayesian Self-Training for Semi-Supervised 3D Segmentation [59.544558398992386]
3D segmentation is a core problem in computer vision.
Yet, densely labeling 3D point clouds for fully-supervised training remains too labor-intensive and expensive.
Semi-supervised training provides a more practical alternative, where only a small set of labeled data is given, accompanied by a larger unlabeled set.
arXiv Detail & Related papers (2024-09-12T14:54:31Z)
- Towards Label-free Scene Understanding by Vision Foundation Models [87.13117617056004]
We investigate the potential of vision foundation models in enabling networks to comprehend 2D and 3D worlds without labelled data.
We propose a novel Cross-modality Noisy Supervision (CNS) method that leverages the strengths of CLIP and SAM to supervise 2D and 3D networks simultaneously.
Our 2D and 3D networks achieve label-free semantic segmentation with 28.4% and 33.5% mIoU on ScanNet, improvements of 4.7% and 7.9%, respectively.
arXiv Detail & Related papers (2023-06-06T17:57:49Z)
- W2N: Switching From Weak Supervision to Noisy Supervision for Object Detection [64.10643170523414]
We propose a novel WSOD framework with a new paradigm that switches from weak supervision to noisy supervision (W2N).
In the localization adaptation module, we propose a regularization loss to reduce the proportion of discriminative parts in original pseudo ground-truths.
Our W2N outperforms all existing pure WSOD methods and transfer learning methods.
arXiv Detail & Related papers (2022-07-25T12:13:48Z)
- Iterative Pseudo-Labeling with Deep Feature Annotation and Confidence-Based Sampling [127.46527972920383]
Training deep neural networks is challenging when large annotated datasets are unavailable.
We improve a recent iterative pseudo-labeling technique, Deep Feature Annotation (DeepFA), by selecting the most confident unsupervised samples to iteratively train a deep neural network.
We first ascertain the best configuration for the baseline, a self-trained deep neural network, and then evaluate our confidence-based DeepFA for different confidence thresholds.
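The confidence-based selection step can be illustrated with a short, hedged sketch: assuming per-class probabilities from the label-propagation step, only pseudo-labels whose top-class probability clears a threshold are kept for retraining. The threshold value and probability source here are illustrative assumptions, not the paper's exact settings.

```python
# Sketch of confidence-based sampling, assuming `proba` holds per-class
# label-propagation probabilities for the unsupervised samples.
import numpy as np

def select_confident(proba: np.ndarray, threshold: float = 0.9):
    """Keep only pseudo-labeled samples whose top-class probability
    clears the confidence threshold."""
    confidence = proba.max(axis=1)          # certainty of the propagated label
    keep = confidence >= threshold          # mask of trustworthy pseudo-labels
    return np.flatnonzero(keep), proba.argmax(axis=1)[keep]

# Example: pseudo-labels for 4 samples over 3 classes.
proba = np.array([[0.95, 0.03, 0.02],
                  [0.40, 0.35, 0.25],
                  [0.10, 0.88, 0.02],
                  [0.55, 0.30, 0.15]])
idx, labels = select_confident(proba)   # -> idx [0], label [0] at threshold 0.9
```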
arXiv Detail & Related papers (2021-09-06T20:02:13Z)
- Semi-weakly Supervised Contrastive Representation Learning for Retinal Fundus Images [0.2538209532048867]
We propose a semi-weakly supervised contrastive learning framework for representation learning using semi-weakly annotated images.
We empirically validate the transfer learning performance of SWCL on seven public retinal fundus datasets.
arXiv Detail & Related papers (2021-08-04T15:50:09Z)
- Anomaly Detection in Image Datasets Using Convolutional Neural Networks, Center Loss, and Mahalanobis Distance [0.0]
User activities generate a significant number of poor-quality or irrelevant images and data vectors.
For neural networks, anomalies are usually defined as out-of-distribution samples.
This work proposes methods for supervised and semi-supervised detection of out-of-distribution samples in image datasets.
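As a rough illustration of the Mahalanobis-distance component, the sketch below scores a feature vector by its squared Mahalanobis distance to the nearest class center, using a shared covariance estimate; this follows the common recipe for such detectors and is not necessarily this paper's exact variant.

```python
# Sketch of Mahalanobis-distance OOD scoring in a feature space, assuming
# per-class training features are available. Shared-covariance variant.
import numpy as np

def mahalanobis_ood_score(x: np.ndarray, class_feats: list[np.ndarray]) -> float:
    """Score a feature vector by its squared Mahalanobis distance to the
    nearest class center; large scores suggest an out-of-distribution sample."""
    feats = np.vstack(class_feats)
    cov_inv = np.linalg.pinv(np.cov(feats, rowvar=False))  # shared covariance
    dists = []
    for cf in class_feats:
        mu = cf.mean(axis=0)                # class center in feature space
        d = x - mu
        dists.append(float(d @ cov_inv @ d))  # squared Mahalanobis distance
    return min(dists)  # threshold this score to flag anomalies
```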
arXiv Detail & Related papers (2021-04-13T13:44:03Z)
- Learning Neural Network Subspaces [74.44457651546728]
Recent observations have advanced our understanding of the neural network optimization landscape.
With a similar computational cost as training one model, we learn lines, curves, and simplexes of high-accuracy neural networks.
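A minimal sketch of the subspace idea, under assumptions: given two endpoint parameter sets, any point on the line between them is itself a full set of network weights; the paper trains the endpoints jointly so that every such point is accurate, a step omitted here.

```python
# Sketch of evaluating a point on a learned line in weight space, assuming
# two endpoint parameter dictionaries with matching shapes.
import numpy as np

def point_on_line(w1: dict[str, np.ndarray], w2: dict[str, np.ndarray], t: float):
    """Interpolate every parameter tensor: w(t) = (1 - t) * w1 + t * w2.
    During training, t is sampled per step and the loss at w(t) updates
    both endpoints, so the whole line reaches high accuracy."""
    return {k: (1.0 - t) * w1[k] + t * w2[k] for k in w1}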
```
arXiv Detail & Related papers (2021-02-20T23:26:58Z)
- A generic ensemble based deep convolutional neural network for semi-supervised medical image segmentation [7.141405427125369]
We propose a generic semi-supervised learning framework for image segmentation based on a deep convolutional neural network (DCNN).
Our method significantly improves on fully supervised learning by incorporating unlabeled data.
arXiv Detail & Related papers (2020-04-16T23:41:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.