Efficient Deep Representation Learning by Adaptive Latent Space Sampling
- URL: http://arxiv.org/abs/2004.02757v2
- Date: Sun, 12 Apr 2020 18:25:55 GMT
- Title: Efficient Deep Representation Learning by Adaptive Latent Space Sampling
- Authors: Yuanhan Mo and Shuo Wang and Chengliang Dai and Rui Zhou and Zhongzhao
Teng and Wenjia Bai and Yike Guo
- Abstract summary: Supervised deep learning requires a large amount of training samples with annotations, which are expensive and time-consuming to obtain.
We propose a novel training framework which adaptively selects informative samples that are fed to the training process.
- Score: 16.320898678521843
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised deep learning requires a large amount of training samples with
annotations (e.g. class labels for classification tasks, pixel- or voxel-wise
label map for segmentation tasks), which are expensive and time-consuming to
obtain. During the training of a deep neural network, the annotated samples are
fed into the network in mini-batches, where they are often treated as equally
important. However, some of the samples become less informative as training
progresses, since the magnitude of their gradients starts to vanish. Meanwhile,
other samples of higher utility or hardness are in greater demand for the
training process to proceed and deserve more exploitation. To address the
challenges of expensive annotations and loss of
sample informativeness, here we propose a novel training framework which
adaptively selects informative samples that are fed to the training process.
The adaptive selection or sampling is performed based on a hardness-aware
strategy in the latent space constructed by a generative model. To evaluate the
proposed training framework, we perform experiments on three different
datasets: MNIST and CIFAR-10 for an image classification task, and a medical
image dataset, IVUS, for a biophysical simulation task. On all three datasets,
the proposed framework outperforms a random sampling method, which demonstrates
its effectiveness.
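As a rough illustration of the idea described in the abstract (a sketch, not the authors' actual implementation), hardness-aware sampling can be approximated by drawing mini-batches with probability proportional to a softmax over per-sample hardness scores, here stood in for by current loss values; the losses, batch size, and temperature below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def hardness_weights(losses, temperature=1.0):
    """Turn per-sample hardness scores (here, losses) into sampling
    probabilities via a temperature-scaled softmax."""
    z = np.asarray(losses, dtype=float) / temperature
    z = z - z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def sample_batch(num_samples, losses, batch_size, temperature=1.0):
    """Draw a mini-batch of unique indices, biased toward 'hard'
    (high-loss) samples instead of sampling uniformly."""
    probs = hardness_weights(losses, temperature)
    return rng.choice(num_samples, size=batch_size, replace=False, p=probs)

# Toy setup: 100 samples whose losses grow with their index, so
# later samples should be selected more often than earlier ones.
losses = np.linspace(0.1, 2.0, 100)
batch = sample_batch(100, losses, batch_size=16)
```

Compared with uniform random sampling, this biases each mini-batch toward samples the model currently finds hard, which is the behavior the paper's framework aims for (the paper performs the selection in a generative model's latent space rather than directly on loss values).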
Related papers
- Integrated Image-Text Based on Semi-supervised Learning for Small Sample Instance Segmentation [1.3157419797035321]
The article proposes a novel small sample instance segmentation solution from the perspective of maximizing the utilization of existing information.
First, it helps the model fully utilize unlabeled data by learning to generate pseudo labels, increasing the number of available samples.
Second, by integrating the features of text and image, more accurate classification results can be obtained.
arXiv Detail & Related papers (2024-10-21T14:44:08Z)
- Detection of Under-represented Samples Using Dynamic Batch Training for Brain Tumor Segmentation from MR Images [0.8437187555622164]
Manual segmentation of brain tumors in magnetic resonance (MR) images is difficult, time-consuming, and prone to human error.
These challenges can be addressed by developing automatic brain tumor segmentation methods for MR images.
Various deep-learning models based on the U-Net have been proposed for the task.
These deep-learning models are trained on a dataset of tumor images and then used to predict segmentation masks.
arXiv Detail & Related papers (2024-08-21T21:51:47Z) - Dataset Quantization with Active Learning based Adaptive Sampling [11.157462442942775]
We show that maintaining performance is feasible even with uneven sample distributions.
We propose a novel active learning based adaptive sampling strategy to optimize the sample selection.
Our approach outperforms the state-of-the-art dataset compression methods.
arXiv Detail & Related papers (2024-07-09T23:09:18Z) - Robust Noisy Label Learning via Two-Stream Sample Distillation [48.73316242851264]
Noisy label learning aims to learn robust networks under the supervision of noisy labels.
We design a simple yet effective sample selection framework, termed Two-Stream Sample Distillation (TSSD).
This framework can extract more high-quality samples with clean labels to improve the robustness of network training.
arXiv Detail & Related papers (2024-04-16T12:18:08Z) - Data Pruning via Moving-one-Sample-out [61.45441981346064]
We propose a novel data-pruning approach called moving-one-sample-out (MoSo)
MoSo aims to identify and remove the least informative samples from the training set.
Experimental results demonstrate that MoSo effectively mitigates severe performance degradation at high pruning ratios.
arXiv Detail & Related papers (2023-10-23T08:00:03Z) - Importance Sampling for Stochastic Gradient Descent in Deep Neural
Networks [0.0]
Importance sampling for training deep neural networks has been widely studied.
This paper reviews the challenges inherent to this research area.
We propose a metric allowing the assessment of the quality of a given sampling scheme.
arXiv Detail & Related papers (2023-03-29T08:35:11Z) - Adaptive Siamese Tracking with a Compact Latent Network [219.38172719948048]
We present an intuitive view that simplifies Siamese-based trackers by converting the tracking task into a classification task.
Under this view, we perform an in-depth analysis of them through visual simulations and real tracking examples.
We apply it to adjust three classical Siamese-based trackers, namely SiamRPN++, SiamFC, and SiamBAN.
arXiv Detail & Related papers (2023-02-02T08:06:02Z) - Towards Automated Imbalanced Learning with Deep Hierarchical
Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z) - Active Learning for Deep Visual Tracking [51.5063680734122]
Convolutional neural networks (CNNs) have been successfully applied to the single target tracking task in recent years.
In this paper, we propose an active learning method for deep visual tracking, which selects and annotates the unlabeled samples to train the deep CNNs model.
Under the guidance of active learning, the tracker based on the trained deep CNNs model can achieve competitive tracking performance while reducing the labeling cost.
arXiv Detail & Related papers (2021-10-17T11:47:56Z) - Deep Semi-supervised Knowledge Distillation for Overlapping Cervical
Cell Instance Segmentation [54.49894381464853]
We propose to leverage both labeled and unlabeled data for instance segmentation with improved accuracy by knowledge distillation.
We propose a novel Mask-guided Mean Teacher framework with Perturbation-sensitive Sample Mining.
Experiments show that the proposed method improves the performance significantly compared with the supervised method learned from labeled data only.
arXiv Detail & Related papers (2020-07-21T13:27:09Z)
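The Mean Teacher scheme referenced in the last entry above maintains a teacher model whose weights are an exponential moving average (EMA) of the student's weights. A minimal sketch of that update follows; the array shapes, smoothing factor, and step count are illustrative assumptions, not details from the paper:

```python
import numpy as np

def ema_update(teacher, student, alpha=0.99):
    """In-place EMA update of teacher parameters:
    teacher <- alpha * teacher + (1 - alpha) * student."""
    for t, s in zip(teacher, student):
        t *= alpha
        t += (1.0 - alpha) * s

# Toy parameters: teacher starts at zero, student is fixed at one,
# so the teacher should drift toward the student over updates.
teacher = [np.zeros(3)]
student = [np.ones(3)]
for _ in range(10):
    ema_update(teacher, student, alpha=0.9)
```

After k updates from zero toward a fixed student value of 1 with smoothing factor alpha, the teacher weight equals 1 - alpha**k, which is why the EMA teacher changes smoothly and lags the student.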
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.