Active Deep Densely Connected Convolutional Network for Hyperspectral
Image Classification
- URL: http://arxiv.org/abs/2009.00320v2
- Date: Tue, 24 Nov 2020 11:46:16 GMT
- Title: Active Deep Densely Connected Convolutional Network for Hyperspectral
Image Classification
- Authors: Bing Liu, Anzhu Yu, Pengqiang Zhang, Lei Ding, Wenyue Guo, Kuiliang
Gao, Xibing Zuo
- Abstract summary: It is still very challenging to use only a few labeled samples to train deep learning models to reach a high classification accuracy.
This paper therefore proposes an active deep-learning framework, trained in an end-to-end manner, to minimize hyperspectral image classification costs.
- Score: 6.850575514129793
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning based methods have seen a massive rise in popularity for
hyperspectral image classification over the past few years. However, the
success of deep learning is attributed greatly to numerous labeled samples. It
is still very challenging to use only a few labeled samples to train deep
learning models to reach a high classification accuracy. This paper therefore
proposes an active deep-learning framework, trained in an end-to-end manner, to
minimize hyperspectral image classification costs. First, a deep densely
connected convolutional network is considered for
hyperspectral image classification. Different from the traditional active
learning methods, an additional network is added to the designed deep densely
connected convolutional network to predict the loss of input samples. The
additional network can then suggest the unlabeled samples for which the deep
densely connected convolutional network is most likely to produce a wrong
label. Note that the additional network uses the intermediate features of the
deep densely connected convolutional network as input. Therefore, the proposed
method is an end-to-end framework. Subsequently, a few of the selected samples
are labeled manually and added to the training set. The deep densely
connected convolutional network is then retrained on the enlarged training
set. Finally, the steps above are repeated to train the whole framework
iteratively. Extensive experiments illustrate that the proposed method can
reach a high classification accuracy after selecting just a few samples.
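The abstract gives no code, but the iterative procedure it describes (train, rank the unlabeled pool by predicted loss, query labels manually, retrain) can be sketched in outline. The `model_factory`, `predicted_loss`, and `oracle` callables below are toy stand-ins, not the paper's actual networks:

```python
def active_learning_loop(model_factory, predicted_loss, labeled,
                         unlabeled, oracle, rounds=3, batch=2):
    """Each round: train on the labeled set, rank the unlabeled pool by
    the loss each sample is predicted to incur, have the top-`batch`
    samples labeled manually, then retrain on the enlarged set."""
    model = model_factory(labeled)
    for _ in range(rounds):
        if not unlabeled:
            break
        # Samples the loss-prediction head scores highest are the ones
        # the classifier is most likely to get wrong.
        unlabeled.sort(key=lambda s: predicted_loss(model, s), reverse=True)
        queried, unlabeled = unlabeled[:batch], unlabeled[batch:]
        labeled = labeled + [(s, oracle(s)) for s in queried]  # manual step
        model = model_factory(labeled)  # end-to-end retraining step
    return model, labeled
```

In the paper, `predicted_loss` corresponds to the additional network that reads the classifier's intermediate features; here it is any callable scoring how badly the current model is expected to do on a sample.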
Related papers
- Pushing the Envelope for Depth-Based Semi-Supervised 3D Hand Pose
Estimation with Consistency Training [2.6954666679827137]
We propose a semi-supervised method to significantly reduce the dependence on labeled training data.
The proposed method consists of two identical networks trained jointly: a teacher network and a student network.
Experiments demonstrate that the proposed method outperforms the state-of-the-art semi-supervised methods by large margins.
arXiv Detail & Related papers (2023-03-27T12:32:49Z) - Deep Residual Compensation Convolutional Network without Backpropagation [0.0]
We introduce a residual compensation convolutional network, which is the first PCANet-like network trained with hundreds of layers.
To correct the classification errors, we train each layer with new labels derived from the residual information of all its preceding layers.
Our experiments show that our deep network outperforms all existing PCANet-like networks and is competitive with several traditional gradient-based models.
arXiv Detail & Related papers (2023-01-27T11:45:09Z) - Layer Ensembles [95.42181254494287]
We introduce a method for uncertainty estimation that considers a set of independent categorical distributions for each layer of the network.
We show that the method can be further improved by ranking samples, resulting in models that require less memory and time to run.
arXiv Detail & Related papers (2022-10-10T17:52:47Z) - Neural Maximum A Posteriori Estimation on Unpaired Data for Motion
Deblurring [87.97330195531029]
We propose a Neural Maximum A Posteriori (NeurMAP) estimation framework for training neural networks to recover blind motion information and sharp content from unpaired data.
The proposed NeurMAP can be applied to existing deblurring neural networks, and is the first framework that enables training image deblurring networks on unpaired datasets.
arXiv Detail & Related papers (2022-04-26T08:09:47Z) - All at Once Network Quantization via Collaborative Knowledge Transfer [56.95849086170461]
We develop a novel collaborative knowledge transfer approach for efficiently training the all-at-once quantization network.
Specifically, we propose an adaptive selection strategy to choose a high-precision "teacher" for transferring knowledge to the low-precision student.
To effectively transfer knowledge, we develop a dynamic block swapping method by randomly replacing the blocks in the lower-precision student network with the corresponding blocks in the higher-precision teacher network.
arXiv Detail & Related papers (2021-03-02T03:09:03Z) - Semi-supervised deep learning based on label propagation in a 2D
embedded space [117.9296191012968]
Proposed solutions propagate labels from a small set of supervised images to a large set of unsupervised ones to train a deep neural network model.
We present a loop in which a deep neural network (VGG-16) is trained, across iterations, on a set containing progressively more correctly labeled samples.
As the labeled set improves, so do the features learned by the neural network.
arXiv Detail & Related papers (2020-08-02T20:08:54Z) - ESPN: Extremely Sparse Pruned Networks [50.436905934791035]
We show that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks.
Our algorithm represents a hybrid approach between single shot network pruning methods and Lottery-Ticket type approaches.
arXiv Detail & Related papers (2020-06-28T23:09:27Z) - Digit Image Recognition Using an Ensemble of One-Versus-All Deep Network
Classifiers [2.385916960125935]
We implement a novel technique for digit image recognition and evaluate it on that task.
Every network in the ensemble is trained with a one-versus-all (OVA) technique using Stochastic Gradient Descent with Momentum (SGDMA).
Our proposed technique outperforms the baseline on digit image recognition for all datasets.
arXiv Detail & Related papers (2020-06-28T15:37:39Z) - ReMarNet: Conjoint Relation and Margin Learning for Small-Sample Image
Classification [49.87503122462432]
We introduce a novel neural network termed Relation-and-Margin learning Network (ReMarNet).
Our method assembles two networks of different backbones so as to learn the features that can perform excellently in both of the aforementioned two classification mechanisms.
Experiments on four image datasets demonstrate that our approach is effective in learning discriminative features from a small set of labeled samples.
arXiv Detail & Related papers (2020-06-27T13:50:20Z)
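Among the related papers above, the label-propagation entry describes its mechanism most concretely: labels flow from a few supervised points to many unsupervised ones in a 2-D embedded space. That abstract provides no code; a minimal toy sketch of such a propagation step, using 1-nearest-neighbor assignment (the distance-as-confidence measure and the choice of k=1 are illustrative assumptions, not details from the paper), might look like:

```python
import math

def propagate_labels(labeled, unlabeled):
    """Assign each unlabeled 2-D point the label of its nearest labeled
    point; the distance doubles as an inverse confidence score, so a
    caller can keep only the most confident assignments for retraining."""
    assigned = []
    for x, y in unlabeled:
        (nx, ny), lbl = min(
            labeled, key=lambda p: (p[0][0] - x) ** 2 + (p[0][1] - y) ** 2)
        dist = math.hypot(nx - x, ny - y)  # smaller distance = more confident
        assigned.append(((x, y), lbl, dist))
    return assigned
```

In the loop those papers describe, the most confident assignments would be added to the training set and the network retrained, mirroring the iterative scheme of the main paper.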
This list is automatically generated from the titles and abstracts of the papers in this site.