DDIPNet and DDIPNet+: Discriminant Deep Image Prior Networks for Remote
Sensing Image Classification
- URL: http://arxiv.org/abs/2212.10411v1
- Date: Tue, 20 Dec 2022 16:39:04 GMT
- Title: DDIPNet and DDIPNet+: Discriminant Deep Image Prior Networks for Remote
Sensing Image Classification
- Authors: Daniel F. S. Santos, Rafael G. Pires, Leandro A. Passos, and João P. Papa
- Abstract summary: Research on remote sensing image classification significantly impacts essential human routine tasks such as urban planning and agriculture.
The current paper proposes two novel deep learning-based architectures for image classification purposes, i.e., the Discriminant Deep Image Prior Network and the Discriminant Deep Image Prior Network+.
Experiments conducted over three well-known public remote sensing image datasets achieved state-of-the-art results, evidencing the effectiveness of using deep image priors for remote sensing image classification.
- Score: 0.39146761527401425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research on remote sensing image classification significantly impacts
essential human routine tasks such as urban planning and agriculture. Nowadays,
the rapid advance in technology and the availability of many high-quality
remote sensing images create a demand for reliable automation methods. The
current paper proposes two novel deep learning-based architectures for image
classification purposes, i.e., the Discriminant Deep Image Prior Network and
the Discriminant Deep Image Prior Network+, which combine Deep Image Prior and
Triplet Networks learning strategies. Experiments conducted over three
well-known public remote sensing image datasets achieved state-of-the-art
results, evidencing the effectiveness of using deep image priors for remote
sensing image classification.
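The abstract gives no implementation details; purely as an illustration of the two ingredients it names, the following PyTorch snippet is a minimal, hypothetical sketch of training a randomly initialized convolutional encoder (standing in for a Deep-Image-Prior-style backbone) with a triplet margin loss. The encoder layout, margin, and optimizer settings are assumptions made for illustration, not the DDIPNet/DDIPNet+ configuration.

```python
# Hypothetical sketch: a small convolutional encoder (a stand-in for a
# Deep-Image-Prior-style backbone) trained with a triplet margin loss.
# Architecture, margin, and optimizer values are illustrative assumptions only.
import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.features(x).flatten(1)
        # L2-normalize so the triplet margin acts on a comparable scale.
        return nn.functional.normalize(self.fc(z), dim=1)

encoder = ConvEncoder()
triplet_loss = nn.TripletMarginLoss(margin=0.2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# One training step on a dummy (anchor, positive, negative) batch of 64x64 RGB patches.
anchor, positive, negative = (torch.randn(8, 3, 64, 64) for _ in range(3))
loss = triplet_loss(encoder(anchor), encoder(positive), encoder(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In a triplet setup of this kind, the anchor and positive patches come from the same class and the negative from a different class, so the encoder is pushed to embed same-class remote sensing images closer together than cross-class ones.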
Related papers
- Generic Knowledge Boosted Pre-training For Remote Sensing Images [46.071496675604884]
Generic Knowledge Boosted Remote Sensing Pre-training (GeRSP) is a novel remote sensing pre-training framework.
GeRSP learns robust representations from remote sensing and natural images for remote sensing understanding tasks.
We show that GeRSP can effectively learn robust representations in a unified manner, improving the performance of remote sensing downstream tasks.
arXiv Detail & Related papers (2024-01-09T15:36:07Z)
- Paint and Distill: Boosting 3D Object Detection with Semantic Passing Network [70.53093934205057]
3D object detection task from lidar or camera sensors is essential for autonomous driving.
We propose a novel semantic passing framework, named SPNet, to boost the performance of existing lidar-based 3D detection models.
arXiv Detail & Related papers (2022-07-12T12:35:34Z)
- An Empirical Study of Remote Sensing Pretraining [117.90699699469639]
We conduct an empirical study of remote sensing pretraining (RSP) on aerial images.
RSP can help deliver distinctive performances in scene recognition tasks.
RSP mitigates the data discrepancies of traditional ImageNet pretraining on RS images, but it may still suffer from task discrepancies.
arXiv Detail & Related papers (2022-04-06T13:38:11Z)
- Learning Efficient Representations for Enhanced Object Detection on Large-scene SAR Images [16.602738933183865]
It is a challenging problem to detect and recognize targets on complex large-scene Synthetic Aperture Radar (SAR) images.
Recently developed deep learning algorithms can automatically learn the intrinsic features of SAR images.
We propose an efficient and robust deep learning based target detection method.
arXiv Detail & Related papers (2022-01-22T03:25:24Z)
- Geographical Knowledge-driven Representation Learning for Remote Sensing Images [18.79154074365997]
We propose a Geographical Knowledge-driven Representation learning method for remote sensing images (GeoKR).
The global land cover products and geographical location associated with each remote sensing image are regarded as geographical knowledge.
A large scale pre-training dataset Levir-KR is proposed to support network pre-training.
arXiv Detail & Related papers (2021-07-12T09:23:15Z)
- Deep Artifact-Free Residual Network for Single Image Super-Resolution [0.2399911126932526]
We propose Deep Artifact-Free Residual (DAFR) network which uses the merits of both residual learning and usage of ground-truth image as target.
Our framework uses a deep model to extract the high-frequency information which is necessary for high-quality image reconstruction.
Our experimental results show that the proposed method achieves better quantitative and qualitative image quality compared to the existing methods.
arXiv Detail & Related papers (2020-09-25T20:53:55Z)
- NAS-DIP: Learning Deep Image Prior with Neural Architecture Search [65.79109790446257]
Recent work has shown that the structure of deep convolutional neural networks can be used as a structured image prior.
We propose to search for neural architectures that capture stronger image priors.
We search for an improved network by leveraging an existing neural architecture search algorithm.
arXiv Detail & Related papers (2020-08-26T17:59:36Z)
- Remote Sensing Image Scene Classification Meets Deep Learning: Challenges, Methods, Benchmarks, and Opportunities [81.29441139530844]
This paper provides a systematic survey of deep learning methods for remote sensing image scene classification by covering more than 160 papers.
We discuss the main challenges of remote sensing image scene classification and survey the methods proposed to address them.
We introduce the benchmarks used for remote sensing image scene classification and summarize the performance of more than two dozen representative algorithms.
arXiv Detail & Related papers (2020-05-03T14:18:00Z)
- BP-DIP: A Backprojection based Deep Image Prior [49.375539602228415]
We combine two image restoration approaches: (i) Deep Image Prior (DIP), which trains a convolutional neural network (CNN) from scratch at test time using the degraded image; and (ii) a backprojection (BP) fidelity term, an alternative to the standard least-squares loss used in previous DIP works.
We demonstrate the performance of the proposed method, termed BP-DIP, on the deblurring task and show its advantages over the plain DIP, with both higher PSNR values and better inference run-time (a rough sketch of the BP fidelity idea appears after this list).
arXiv Detail & Related papers (2020-03-11T17:09:12Z)
- Learning Depth With Very Sparse Supervision [57.911425589947314]
This paper explores the idea that perception gets coupled to 3D properties of the world via interaction with the environment.
We train a specialized global-local network architecture with what would be available to a robot interacting with the environment.
Experiments on several datasets show that, when ground truth is available even for just one of the image pixels, the proposed network can learn monocular dense depth estimation up to 22.5% more accurately than state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-02T10:44:13Z)
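For the BP-DIP entry above, the following hypothetical PyTorch sketch illustrates the general idea of a DIP-style test-time optimization loop in which the data term measures the backprojected residual H^+(Hx - y) rather than the plain least-squares residual Hx - y. The network `net`, the degradation operator `degrade` (H), and its approximate pseudo-inverse `degrade_pinv` (H^+) are placeholders introduced for illustration, not the paper's actual operators or training schedule.

```python
# Hypothetical sketch of a DIP-style test-time loop with a backprojection (BP)
# fidelity term ||H^+(H x - y)||^2 replacing the usual least-squares ||H x - y||^2.
# `net`, `degrade` (H), and `degrade_pinv` (an approximate H^+) are placeholders.
import torch

def dip_bp_restore(y, net, degrade, degrade_pinv, steps=2000, lr=1e-3):
    """Fit an untrained CNN `net` to a single degraded image `y` (shape 1xCxHxW),
    measuring the data mismatch after backprojection with `degrade_pinv`."""
    # Fixed random input code, as in Deep Image Prior; 32 channels is an arbitrary choice.
    z = torch.randn(1, 32, y.shape[-2], y.shape[-1])
    optimizer = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(steps):
        x_hat = net(z)                               # current clean-image estimate
        residual = degrade(x_hat) - y                # H x - y
        loss = degrade_pinv(residual).pow(2).mean()  # BP fidelity: ||H^+(H x - y)||^2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return net(z).detach()
```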