Multi-Spectral Image Classification with Ultra-Lean Complex-Valued Models
- URL: http://arxiv.org/abs/2211.11797v1
- Date: Mon, 21 Nov 2022 19:01:53 GMT
- Title: Multi-Spectral Image Classification with Ultra-Lean Complex-Valued Models
- Authors: Utkarsh Singhal and Stella X. Yu and Zackery Steck and Scott Kangas and Aaron A. Reite
- Abstract summary: Multi-spectral imagery is invaluable for remote sensing due to different spectral signatures exhibited by materials.
We apply complex-valued co-domain symmetric models to classify real-valued MSI images.
Our work is the first to demonstrate the value of complex-valued deep learning on real-valued MSI data.
- Score: 28.798100220715686
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-spectral imagery is invaluable for remote sensing due to different
spectral signatures exhibited by materials that often appear identical in
greyscale and RGB imagery. Paired with modern deep learning methods, this
modality has great potential utility in a variety of remote sensing
applications, such as humanitarian assistance and disaster recovery efforts.
State-of-the-art deep learning methods have greatly benefited from large-scale
annotations like in ImageNet, but existing MSI image datasets lack annotations
at a similar scale. As an alternative to transfer learning on such data with
few annotations, we apply complex-valued co-domain symmetric models to classify
real-valued MSI images. Our experiments on 8-band xView data show that our
ultra-lean model trained on xView from scratch without data augmentations can
outperform ResNet with data augmentation and modified transfer learning on
xView. Our work is the first to demonstrate the value of complex-valued deep
learning on real-valued MSI data.
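The paper's specific co-domain symmetric architecture is not reproduced here, but the core idea can be illustrated with a minimal numpy sketch: lift real-valued spectral bands into the complex domain, apply a complex-valued linear layer, and read out magnitudes. The band count (8, matching xView), class count, and the fixed phase embedding are hypothetical stand-ins; the paper learns these components. The key property shown is that a magnitude readout is invariant to a global phase rotation of the input.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: one 8-band multi-spectral pixel vector, 4 classes.
n_bands, n_classes = 8, 4

# Lift the real-valued bands into the complex domain with a fixed phase
# embedding (a stand-in; the actual model learns this mapping).
x_real = rng.standard_normal(n_bands)
z = x_real.astype(np.complex128) * np.exp(1j * np.linspace(0, np.pi, n_bands))

# A complex-valued linear layer.
W = rng.standard_normal((n_classes, n_bands)) \
    + 1j * rng.standard_normal((n_classes, n_bands))

def magnitude_readout(z, W):
    """Complex linear map followed by a magnitude nonlinearity."""
    return np.abs(W @ z)

scores = magnitude_readout(z, W)

# Co-domain symmetry in miniature: a global phase rotation of the input
# leaves the magnitude readout unchanged, since |e^{i*theta}| = 1.
theta = 1.234
scores_rotated = magnitude_readout(np.exp(1j * theta) * z, W)
print(np.allclose(scores, scores_rotated))  # True
```

Because |W(e^{iθ}z)| = |e^{iθ}||Wz| = |Wz|, the classifier does not need augmented training data to learn this invariance, which is one intuition for why such models can stay lean.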
Related papers
- Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation [53.95204595640208]
Data-Free Knowledge Distillation (DFKD) is an advanced technique that enables knowledge transfer from a teacher model to a student model without relying on original training data.
Previous approaches have generated synthetic images at high resolutions without leveraging information from real images.
MUSE generates images at lower resolutions while using Class Activation Maps (CAMs) to ensure that the generated images retain critical, class-specific features.
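MUSE's full generation pipeline is not shown in the summary, but the Class Activation Map (CAM) ingredient it relies on is standard: weight the final convolutional feature maps by a class's classifier weights. The shapes and random values below are hypothetical placeholders for a real network's activations.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical final conv feature maps (K channels of HxW spatial maps)
# and the weights of a global-average-pooling classifier (C classes).
K, H, W_, C = 16, 7, 7, 10
features = rng.random((K, H, W_))
fc_weights = rng.standard_normal((C, K))  # one weight vector per class

def class_activation_map(features, fc_weights, cls):
    """CAM for one class: feature maps weighted by that class's weights."""
    cam = np.tensordot(fc_weights[cls], features, axes=1)  # -> (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalize to [0, 1] for use as a spatial mask
    return cam

cam = class_activation_map(features, fc_weights, cls=3)
print(cam.shape)  # (7, 7)
```

The normalized map highlights class-specific regions; MUSE-style methods can use such maps to check that generated images retain those regions even at low resolution.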
arXiv Detail & Related papers (2024-11-26T02:23:31Z)
- Efficient Visual State Space Model for Image Deblurring [83.57239834238035]
Convolutional neural networks (CNNs) and Vision Transformers (ViTs) have achieved excellent performance in image restoration.
We propose a simple yet effective visual state space model (EVSSM) for image deblurring.
arXiv Detail & Related papers (2024-05-23T09:13:36Z)
- Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amount of unlabelled data.
In this paper, we re-visit transformers pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z)
- Cross-Scale MAE: A Tale of Multi-Scale Exploitation in Remote Sensing [5.325585142755542]
We present Cross-Scale MAE, a self-supervised model built upon the Masked Auto-Encoder (MAE).
During pre-training, Cross-Scale MAE employs scale augmentation techniques and enforces cross-scale constraints through both contrastive and generative losses.
Experimental evaluations demonstrate that Cross-Scale MAE exhibits superior performance compared to standard MAE and other state-of-the-art remote sensing MAE methods.
arXiv Detail & Related papers (2024-01-29T03:06:19Z)
- CtxMIM: Context-Enhanced Masked Image Modeling for Remote Sensing Image Understanding [38.53988682814626]
We propose a context-enhanced masked image modeling method (CtxMIM) for remote sensing image understanding.
CtxMIM formulates original image patches as a reconstructive template and employs a Siamese framework to operate on two sets of image patches.
With the simple and elegant design, CtxMIM encourages the pre-training model to learn object-level or pixel-level features on a large-scale dataset.
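CtxMIM's Siamese design is not reproduced here, but the masked-image-modeling loop it builds on can be sketched in a few lines: randomly hide a fraction of image patches and score a reconstruction only on the hidden ones. The patch counts, mask ratio, and the trivial mean "predictor" are hypothetical stand-ins for a real encoder/decoder.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical setup: an image flattened into N patches of dimension D.
n_patches, dim, mask_ratio = 16, 32, 0.75
patches = rng.random((n_patches, dim))

# Randomly mask a fixed fraction of patches, as in MAE-style pre-training.
n_masked = int(mask_ratio * n_patches)
perm = rng.permutation(n_patches)
masked_idx, visible_idx = perm[:n_masked], perm[n_masked:]

# A stand-in "model": the per-dimension mean of the visible patches;
# a real encoder/decoder would predict the masked patches instead.
prediction = np.tile(patches[visible_idx].mean(axis=0), (n_masked, 1))

# The reconstruction loss is computed on the masked patches only.
loss = np.mean((prediction - patches[masked_idx]) ** 2)
print(loss >= 0.0)  # True
```

Restricting the loss to masked patches forces the model to infer hidden content from context, which is the signal such pre-training exploits on large unlabeled datasets.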
arXiv Detail & Related papers (2023-09-28T18:04:43Z)
- Local Manifold Augmentation for Multiview Semantic Consistency [40.28906509638541]
We propose to extract the underlying data variation from datasets and construct a novel augmentation operator, named local manifold augmentation (LMA).
LMA can create an infinite number of data views, preserve semantics, and simulate complicated variations in object pose, viewpoint, lighting condition, background, etc.
arXiv Detail & Related papers (2022-11-05T02:00:13Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
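The architecture above is not reproduced here, but its central operation, fusing contextual information from multiple scales while keeping the full-resolution branch intact, can be illustrated with a minimal numpy sketch. The toy feature map and nearest-neighbour upsampling are hypothetical simplifications of the paper's learned fusion.

```python
import numpy as np

# Hypothetical multi-scale features: one map at full, half, and quarter
# resolution (a real network would compute these with strided convolutions).
full = np.arange(16.0).reshape(4, 4)
half = full[::2, ::2]
quarter = full[::4, ::4]

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling by an integer factor."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

# Fuse: bring coarse (contextual) maps back to full resolution and sum,
# so the full-resolution branch preserves spatial detail throughout.
fused = full + upsample_nearest(half, 2) + upsample_nearest(quarter, 4)
print(fused.shape)  # (4, 4)
```

The point of the design is that context from coarse scales is injected without ever discarding the high-resolution spatial signal.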
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Remote Sensing Image Scene Classification with Self-Supervised Paradigm under Limited Labeled Samples [11.025191332244919]
We introduce a new self-supervised learning (SSL) mechanism to obtain a high-performance pre-training model for RSI scene classification from large unlabeled data.
Experiments on three commonly used RSI scene classification datasets demonstrate that this new learning paradigm outperforms the traditional dominant ImageNet pre-trained model.
The insights distilled from our studies can help to foster the development of SSL in the remote sensing community.
arXiv Detail & Related papers (2020-10-02T09:27:19Z)
- X-ModalNet: A Semi-Supervised Deep Cross-Modal Network for Classification of Remote Sensing Data [69.37597254841052]
We propose a novel cross-modal deep-learning framework called X-ModalNet.
X-ModalNet generalizes well, owing to propagating labels on an updatable graph constructed by high-level features on the top of the network.
We evaluate X-ModalNet on two multi-modal remote sensing datasets (HSI-MSI and HSI-SAR) and achieve a significant improvement in comparison with several state-of-the-art methods.
arXiv Detail & Related papers (2020-06-24T15:29:41Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.