SepHRNet: Generating High-Resolution Crop Maps from Remote Sensing
imagery using HRNet with Separable Convolution
- URL: http://arxiv.org/abs/2307.05700v1
- Date: Tue, 11 Jul 2023 18:07:25 GMT
- Authors: Priyanka Goyal, Sohan Patnaik, Adway Mitra, Manjira Sinha
- Abstract summary: We propose a novel deep learning approach that integrates HRNet with separable convolutional layers to capture spatial patterns and self-attention to capture temporal patterns of the data.
The proposed algorithm achieves a classification accuracy of 97.5% and an IoU of 55.2% in generating crop maps.
- Score: 3.717258819781834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The accurate mapping of crop production is crucial for ensuring food
security, effective resource management, and sustainable agricultural
practices. One way to achieve this is by analyzing high-resolution satellite
imagery. Deep learning has been successful in analyzing images, including
remote sensing imagery. However, capturing intricate crop patterns is
challenging due to their complexity and variability. In this paper, we propose
a novel deep learning approach that integrates HRNet with separable
convolutional layers to capture spatial patterns and self-attention to capture
temporal patterns of the data. The HRNet model acts as a backbone and extracts
high-resolution features from crop images. Spatially separable convolution in
the shallow layers of the HRNet model captures intricate crop patterns more
effectively while reducing the computational cost. The multi-head attention
mechanism captures long-term temporal dependencies from the encoded vector
representation of the images. Finally, a CNN decoder generates a crop map from
the aggregated representation. AdaBoost is applied on top of this to further
improve accuracy. The proposed algorithm achieves a classification accuracy of
97.5% and an IoU of 55.2% in generating crop maps. We evaluate the performance
of our pipeline on the ZueriCrop dataset and demonstrate that our results
outperform state-of-the-art models such as U-Net++, ResNet50, VGG19,
InceptionV3, DenseNet, and EfficientNet. This research showcases the potential
of deep learning for Earth observation systems.
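The paper's code is not reproduced here, but the cost argument for spatially separable convolution can be illustrated with a minimal NumPy sketch. Everything below (the 5x5 kernel size, the naive `conv2d_valid` helper, the random data) is a hypothetical example for illustration, not the authors' implementation: a k x 1 filter followed by a 1 x k filter reproduces a rank-1 k x k convolution while using 2k instead of k^2 weights.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid'-mode 2D cross-correlation of a 2D image with a 2D kernel."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.standard_normal((32, 32))

# A spatially separable kernel is, by definition, the outer product
# of a column filter and a row filter.
col = rng.standard_normal((5, 1))   # 5x1 vertical filter
row = rng.standard_normal((1, 5))   # 1x5 horizontal filter
full_kernel = col @ row             # the equivalent 5x5 kernel

# Applying the two 1-D filters in sequence reproduces the 5x5 convolution...
two_pass = conv2d_valid(conv2d_valid(image, col), row)
one_pass = conv2d_valid(image, full_kernel)
assert np.allclose(two_pass, one_pass)

# ...with far fewer weights (and multiply-adds) per output pixel:
print(full_kernel.size)        # 25 weights for the full 5x5 kernel
print(col.size + row.size)     # 10 weights for the separable pair
```

The trade-off is that only rank-1 kernels are expressible this way, which is why the paper restricts spatially separable convolutions to the shallow layers of the HRNet backbone rather than replacing every convolution.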
Related papers
- SODAWideNet++: Combining Attention and Convolutions for Salient Object Detection [3.2586315449885106]
We propose a novel encoder-decoder-style neural network called SODAWideNet++ designed explicitly for Salient Object Detection.
Inspired by the vision transformer's ability to attain a global receptive field from the initial stages, we introduce the Attention Guided Long Range Feature Extraction (AGLRFE) module.
In contrast to the current paradigm of ImageNet pre-training, we modify 118K annotated images from the COCO semantic segmentation dataset by binarizing the annotations to pre-train the proposed model end-to-end.
arXiv Detail & Related papers (2024-08-29T15:51:06Z) - DA-HFNet: Progressive Fine-Grained Forgery Image Detection and Localization Based on Dual Attention [12.36906630199689]
We construct a DA-HFNet forged image dataset guided by text or image-assisted GAN and Diffusion model.
Our goal is to utilize a hierarchical progressive network to capture forged artifacts at different scales for detection and localization.
arXiv Detail & Related papers (2024-06-03T16:13:33Z) - Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - Leveraging High-Resolution Features for Improved Deep Hashing-based Image Retrieval [0.10923877073891444]
We propose a novel methodology that utilizes High-Resolution Networks (HRNets) as the backbone for the deep hashing task, termed High-Resolution Hashing Network (HHNet).
Our approach demonstrates superior performance compared to existing methods across all tested benchmark datasets, including CIFAR-10, NUS-WIDE, MS COCO, and ImageNet.
arXiv Detail & Related papers (2024-03-20T16:54:55Z) - Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for
Advanced Object Detection [55.2480439325792]
We present an in-depth evaluation of an object detection model that integrates the LSKNet backbone with the DiffusionDet head.
The proposed model achieves a mean average precision (mAP) of approximately 45.7%, a significant improvement.
This advancement underscores the effectiveness of the proposed modifications and sets a new benchmark in aerial image analysis.
arXiv Detail & Related papers (2023-11-21T19:49:13Z) - Robustifying Deep Vision Models Through Shape Sensitization [19.118696557797957]
We propose a simple, lightweight adversarial augmentation technique that explicitly incentivizes the network to learn holistic shapes.
Our augmentations superpose edgemaps from one image onto another image with shuffled patches, using a randomly determined mixing proportion.
We show that our augmentations significantly improve classification accuracy and robustness measures on a range of datasets and neural architectures.
arXiv Detail & Related papers (2022-11-14T11:17:46Z) - Sci-Net: a Scale Invariant Model for Building Detection from Aerial
Images [0.0]
We propose a Scale-invariant neural network (Sci-Net) that is able to segment buildings present in aerial images at different spatial resolutions.
Specifically, we modified the U-Net architecture and fused it with dense Atrous Spatial Pyramid Pooling (ASPP) to extract fine-grained multi-scale representations.
arXiv Detail & Related papers (2021-11-12T16:45:20Z) - Pairwise Relation Learning for Semi-supervised Gland Segmentation [90.45303394358493]
We propose a pairwise relation-based semi-supervised (PRS2) model for gland segmentation on histology images.
This model consists of a segmentation network (S-Net) and a pairwise relation network (PR-Net).
We evaluate our model against five recent methods on the GlaS dataset and three recent methods on the CRAG dataset.
arXiv Detail & Related papers (2020-08-06T15:02:38Z) - From ImageNet to Image Classification: Contextualizing Progress on
Benchmarks [99.19183528305598]
We study how specific design choices in the ImageNet creation process impact the fidelity of the resulting dataset.
Our analysis pinpoints how a noisy data collection pipeline can lead to a systematic misalignment between the resulting benchmark and the real-world task it serves as a proxy for.
arXiv Detail & Related papers (2020-05-22T17:39:16Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for image restoration tasks.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z) - Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.