Improving the Detection of Burnt Areas in Remote Sensing using
Hyper-features Evolved by M3GP
- URL: http://arxiv.org/abs/2002.00053v1
- Date: Fri, 31 Jan 2020 20:42:15 GMT
- Title: Improving the Detection of Burnt Areas in Remote Sensing using
Hyper-features Evolved by M3GP
- Authors: João E. Batista, Sara Silva
- Abstract summary: Intending to improve remote sensing models for the classification of burnt areas, we set two objectives.
The first is to understand the relationship between feature spaces and the predictive ability of the models, allowing us to explain the differences between learning and generalization.
We find that training on datasets built from more than one image provides models that generalize better.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One problem encountered when working with satellite images is
radiometric variation, both within a single image and across different images.
Intending to improve remote sensing models for the classification of burnt
areas, we set two objectives. The first is to understand the relationship
between feature spaces and the predictive ability of the models, allowing us to
explain the differences between learning and generalization when training and
testing on different datasets. We find that training on datasets built from
more than one image provides models that generalize better. These results are
explained by visualizing the dispersion of values in the feature space. The
second objective is to evolve hyper-features that improve the performance of
different classifiers on a variety of test sets. We find the hyper-features to
be beneficial, and obtain the best models with XGBoost, even when the
hyper-features are optimized for a different method.
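The pipeline the abstract describes, mapping raw spectral bands into an evolved feature space before training a classifier, can be sketched as follows. The two expression trees below are hypothetical stand-ins for hyper-features evolved by M3GP; the band names, reflectance values, and the NBR index are illustrative assumptions, not details taken from the paper:

```python
import math

# M3GP evolves expression trees over the original features; each tree
# becomes one dimension of a new "hyper-feature" space. The two trees
# below are hypothetical illustrations, not ones from the paper.

def nbr(pixel):
    # Normalized Burn Ratio, (NIR - SWIR) / (NIR + SWIR): a classic
    # hand-crafted index of the kind evolved hyper-features can resemble.
    nir, swir = pixel["nir"], pixel["swir"]
    return (nir - swir) / (nir + swir) if (nir + swir) else 0.0

def evolved_feature(pixel):
    # Hypothetical evolved expression: a log-scaled NIR/red ratio.
    return math.log1p(pixel["nir"] / (pixel["red"] + 1e-6))

# Toy reflectance values for two pixels (illustrative only).
pixels = [
    {"red": 0.30, "nir": 0.45, "swir": 0.25},  # vegetated
    {"red": 0.25, "nir": 0.10, "swir": 0.35},  # burnt-looking
]

# Map each pixel into the evolved feature space; a downstream
# classifier (XGBoost in the paper's best models) then trains on
# these columns instead of, or alongside, the raw bands.
feature_space = [[nbr(p), evolved_feature(p)] for p in pixels]
for row in feature_space:
    print([round(v, 3) for v in row])
```

In M3GP both the number of trees and their shapes are themselves evolved during training; the sketch fixes them only to show how the resulting mapping is applied to pixels before classification.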
Related papers
- Synergy and Diversity in CLIP: Enhancing Performance Through Adaptive Backbone Ensembling [58.50618448027103]
Contrastive Language-Image Pretraining (CLIP) stands out as a prominent method for image representation learning.
This paper explores the differences across various CLIP-trained vision backbones.
The method achieves a remarkable accuracy increase of up to 39.1% over the best single backbone.
arXiv Detail & Related papers (2024-05-27T12:59:35Z)
- Spatially Optimized Compact Deep Metric Learning Model for Similarity Search [1.0015171648915433]
Similarity search is a crucial task where spatial features decide an important output.
This study demonstrates that utilizing a single layer of involution feature extractor alongside a compact convolution model significantly enhances the performance of similarity search.
arXiv Detail & Related papers (2024-04-09T19:49:01Z)
- Superpixel Graph Contrastive Clustering with Semantic-Invariant Augmentations for Hyperspectral Images [64.72242126879503]
Hyperspectral image (HSI) clustering is an important but challenging task.
We first use 3-D and 2-D hybrid convolutional neural networks to extract the high-order spatial and spectral features of HSI.
We then design a superpixel graph contrastive clustering model to learn discriminative superpixel representations.
arXiv Detail & Related papers (2024-03-04T07:40:55Z)
- Unsupervised convolutional neural network fusion approach for change detection in remote sensing images [1.892026266421264]
We introduce a completely unsupervised shallow convolutional neural network (USCNN) fusion approach for change detection.
Our model has three features: the entire training process is conducted in an unsupervised manner, the network architecture is shallow, and the objective function is sparse.
Experimental results on four real remote sensing datasets indicate the feasibility and effectiveness of the proposed approach.
arXiv Detail & Related papers (2023-11-07T03:10:17Z)
- RGM: A Robust Generalizable Matching Model [49.60975442871967]
We propose a deep model for sparse and dense matching, termed RGM (Robust Generalist Matching).
To narrow the gap between synthetic training samples and real-world scenarios, we build a new, large-scale dataset with sparse correspondence ground truth.
We are able to mix up various dense and sparse matching datasets, significantly improving the training diversity.
arXiv Detail & Related papers (2023-10-18T07:30:08Z)
- A Dual Attentive Generative Adversarial Network for Remote Sensing Image Change Detection [6.906936669510404]
We propose a dual attentive generative adversarial network for achieving very high-resolution remote sensing image change detection tasks.
The DAGAN framework outperforms advanced methods on the LEVIR dataset, with 85.01% mean IoU and a 91.48% mean F1 score.
arXiv Detail & Related papers (2023-10-03T08:26:27Z)
- Improving Human-Object Interaction Detection via Virtual Image Learning [68.56682347374422]
Human-Object Interaction (HOI) detection aims to understand the interactions between humans and objects.
In this paper, we propose to alleviate the impact of such an unbalanced distribution via Virtual Image Learning (VIL).
A novel label-to-image approach, Multiple Steps Image Creation (MUSIC), is proposed to create a high-quality dataset that has a consistent distribution with real images.
arXiv Detail & Related papers (2023-08-04T10:28:48Z)
- Deep Diversity-Enhanced Feature Representation of Hyperspectral Images [87.47202258194719]
We rectify 3D convolution by modifying its topology to enhance the rank upper-bound.
We also propose a novel diversity-aware regularization (DA-Reg) term that acts on the feature maps to maximize independence among elements.
To demonstrate the superiority of the proposed Re³-ConvSet and DA-Reg, we apply them to various HS image processing and analysis tasks.
arXiv Detail & Related papers (2023-01-15T16:19:18Z)
- IDAN: Image Difference Attention Network for Change Detection [3.5366052026723547]
We propose a novel image difference attention network (IDAN) for remote sensing image change detection.
IDAN considers the differences in regional and edge features of images and thus optimizes the extracted image features.
The experimental results demonstrate that the F1-score of IDAN improves by 1.62% and 1.98% over the baseline model on the WHU and LEVIR-CD datasets, respectively.
arXiv Detail & Related papers (2022-08-17T13:46:13Z)
- CFC-Net: A Critical Feature Capturing Network for Arbitrary-Oriented Object Detection in Remote Sensing Images [0.9462808515258465]
In this paper, we discuss the role of discriminative features in object detection.
We then propose a Critical Feature Capturing Network (CFC-Net) to improve detection accuracy.
We show that our method achieves superior detection performance compared with many state-of-the-art approaches.
arXiv Detail & Related papers (2021-01-18T02:31:09Z)
- Multi-scale Interactive Network for Salient Object Detection [91.43066633305662]
We propose the aggregate interaction modules to integrate the features from adjacent levels.
To obtain more efficient multi-scale features, the self-interaction modules are embedded in each decoder unit.
Experimental results on five benchmark datasets demonstrate that the proposed method without any post-processing performs favorably against 23 state-of-the-art approaches.
arXiv Detail & Related papers (2020-07-17T15:41:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.