Impacts of Color and Texture Distortions on Earth Observation Data in Deep Learning
- URL: http://arxiv.org/abs/2403.04385v2
- Date: Fri, 12 Apr 2024 10:15:45 GMT
- Title: Impacts of Color and Texture Distortions on Earth Observation Data in Deep Learning
- Authors: Martin Willbo, Aleksis Pirinen, John Martinsson, Edvin Listo Zec, Olof Mogren, Mikael Nilsson
- Abstract summary: Land cover classification and change detection are important applications of remote sensing and Earth observation.
However, the influence of different visual characteristics of the input EO data on a model's predictions is not well understood.
We conduct experiments with multiple state-of-the-art segmentation networks for land cover classification and show that they are in general more sensitive to texture than to color distortions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Land cover classification and change detection are two important applications of remote sensing and Earth observation (EO) that have benefited greatly from the advances of deep learning. Convolutional and transformer-based U-net models are the state-of-the-art architectures for these tasks, and their performances have been boosted by an increased availability of large-scale annotated EO datasets. However, the influence of different visual characteristics of the input EO data on a model's predictions is not well understood. In this work we systematically examine model sensitivities with respect to several color- and texture-based distortions on the input EO data during inference, given models that have been trained without such distortions. We conduct experiments with multiple state-of-the-art segmentation networks for land cover classification and show that they are in general more sensitive to texture than to color distortions. Beyond revealing intriguing characteristics of widely used land cover classification models, our results can also be used to guide the development of more robust models within the EO domain.
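The protocol the abstract describes can be illustrated with a short sketch: apply a color distortion (a hue shift) and a texture distortion (a Gaussian blur) to the same input at inference time, then measure how much the segmentation prediction changes. This is a minimal sketch under assumed distortion choices and an assumed pixel-agreement metric, not the authors' exact setup; the stand-in model and random tile are placeholders.

```python
# Minimal sketch of inference-time distortion sensitivity for a segmentation
# model. The distortions (hue shift, Gaussian blur) and the pixel-agreement
# metric are illustrative assumptions, not the paper's exact protocol.
import torch
import torchvision.transforms.functional as TF

def prediction_agreement(model, image, distort):
    """Fraction of pixels whose predicted class is unchanged by `distort`."""
    model.eval()
    with torch.no_grad():
        clean = model(image.unsqueeze(0)).argmax(dim=1)            # (1, H, W)
        shifted = model(distort(image).unsqueeze(0)).argmax(dim=1)
    return (clean == shifted).float().mean().item()

# Color distortion: shift hue; texture distortion: Gaussian blur.
color_distort = lambda img: TF.adjust_hue(img, hue_factor=0.1)
texture_distort = lambda img: TF.gaussian_blur(img, kernel_size=9)

if __name__ == "__main__":
    # Stand-in "segmentation model": a 1x1 conv producing per-pixel logits.
    model = torch.nn.Conv2d(3, 5, kernel_size=1)
    image = torch.rand(3, 128, 128)  # a random RGB tile as a placeholder
    print("agreement under color distortion  :",
          prediction_agreement(model, image, color_distort))
    print("agreement under texture distortion:",
          prediction_agreement(model, image, texture_distort))
```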
Related papers
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- The Importance of Model Inspection for Better Understanding Performance Characteristics of Graph Neural Networks [15.569758991934934]
We investigate the effect of modelling choices on the feature learning characteristics of graph neural networks applied to a brain shape classification task.
We find substantial differences in the feature embeddings at different layers of the models.
arXiv Detail & Related papers (2024-05-02T13:26:18Z)
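The layer-wise embedding inspection this entry alludes to can be sketched with forward hooks that capture intermediate features. The tiny MLP and printed statistics below are placeholder assumptions, since the paper works with graph neural networks on a brain shape classification task.

```python
# Sketch of inspecting feature embeddings at different layers via forward
# hooks. The two-layer MLP is a placeholder assumption; the paper studies
# graph neural networks on brain shapes.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
captured = {}

def make_hook(name):
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

for idx, layer in enumerate(model):
    layer.register_forward_hook(make_hook(f"layer{idx}"))

x = torch.randn(4, 16)   # a small batch of placeholder inputs
model(x)                 # hooks populate `captured` during the forward pass
for name, emb in captured.items():
    print(name, tuple(emb.shape), "mean activation:", emb.mean().item())
```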
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
- Quantifying Overfitting: Introducing the Overfitting Index [0.0]
Overfitting occurs when a model performs well on training data but falters on unseen data.
This paper introduces the Overfitting Index (OI), a novel metric devised to quantitatively assess a model's tendency to overfit.
Our results underscore the variable overfitting behaviors across architectures and highlight the mitigative impact of data augmentation.
arXiv Detail & Related papers (2023-08-16T21:32:57Z)
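The entry above does not define the Overfitting Index itself, so the sketch below uses a simple normalized train-validation accuracy gap as a stand-in for the idea of quantifying overfitting. This is an illustrative proxy, not the paper's OI.

```python
# Illustrative stand-in for quantifying overfitting as a normalized
# train-validation accuracy gap. This is NOT the paper's Overfitting Index,
# whose exact definition is not given in the summary above.
def overfitting_gap(train_acc, val_acc):
    """Gap between training and validation accuracy, normalized by
    training accuracy (0 = no gap, 1 = maximal gap)."""
    if train_acc <= 0:
        raise ValueError("training accuracy must be positive")
    return max(0.0, (train_acc - val_acc) / train_acc)

print(overfitting_gap(0.98, 0.80))  # ~0.18: noticeable overfitting
print(overfitting_gap(0.90, 0.89))  # ~0.01: model generalizes well
```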
- Studying How to Efficiently and Effectively Guide Models with Explanations [52.498055901649025]
'Model guidance' is the idea of regularizing models' explanations to ensure that they are "right for the right reasons".
We conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets.
Specifically, we guide the models via bounding box annotations, which are much cheaper to obtain than the commonly used segmentation masks.
arXiv Detail & Related papers (2023-03-21T15:34:50Z)
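The bounding-box guidance idea above can be sketched as a loss term that penalizes attribution mass falling outside the annotated box. Plain input-gradient saliency and the normalized penalty are assumptions here; the paper evaluates several attribution methods and loss functions.

```python
# Sketch of a model-guidance loss that penalizes attribution mass falling
# outside a bounding-box annotation. Input-gradient saliency and the
# normalized penalty are illustrative assumptions.
import torch

def guidance_loss(model, image, box_mask):
    """`box_mask` is 1 inside the annotated box, 0 outside (H, W)."""
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0)).max()            # top class score
    (grad,) = torch.autograd.grad(score, image, create_graph=True)
    saliency = grad.abs().sum(dim=0)                   # (H, W) attribution
    outside = saliency * (1.0 - box_mask)              # mass outside the box
    return outside.sum() / (saliency.sum() + 1e-8)     # fraction to penalize

model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
image = torch.rand(3, 32, 32)
box_mask = torch.zeros(32, 32)
box_mask[8:24, 8:24] = 1.0                             # a toy bounding box
print("guidance penalty:", guidance_loss(model, image, box_mask).item())
```

Because the penalty is built with `create_graph=True`, it can be added to a standard classification loss during training.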
- Fact or Artifact? Revise Layer-wise Relevance Propagation on various ANN Architectures [0.0]
Layer-wise relevance propagation (LRP) is a powerful technique to reveal insights into various artificial neural network (ANN) architectures.
We show techniques to control model focus and give guidance to improve the quality of obtained relevance maps to separate facts from artifacts.
arXiv Detail & Related papers (2023-02-23T20:26:58Z)
- A Study on the Generality of Neural Network Structures for Monocular Depth Estimation [14.09373215954704]
We investigate various backbone networks with respect to the generalization of monocular depth estimation.
We evaluate state-of-the-art models on both in-distribution and out-of-distribution datasets.
We observe that Transformers exhibit a strong shape bias, whereas CNNs have a strong texture bias.
arXiv Detail & Related papers (2023-01-09T04:58:12Z)
- CHALLENGER: Training with Attribution Maps [63.736435657236505]
We show that utilizing attribution maps for training neural networks can improve regularization of models and thus increase performance.
In particular, we show that our generic, domain-independent approach yields state-of-the-art results on vision, natural language processing, and time-series tasks.
arXiv Detail & Related papers (2022-05-30T13:34:46Z)
- Investigating classification learning curves for automatically generated and labelled plant images [0.1338174941551702]
We present a dataset of plant images with representatives of crops and weeds common to the Manitoba prairies at different growth stages.
We determine the learning curve for a classification task on this data with the ResNet architecture.
We investigate how label noise and the reduction of trainable parameters impacts the learning curve on this dataset.
arXiv Detail & Related papers (2022-05-22T23:28:42Z)
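Tracing such a learning curve amounts to training on increasing subset sizes and recording held-out accuracy. The sketch below uses logistic regression on synthetic data as a stand-in for the paper's ResNet on plant images, with a label-flipping step to mimic the label-noise experiment.

```python
# Sketch of tracing a learning curve: train on increasing subset sizes and
# record held-out accuracy. Logistic regression on synthetic data stands in
# for the paper's ResNet on plant images; a fraction of training labels is
# flipped to mimic the label-noise experiment.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
label_noise = 0.1                              # fraction of labels to flip
flip = rng.random(len(y_train)) < label_noise
y_noisy = np.where(flip, 1 - y_train, y_train)  # binary labels: flip 0<->1

for n in (50, 100, 200, 400, 800, len(X_train)):
    clf = LogisticRegression(max_iter=1000).fit(X_train[:n], y_noisy[:n])
    print(f"n={n:4d}  test accuracy={clf.score(X_test, y_test):.3f}")
```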
- Stereopagnosia: Fooling Stereo Networks with Adversarial Perturbations [71.00754846434744]
We show that imperceptible additive perturbations can significantly alter the disparity map.
We show that, when used for adversarial data augmentation, our perturbations result in trained models that are more robust.
arXiv Detail & Related papers (2020-09-21T19:20:09Z)
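A single-step, FGSM-style sketch of such an additive perturbation is shown below. The one-layer convolutional "stereo network" and the sign-gradient attack are illustrative assumptions; the paper's attacks on real stereo architectures are more involved.

```python
# FGSM-style sketch of an imperceptible additive perturbation that changes a
# network's dense output. The toy one-layer conv "stereo net" and the
# single-step sign attack are illustrative assumptions.
import torch

net = torch.nn.Conv2d(6, 1, kernel_size=3, padding=1)  # toy stereo net
left = torch.rand(1, 3, 64, 64)                         # left image
right = torch.rand(1, 3, 64, 64)                        # right image
pair = torch.cat([left, right], dim=1).requires_grad_(True)

disparity = net(pair)                                   # (1, 1, 64, 64)
loss = disparity.abs().mean()                           # objective to increase
loss.backward()

epsilon = 2.0 / 255.0                                   # imperceptible budget
adv_pair = (pair + epsilon * pair.grad.sign()).clamp(0, 1).detach()
with torch.no_grad():
    delta = (net(adv_pair) - disparity).abs().mean()
print("mean change in predicted disparity:", delta.item())
```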
- RGB-D Salient Object Detection: A Survey [195.83586883670358]
We provide a comprehensive survey of RGB-D based SOD models from various perspectives.
We also review SOD models and popular benchmark datasets from this domain.
We discuss several challenges and open directions of RGB-D based SOD for future research.
arXiv Detail & Related papers (2020-08-01T10:01:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.