Towards Explaining Satellite Based Poverty Predictions with
Convolutional Neural Networks
- URL: http://arxiv.org/abs/2312.00416v1
- Date: Fri, 1 Dec 2023 08:40:09 GMT
- Title: Towards Explaining Satellite Based Poverty Predictions with
Convolutional Neural Networks
- Authors: Hamid Sarmadi, Thorsteinn Rögnvaldsson, Nils Roger Carlsson, Mattias
Ohlsson, Ibrahim Wahab, Ola Hall
- Abstract summary: Deep convolutional neural networks (CNNs) have been shown to predict poverty and development indicators from satellite images with surprising accuracy.
This paper presents a first attempt at analyzing the CNN's responses in detail and explaining the basis for the predictions.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks (CNNs) have been shown to predict poverty
and development indicators from satellite images with surprising accuracy. This
paper presents a first attempt at analyzing the CNN's responses in detail and
explaining the basis for the predictions. The CNN model, while trained on
relatively low resolution day- and night-time satellite images, is able to
outperform human subjects who look at high-resolution images in ranking the
Wealth Index categories. Multiple explainability experiments performed on the
model indicate the importance of object sizes and pixel colors in the image,
and provide a visualization of the importance of different structures in the
input images. A visualization is also provided of the types of images that
maximize the network's Wealth Index prediction, which gives clues about what
the CNN bases its predictions on.
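The images that maximize the network's prediction are typically found by gradient ascent on the input. A minimal numpy sketch of that idea follows, using a fixed linear scorer as a hypothetical stand-in for the trained CNN (the weight vector `w` and the 8x8 image size are assumptions for illustration, not the paper's model):

```python
import numpy as np

# Hypothetical stand-in for the trained CNN: a fixed linear scorer over a
# flattened 8x8 "satellite image". The real model is a deep CNN.
rng = np.random.default_rng(0)
w = rng.normal(size=(64,))

def wealth_score(img):
    """Predicted Wealth Index for a flattened 8x8 image."""
    return float(w @ img)

def activation_maximization(steps=200, lr=0.1):
    """Gradient ascent on the input to find an image that maximizes the
    prediction -- the idea behind the 'type images' described above."""
    img = np.zeros(64)
    for _ in range(steps):
        grad = w                      # d(score)/d(img) for a linear model
        img = img + lr * grad
        img = np.clip(img, 0.0, 1.0)  # keep pixels in a valid range
    return img

best = activation_maximization()
```

For a real CNN the gradient is obtained by backpropagation to the input rather than in closed form, and regularizers are usually added to keep the optimized image natural-looking.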
Related papers
- CNN2GNN: How to Bridge CNN with GNN [59.42117676779735]
We propose a novel CNN2GNN framework to unify CNN and GNN together via distillation.
The performance of the distilled "boosted" two-layer GNN on Mini-ImageNet is much higher than that of CNNs containing dozens of layers, such as ResNet152.
arXiv Detail & Related papers (2024-04-23T08:19:08Z)
- Seeing in Words: Learning to Classify through Language Bottlenecks [59.97827889540685]
Humans can explain their predictions using succinct and intuitive descriptions.
We show that a vision model whose feature representations are text can effectively classify ImageNet images.
arXiv Detail & Related papers (2023-06-29T00:24:42Z)
- Explaining Deep Convolutional Neural Networks for Image Classification
by Evolving Local Interpretable Model-agnostic Explanations [7.474973880539888]
The proposed method is model-agnostic, i.e., it can be utilised to explain any deep convolutional neural network models.
The evolved local explanations on four images, randomly selected from ImageNet, are presented.
The proposed method can obtain local explanations within one minute, which is more than ten times faster than LIME.
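The LIME baseline mentioned here explains one prediction by fitting a weighted linear surrogate on perturbed samples around the instance. A minimal numpy sketch of that surrogate idea, not the paper's evolved-explanation method, follows; the black-box function and perturbation scale are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(x):
    # Hypothetical nonlinear model; feature 0 dominates locally at 0.
    return np.sin(x[..., 0]) + 0.1 * x[..., 1] ** 2

def explain_locally(x0, n_samples=500, sigma=0.1):
    """LIME-style local attribution: perturb around x0, weight samples
    by proximity, and fit a linear surrogate by weighted least squares."""
    X = x0 + rng.normal(scale=sigma, size=(n_samples, x0.size))
    y = black_box(X)
    wts = np.exp(-np.sum((X - x0) ** 2, axis=1) / (2 * sigma ** 2))
    A = np.hstack([np.ones((n_samples, 1)), X])  # intercept column
    W = np.diag(wts)
    coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
    return coef[1:]  # local feature attributions (intercept dropped)

attr = explain_locally(np.array([0.0, 0.0]))
```

At the origin the surrogate recovers a slope near 1 for the sine feature and near 0 for the quadratic one, matching the local gradient.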
arXiv Detail & Related papers (2022-11-28T08:56:00Z)
- Evaluation of Pre-Trained CNN Models for Geographic Fake Image Detection [20.41074415307636]
We are witnessing the emergence of fake satellite images, which can be misleading or even threatening to national security.
We explore the suitability of several convolutional neural network (CNN) architectures for fake satellite image detection.
This work allows the establishment of new baselines and may be useful for the development of CNN-based methods for fake satellite image detection.
arXiv Detail & Related papers (2022-10-01T20:37:24Z)
- Segmentation of Roads in Satellite Images using specially modified U-Net
CNNs [0.0]
The aim of this paper is to build an image classifier for satellite images of urban scenes that identifies the portions of the images in which a road is located.
Unlike conventional computer vision algorithms, convolutional neural networks (CNNs) provide accurate and reliable results on this task.
arXiv Detail & Related papers (2021-09-29T19:08:32Z)
- Convolutional Neural Networks Demystified: A Matched Filtering
Perspective Based Tutorial [7.826806223782053]
Convolutional Neural Networks (CNNs) are the de facto standard for the analysis of large volumes of signals and images.
We revisit their operation from first principles and a matched filtering perspective.
It is our hope that this tutorial will help shed new light and physical intuition into the understanding and further development of deep neural networks.
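The matched-filtering view says a convolutional layer's core operation is a sliding cross-correlation that peaks where the input contains the filter's template. A small numpy sketch (the 1-D signal and template are made up for illustration):

```python
import numpy as np

# Embed a known template in a zero signal at offset 7.
template = np.array([1.0, 2.0, 1.0])
signal = np.zeros(20)
signal[7:10] = template

def matched_filter(signal, template):
    """Valid-mode cross-correlation -- the core op of a 1-D conv layer.
    The response is largest where the signal matches the template."""
    k = len(template)
    return np.array([signal[i:i + k] @ template
                     for i in range(len(signal) - k + 1)])

response = matched_filter(signal, template)
peak = int(np.argmax(response))  # strongest response at the embedding
```

The peak lands exactly at the offset where the template was embedded, which is the physical intuition the tutorial builds on.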
arXiv Detail & Related papers (2021-08-26T09:07:49Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network nor modifications to the original model.
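A dual-objective loss of this kind can be sketched with a quadratic toy problem: maximize a unit's activation while penalizing distance from a reference image. The linear "unit" `w`, the reference image, and the penalty strength are assumptions for illustration, not the paper's actual objective:

```python
import numpy as np

rng = np.random.default_rng(3)
w = rng.normal(size=(64,))       # weights of the unit to visualize
ref = rng.uniform(size=(64,))    # reference image (e.g. a dataset mean)
lam = 0.5                        # distance-penalty strength

def objective(x):
    """Activation term minus a squared-distance penalty to ref."""
    return w @ x - lam * np.sum((x - ref) ** 2)

x = ref.copy()
for _ in range(500):
    grad = w - 2 * lam * (x - ref)  # analytic gradient of the objective
    x = x + 0.05 * grad
```

For this quadratic objective the ascent converges to the closed-form optimum ref + w / (2 * lam); with a real CNN the gradient would come from backpropagation instead.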
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
- Probabilistic Graph Attention Network with Conditional Kernels for
Pixel-Wise Prediction [158.88345945211185]
We present a novel approach that advances the state of the art on pixel-level prediction in a fundamental aspect, i.e., structured multi-scale feature learning and fusion.
We propose a probabilistic graph attention network structure based on a novel Attention-Gated Conditional Random Fields (AG-CRFs) model for learning and fusing multi-scale representations in a principled manner.
arXiv Detail & Related papers (2021-01-08T04:14:29Z)
- A Singular Value Perspective on Model Robustness [14.591622269748974]
We show that naturally trained and adversarially robust CNNs exploit highly different features for the same dataset.
We propose Rank Integrated Gradients (RIG), the first rank-based feature attribution method to understand the dependence of CNNs on image rank.
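The dependence of a model on image rank can be probed by reconstructing the input from its top-k singular values and watching the output change. A rough numpy sketch in that spirit (much simpler than Rank Integrated Gradients; the scalar "model" and the 16x16 image are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
img = rng.normal(size=(16, 16))

def low_rank(img, k):
    """Best rank-k approximation of the image via truncated SVD."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k]

def model(img):
    # Hypothetical scalar predictor standing in for a CNN output.
    return float(np.tanh(img).sum())

# Sweep rank from 1 to full and record the model's response.
outputs = [model(low_rank(img, k)) for k in range(1, 17)]
err = np.linalg.norm(low_rank(img, 16) - img)  # full rank == original
```

How quickly the output curve saturates as k grows indicates how much of the prediction rests on low-rank image structure.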
arXiv Detail & Related papers (2020-12-07T08:09:07Z)
- Learning Deep Interleaved Networks with Asymmetric Co-Attention for
Image Restoration [65.11022516031463]
We present a deep interleaved network (DIN) that learns how information at different states should be combined for high-quality (HQ) image reconstruction.
In this paper, we propose asymmetric co-attention (AsyCA) which is attached at each interleaved node to model the feature dependencies.
Our presented DIN can be trained end-to-end and applied to various image restoration tasks.
arXiv Detail & Related papers (2020-10-29T15:32:00Z)
- Attentive Graph Neural Networks for Few-Shot Learning [74.01069516079379]
Graph Neural Networks (GNNs) have demonstrated superior performance in many challenging applications, including few-shot learning tasks.
Despite their powerful capacity to learn and generalize from few samples, GNNs usually suffer from severe over-fitting and over-smoothing as the model becomes deep.
We propose a novel Attentive GNN to tackle these challenges, by incorporating a triple-attention mechanism.
arXiv Detail & Related papers (2020-07-14T07:43:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences arising from its use.