Satellite Image and Machine Learning based Knowledge Extraction in the
Poverty and Welfare Domain
- URL: http://arxiv.org/abs/2203.01068v1
- Date: Wed, 2 Mar 2022 12:38:20 GMT
- Title: Satellite Image and Machine Learning based Knowledge Extraction in the
Poverty and Welfare Domain
- Authors: Ola Hall, Mattias Ohlsson and Thorsteinn Rögnvaldsson
- Abstract summary: We review the literature focusing on three core elements relevant in this context: transparency, interpretability, and explainability.
We argue that explainability is essential to support wider dissemination and acceptance of this research.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in artificial intelligence and machine learning have created
a step change in how to measure human development indicators, in particular
asset-based poverty. The combination of satellite imagery and machine learning
has the capability to estimate poverty at a level similar to what is achieved
with workhorse methods such as face-to-face interviews and household surveys.
An increasingly important issue beyond static estimations is whether this
technology can contribute to scientific discovery and consequently new
knowledge in the poverty and welfare domain. A foundation for achieving
scientific insights is domain knowledge, which in turn translates into
explainability and scientific consistency. We review the literature focusing on
three core elements relevant in this context: transparency, interpretability,
and explainability, and investigate how they relate to the poverty, machine
learning and satellite imagery nexus. Our review of the field shows that the
status of the three core elements of explainable machine learning
(transparency, interpretability and domain knowledge) is varied and does not
completely fulfill the requirements set up for scientific insights and
discoveries. We argue that explainability is essential to support wider
dissemination and acceptance of this research, and explainability means more
than just interpretability.
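To make the measurement pipeline behind this claim concrete, the sketch below shows a minimal, hypothetical version of the common transfer-learning recipe in this literature: a pretrained CNN turns satellite tiles into feature vectors, and a simple regressor maps those features to a survey-based wealth index. The random stand-in data, ResNet-18 backbone, and ridge regressor are illustrative assumptions, not the authors' method.

```python
import numpy as np
import torch
from torchvision import models
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Pretrained CNN as a fixed feature extractor (classification head removed;
# downloading the ImageNet weights requires network access).
cnn = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
cnn.fc = torch.nn.Identity()
cnn.eval()

# Random stand-ins for real data: image tiles centred on surveyed villages
# and the asset-based wealth index computed from household surveys.
tiles = torch.randn(20, 3, 224, 224)
wealth = np.random.randn(20)

with torch.no_grad():
    features = cnn(tiles).numpy()  # (20, 512) feature vectors

# Ridge regression from image features to the wealth index; cross-validated
# R^2 is the usual headline metric in this literature.
scores = cross_val_score(Ridge(alpha=1.0), features, wealth, scoring="r2", cv=5)
print(f"mean cross-validated R^2: {scores.mean():.3f}")
```

Everything upstream of the regressor is a black box, which is exactly where the transparency and interpretability questions raised in the abstract enter.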
Related papers
- Fairness and Bias Mitigation in Computer Vision: A Survey [61.01658257223365]
Computer vision systems are increasingly being deployed in high-stakes real-world applications.
There is a dire need to ensure that they do not propagate or amplify any discriminatory tendencies in historical or human-curated data.
This paper presents a comprehensive survey on fairness that summarizes and sheds light on ongoing trends and successes in the context of computer vision.
arXiv Detail & Related papers (2024-08-05T13:44:22Z)
- Towards a Benchmark for Scientific Understanding in Humans and Machines [2.714583452862024]
We propose a framework to create a benchmark for scientific understanding, utilizing tools from philosophy of science.
We adopt a behavioral notion according to which genuine understanding should be recognized as an ability to perform certain tasks.
arXiv Detail & Related papers (2023-04-20T14:05:53Z)
- Mind the Gap! Bridging Explainable Artificial Intelligence and Human Understanding with Luhmann's Functional Theory of Communication [5.742215677251865]
We apply social systems theory to highlight challenges in explainable artificial intelligence.
We aim to reinvigorate the technical research in the direction of interactive and iterative explainers.
arXiv Detail & Related papers (2023-02-07T13:31:02Z)
- Beyond Interpretable Benchmarks: Contextual Learning through Cognitive and Multimodal Perception [0.0]
This study contends that the Turing Test is misinterpreted as an attempt to anthropomorphize computer systems.
It emphasizes tacit learning as a cornerstone of general-purpose intelligence, despite its lack of overt interpretability.
arXiv Detail & Related papers (2022-12-04T08:30:04Z)
- Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence [11.472707084860875]
We define explainability as (logical) reasoning applied to transparent insights (into black boxes) interpreted under certain background knowledge.
We revisit the trade-off between transparency and predictive power and its implications for ante-hoc and post-hoc explainers.
We discuss components of the machine learning workflow that may be in need of interpretability, building on a range of ideas from human-centred explainability.
arXiv Detail & Related papers (2021-12-29T09:21:33Z)
- Scientia Potentia Est -- On the Role of Knowledge in Computational Argumentation [52.903665881174845]
We propose a pyramid of types of knowledge required in computational argumentation.
We briefly discuss the state of the art on the role and integration of these types in the field.
arXiv Detail & Related papers (2021-07-01T08:12:41Z)
- Knowledge as Invariance -- History and Perspectives of Knowledge-augmented Machine Learning [69.99522650448213]
Research in machine learning is at a turning point.
Research interests are shifting away from increasing the performance of highly parameterized models to exceedingly specific tasks.
This white paper provides an introduction and discussion of this emerging field in machine learning research.
arXiv Detail & Related papers (2020-12-21T15:07:19Z)
- A Data-Driven Study of Commonsense Knowledge using the ConceptNet Knowledge Base [8.591839265985412]
Acquiring commonsense knowledge and reasoning is recognized as an important frontier in achieving general Artificial Intelligence (AI).
In this paper, we propose and conduct a systematic study to enable a deeper understanding of commonsense knowledge by doing an empirical and structural analysis of the ConceptNet knowledge base.
Detailed experimental results on three carefully designed research questions, using state-of-the-art unsupervised graph representation learning ('embedding') and clustering techniques, reveal deep substructures in ConceptNet relations (a toy embed-and-cluster version is sketched after this entry).
arXiv Detail & Related papers (2020-11-28T08:08:25Z)
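A minimal sketch of the embed-then-cluster analysis described above, with a handful of made-up triples standing in for ConceptNet and a truncated SVD of the adjacency matrix standing in for whichever embedding method the paper actually uses:

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# Toy stand-in for ConceptNet (subject, relation, object) triples.
triples = [("dog", "IsA", "animal"), ("cat", "IsA", "animal"),
           ("car", "IsA", "vehicle"), ("bus", "IsA", "vehicle"),
           ("dog", "CapableOf", "bark"), ("car", "UsedFor", "transport")]

nodes = sorted({n for s, _, o in triples for n in (s, o)})
idx = {n: i for i, n in enumerate(nodes)}

# Symmetric adjacency matrix over the toy graph.
A = np.zeros((len(nodes), len(nodes)))
for s, _, o in triples:
    A[idx[s], idx[o]] = A[idx[o], idx[s]] = 1.0

# Adjacency spectral embedding via truncated SVD, then k-means clustering;
# the cluster assignments surface the graph's two thematic substructures.
emb = TruncatedSVD(n_components=2, random_state=0).fit_transform(A)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(emb)
for node, lab in zip(nodes, labels):
    print(node, "-> cluster", lab)
```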
- Neuro-symbolic Architectures for Context Understanding [59.899606495602406]
We propose the use of hybrid AI methodology as a framework for combining the strengths of data-driven and knowledge-driven approaches.
Specifically, we inherit the concept of neuro-symbolism as a way of using knowledge-bases to guide the learning progress of deep neural networks (a toy constraint-as-loss sketch follows this entry).
arXiv Detail & Related papers (2020-03-09T15:04:07Z)
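One common way to let a knowledge base guide a neural network, sketched below under illustrative assumptions: a KB rule ("car implies vehicle") is turned into a differentiable penalty added to the training loss. The tiny network, toy data, and penalty weight are all hypothetical, not the paper's architecture.

```python
import torch
import torch.nn as nn

LABELS = ["car", "vehicle", "animal"]  # hypothetical multi-label classes
CAR, VEHICLE = 0, 1

net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, len(LABELS)))
bce = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(64, 16)                             # toy features
y = torch.randint(0, 2, (64, len(LABELS))).float()  # toy multi-hot labels

for step in range(100):
    logits = net(x)
    p = torch.sigmoid(logits)
    # KB rule "car -> vehicle" as a soft constraint: penalize probability
    # mass assigned to "car and not vehicle".
    rule_violation = (p[:, CAR] * (1 - p[:, VEHICLE])).mean()
    loss = bce(logits, y) + 0.5 * rule_violation
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```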
- Generating Interpretable Poverty Maps using Object Detection in Satellite Images [80.35540308137043]
We demonstrate an interpretable computational framework to accurately predict poverty at a local level by applying object detectors to satellite images.
Using the weighted counts of objects as features, we achieve 0.539 Pearson's r^2 in predicting village-level poverty in Uganda, a 31% improvement over existing (and less interpretable) benchmarks (a toy counts-as-features version is sketched after this entry).
arXiv Detail & Related papers (2020-02-05T02:50:01Z)
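A toy version of the counts-as-features recipe from the entry above: per-class object counts from a detector become the feature vector for a linear poverty model. The detector classes, detections, and poverty rates are made up; the paper's actual detector and data differ.

```python
import numpy as np
from collections import Counter
from sklearn.linear_model import LinearRegression

CLASSES = ["building", "truck", "farmland"]  # hypothetical detector classes

def count_features(detections, threshold=0.5):
    """Collapse one tile's detections, e.g. [("building", 0.9), ...],
    into a fixed-length vector of per-class counts."""
    counts = Counter(label for label, score in detections if score >= threshold)
    return [counts[c] for c in CLASSES]

# Toy stand-in for real detector output over three village tiles.
detections_per_village = [
    [("building", 0.9), ("building", 0.8), ("truck", 0.7)],
    [("farmland", 0.95), ("building", 0.6)],
    [("truck", 0.85), ("farmland", 0.55), ("farmland", 0.75)],
]
poverty = np.array([0.32, 0.51, 0.44])  # survey-based poverty rates (made up)

X = np.array([count_features(d) for d in detections_per_village])
model = LinearRegression().fit(X, poverty)
print("in-sample r^2:", model.score(X, poverty))
```

Because each feature is a literal object count, the fitted coefficients can be read off directly, which is what makes this framework more interpretable than end-to-end CNN pipelines.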
- A Review on Intelligent Object Perception Methods Combining Knowledge-based Reasoning and Machine Learning [60.335974351919816]
Object perception is a fundamental sub-field of Computer Vision.
Recent works seek ways to integrate knowledge engineering in order to expand the level of intelligence of the visual interpretation of objects.
arXiv Detail & Related papers (2019-12-26T13:26:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.