A General Purpose Neural Architecture for Geospatial Systems
- URL: http://arxiv.org/abs/2211.02348v1
- Date: Fri, 4 Nov 2022 09:58:57 GMT
- Title: A General Purpose Neural Architecture for Geospatial Systems
- Authors: Nasim Rahaman and Martin Weiss and Frederik Träuble and Francesco
  Locatello and Alexandre Lacoste and Yoshua Bengio and Chris Pal and Li Erran
  Li and Bernhard Schölkopf
- Abstract summary: We present a roadmap towards the construction of a general-purpose neural architecture (GPNA) with a geospatial inductive bias.
We envision how such a model may facilitate cooperation between members of the community.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Geospatial Information Systems are used by researchers and Humanitarian
Assistance and Disaster Response (HADR) practitioners to support a wide variety
of important applications. However, collaboration between these actors is
difficult due to the heterogeneous nature of geospatial data modalities (e.g.,
multi-spectral images of various resolutions, timeseries, weather data) and
diversity of tasks (e.g., regression of human activity indicators or detecting
forest fires). In this work, we present a roadmap towards the construction of a
general-purpose neural architecture (GPNA) with a geospatial inductive bias,
pre-trained on large amounts of unlabelled earth observation data in a
self-supervised manner. We envision how such a model may facilitate cooperation
between members of the community. We show preliminary results on the first step
of the roadmap, where we instantiate an architecture that can process a wide
variety of geospatial data modalities and demonstrate that it can achieve
competitive performance with domain-specific architectures on tasks relating to
the U.N.'s Sustainable Development Goals.
Related papers
- Geolocation with Real Human Gameplay Data: A Large-Scale Dataset and Human-Like Reasoning Framework [59.42946541163632]
We introduce a comprehensive geolocation framework with three key components:
GeoComp, a large-scale dataset; GeoCoT, a novel reasoning method; and GeoEval, an evaluation metric.
We demonstrate that GeoCoT significantly boosts geolocation accuracy by up to 25% while enhancing interpretability.
arXiv Detail & Related papers (2025-02-19T14:21:25Z) - PEACE: Empowering Geologic Map Holistic Understanding with MLLMs [64.58959634712215]
Geologic maps, as fundamental diagrams in geology, provide critical insights into the structure and composition of Earth's subsurface and surface.
Despite their significance, current Multimodal Large Language Models (MLLMs) often fall short in geologic map understanding.
To quantify this gap, we construct GeoMap-Bench, the first-ever benchmark for evaluating MLLMs in geologic map understanding.
arXiv Detail & Related papers (2025-01-10T18:59:42Z) - A comprehensive GeoAI review: Progress, Challenges and Outlooks [0.0]
Geospatial Artificial Intelligence (GeoAI) has gained traction in the most relevant research works and industrial applications.
This paper offers a comprehensive review of GeoAI as a synergistic concept applying Artificial Intelligence (AI) methods and models to geospatial data.
arXiv Detail & Related papers (2024-12-16T10:41:02Z) - General Geospatial Inference with a Population Dynamics Foundation Model [15.620351974173385]
Population Dynamics Foundation Model (PDFM) aims to capture relationships between diverse data modalities.
We first construct a geo-indexed dataset for postal codes and counties across the United States.
We then model this data and the complex relationships between locations using a graph neural network.
We then combine the PDFM with a state-of-the-art forecasting foundation model, TimesFM, to predict unemployment and poverty.
arXiv Detail & Related papers (2024-11-11T18:32:44Z) - Swarm Intelligence in Geo-Localization: A Multi-Agent Large Vision-Language Model Collaborative Framework [51.26566634946208]
We introduce smileGeo, a novel visual geo-localization framework.
By inter-agent communication, smileGeo integrates the inherent knowledge of these agents with additional retrieved information.
Results show that our approach significantly outperforms current state-of-the-art methods.
arXiv Detail & Related papers (2024-08-21T03:31:30Z) - Towards Vision-Language Geo-Foundation Model: A Survey [65.70547895998541]
Vision-Language Foundation Models (VLFMs) have made remarkable progress on various multimodal tasks.
This paper thoroughly reviews VLGFMs, summarizing and analyzing recent developments in the field.
arXiv Detail & Related papers (2024-06-13T17:57:30Z) - On the Opportunities and Challenges of Foundation Models for Geospatial
Artificial Intelligence [39.86997089245117]
Foundation models (FMs) can be adapted to a wide range of downstream tasks by fine-tuning, few-shot, or zero-shot learning.
We propose that one of the major challenges of developing an FM for GeoAI is addressing the multimodal nature of geospatial tasks.
arXiv Detail & Related papers (2023-04-13T19:50:17Z) - GeoNet: Benchmarking Unsupervised Adaptation across Geographies [71.23141626803287]
We study the problem of geographic robustness and make three main contributions.
First, we introduce a large-scale dataset GeoNet for geographic adaptation.
Second, we hypothesize that the major source of domain shift arises from significant variations in scene context.
Third, we conduct an extensive evaluation of several state-of-the-art unsupervised domain adaptation algorithms and architectures.
arXiv Detail & Related papers (2023-03-27T17:59:34Z) - Towards Geospatial Foundation Models via Continual Pretraining [22.825065739563296]
We propose a novel paradigm for building highly effective foundation models with minimal resource cost and carbon impact.
We first construct a compact yet diverse dataset from multiple sources to promote feature diversity, which we term GeoPile.
Then, we investigate the potential of continual pretraining from large-scale ImageNet-22k models and propose a multi-objective continual pretraining paradigm.
arXiv Detail & Related papers (2023-02-09T07:39:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.