MapInWild: A Remote Sensing Dataset to Address the Question What Makes Nature Wild
- URL: http://arxiv.org/abs/2212.02265v1
- Date: Mon, 5 Dec 2022 13:45:06 GMT
- Title: MapInWild: A Remote Sensing Dataset to Address the Question What Makes Nature Wild
- Authors: Burak Ekim, Timo T. Stomberg, Ribana Roscher, Michael Schmitt
- Abstract summary: We introduce the task of wilderness mapping by means of machine learning applied to satellite imagery.
We publish MapInWild, a large-scale benchmark curated dataset for that task.
The dataset consists of 8144 images with a shape of 1920 x 1920 pixels and is approximately 350 GB in size.
- Score: 4.42251021399814
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Anthropogenic pressure (i.e., human influence) on the environment is one of
the largest causes of the loss of biological diversity. Wilderness areas, in
contrast, are home to undisturbed ecological processes. However, there is no
biophysical definition of the term wilderness. Instead, wilderness is more of a
philosophical or cultural concept and thus cannot be easily delineated or
categorized in a technical manner. With this paper, (i) we introduce the task
of wilderness mapping by means of machine learning applied to satellite imagery,
and (ii) we publish MapInWild, a large-scale benchmark dataset curated for that
task. MapInWild is a multi-modal dataset and comprises various geodata acquired
and formed from a diverse set of Earth observation sensors. The dataset
consists of 8144 images with a shape of 1920 x 1920 pixels and is approximately
350 GB in size. The images are weakly annotated with three classes derived from
the World Database on Protected Areas - Strict Nature Reserves, Wilderness
Areas, and National Parks. With the dataset, which shall serve as a testbed for
developments in fields such as explainable machine learning and environmental
remote sensing, we hope to contribute to a deepening of our understanding of
the question "What makes nature wild?".
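The abstract describes a weak, scene-level annotation scheme: each 1920 x 1920 image carries one label drawn from three protected-area designations, rather than per-pixel masks. A minimal sketch of that labeling logic is shown below; the class-to-id mapping and the `weak_label` helper are hypothetical illustrations, not part of the published dataset's API.

```python
# Sketch of scene-level (weak) labeling as described for MapInWild.
# The three class names come from the World Database on Protected Areas (WDPA);
# the integer ids and helper name here are assumptions for illustration only.

WDPA_CLASSES = {
    "Strict Nature Reserve": 0,
    "Wilderness Area": 1,
    "National Park": 2,
}

def weak_label(wdpa_category: str) -> int:
    """Map a WDPA designation to a single scene-level class id.

    Weak annotation means one label per 1920 x 1920 image,
    not a per-pixel segmentation mask.
    """
    if wdpa_category not in WDPA_CLASSES:
        raise ValueError(f"Unknown WDPA category: {wdpa_category!r}")
    return WDPA_CLASSES[wdpa_category]

print(weak_label("Wilderness Area"))  # one label for the whole image
```

Such scene-level labels are cheap to derive from existing protected-area polygons, which is what makes a dataset of this scale (8144 images, ~350 GB) practical to annotate.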
Related papers
- 360 in the Wild: Dataset for Depth Prediction and View Synthesis [66.58513725342125]
We introduce a large scale 360° videos dataset in the wild.
This dataset has been carefully scraped from the Internet and has been captured from various locations worldwide.
Each of the 25K images constituting our dataset is provided with its respective camera's pose and depth map.
arXiv Detail & Related papers (2024-06-27T05:26:38Z)
- SatBird: Bird Species Distribution Modeling with Remote Sensing and Citizen Science Data [68.2366021016172]
We present SatBird, a satellite dataset of locations in the USA with labels derived from presence-absence observation data from the citizen science database eBird.
We also provide a dataset in Kenya representing low-data regimes.
We benchmark a set of baselines on our dataset, including SOTA models for remote sensing tasks.
arXiv Detail & Related papers (2023-11-02T02:00:27Z)
- MultiEarth 2023 -- Multimodal Learning for Earth and Environment Workshop and Challenge [17.549467886161857]
MultiEarth 2023 is the second annual CVPR workshop aimed at the monitoring and analysis of the health of Earth ecosystems.
This paper presents the challenge guidelines, datasets, and evaluation metrics.
arXiv Detail & Related papers (2023-06-07T19:20:01Z)
- 3D Clothed Human Reconstruction in the Wild [67.35107130310257]
ClothWild is a 3D clothed human reconstruction framework that addresses robustness on in-the-wild images.
We propose a weakly supervised pipeline that is trainable with 2D supervision targets of in-the-wild datasets.
Our proposed ClothWild produces much more accurate and robust results than the state-of-the-art methods.
arXiv Detail & Related papers (2022-07-20T17:33:19Z)
- Exploring Wilderness Using Explainable Machine Learning in Satellite Imagery [2.823072545762534]
Wilderness areas offer important ecological and social benefits, and therefore warrant monitoring and preservation.
In this article, we explore the characteristics and appearance of the vague concept of wilderness areas via multispectral satellite imagery.
We apply a novel explainable machine learning technique to a dataset curated for the task of investigating wild and anthropogenic areas in Fennoscandia.
arXiv Detail & Related papers (2022-03-01T11:51:49Z)
- Florida Wildlife Camera Trap Dataset [48.99466876948454]
We introduce a challenging wildlife camera trap classification dataset collected from two different locations in Southwestern Florida.
The dataset consists of 104,495 images featuring visually similar species, varying illumination conditions, skewed class distribution, and including samples of endangered species.
arXiv Detail & Related papers (2021-06-23T18:53:15Z)
- Generating Physically-Consistent Satellite Imagery for Climate Visualizations [53.61991820941501]
We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that were not susceptible to flooding.
We publish our code and dataset for segmentation guided image-to-image translation in Earth observation.
arXiv Detail & Related papers (2021-04-10T15:00:15Z)
- I-Nema: A Biological Image Dataset for Nematode Recognition [3.1918817988202606]
Nematode worms are one of the most abundant metazoan groups on Earth, occupying diverse ecological niches.
Accurate recognition or identification of nematodes is of great importance for pest control, soil ecology, bio-geography, habitat conservation, and climate change mitigation.
Computer vision and image processing have seen a few successes in nematode species recognition; however, further progress is still in great demand.
arXiv Detail & Related papers (2021-03-15T12:29:37Z)
- Learning Depth With Very Sparse Supervision [57.911425589947314]
This paper explores the idea that perception gets coupled to 3D properties of the world via interaction with the environment.
We train a specialized global-local network architecture with what would be available to a robot interacting with the environment.
Experiments on several datasets show that, when ground truth is available even for just one of the image pixels, the proposed network can learn monocular dense depth estimation up to 22.5% more accurately than state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-02T10:44:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.