Investigating the Segment Anything Foundation Model for Mapping Smallholder Agriculture Field Boundaries Without Training Labels
- URL: http://arxiv.org/abs/2407.01846v1
- Date: Mon, 1 Jul 2024 23:06:02 GMT
- Title: Investigating the Segment Anything Foundation Model for Mapping Smallholder Agriculture Field Boundaries Without Training Labels
- Authors: Pratyush Tripathy, Kathy Baylis, Kyle Wu, Jyles Watson, Ruizhe Jiang,
- Abstract summary: This study explores the Segment Anything Model (SAM) to delineate agricultural field boundaries in Bihar, India.
We evaluate SAM's performance across three model checkpoints, various input sizes, multi-date satellite images, and edge-enhanced imagery.
Using different input image sizes improves accuracy, with the most significant improvement observed when using multi-date satellite images.
- Score: 0.24966046892475396
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Accurate mapping of agricultural field boundaries is crucial for enhancing outcomes like precision agriculture, crop monitoring, and yield estimation. However, extracting these boundaries from satellite images is challenging, especially for smallholder farms and data-scarce environments. This study explores the Segment Anything Model (SAM) to delineate agricultural field boundaries in Bihar, India, using 2-meter resolution SkySat imagery without additional training. We evaluate SAM's performance across three model checkpoints, various input sizes, multi-date satellite images, and edge-enhanced imagery. Our results show that SAM correctly identifies about 58% of field boundaries, comparable to other approaches requiring extensive training data. Using different input image sizes improves accuracy, with the most significant improvement observed when using multi-date satellite images. This work establishes a proof of concept for using SAM and maximizing its potential in agricultural field boundary mapping. It highlights SAM's potential for delineating agricultural field boundaries in training-data-scarce settings, enabling a wide range of agriculture-related analyses.
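The zero-shot setup described above can be illustrated with SAM's public API. The sketch below is a minimal example, not the authors' exact pipeline: the checkpoint file, tile path, and generator parameters are placeholder assumptions.

```python
# Minimal zero-shot sketch: run SAM's automatic mask generator on a single
# 3-band satellite tile. Checkpoint file, tile path, and parameter values
# are illustrative placeholders, not the paper's exact configuration.
import numpy as np
from PIL import Image
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load one of the three released SAM checkpoints (ViT-B, ViT-L, or ViT-H).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h_4b8939.pth")
sam.to("cuda")

mask_generator = SamAutomaticMaskGenerator(
    sam,
    points_per_side=32,        # denser point prompts help small fields
    min_mask_region_area=100,  # drop tiny spurious masks (in pixels)
)

# SAM expects an HxWx3 uint8 array, so a SkySat RGB tile fits directly.
tile = np.array(Image.open("skysat_tile_rgb.png").convert("RGB"))
masks = mask_generator.generate(tile)

# Each result holds a binary 'segmentation' array plus quality scores;
# candidate field boundaries can be vectorized from these masks downstream.
print(len(masks), "candidate field masks")
```

Swapping the RGB tile for a different 3-band composite (for example the multi-date or edge-enhanced inputs evaluated in the paper) only changes the array passed to `generate`.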
Related papers
- Multi-Region Transfer Learning for Segmentation of Crop Field Boundaries in Satellite Images with Limited Labels [6.79949280366368]
We present an approach for segmentation of crop field boundaries in satellite images in regions lacking labeled data.
We show that our approach outperforms existing methods and that multi-region transfer learning substantially boosts performance for multiple model architectures.
arXiv Detail & Related papers (2024-03-29T22:24:12Z)
- Generating Diverse Agricultural Data for Vision-Based Farming Applications [74.79409721178489]
This model is capable of simulating distinct growth stages of plants, diverse soil conditions, and randomized field arrangements under varying lighting conditions.
Our dataset includes 12,000 images with semantic labels, offering a comprehensive resource for computer vision tasks in precision agriculture.
arXiv Detail & Related papers (2024-03-27T08:42:47Z)
- Can SAM recognize crops? Quantifying the zero-shot performance of a semantic segmentation foundation model on generating crop-type maps using satellite imagery for precision agriculture [4.825257766966091]
Crop-type maps are key information for decision-support tools.
We investigate the capabilities of Meta AI's Segment Anything Model (SAM) for the crop-map prediction task.
SAM accepts at most three input channels, and its zero-shot usage is class-agnostic, which poses unique challenges for using it directly for crop-type mapping (common workarounds are sketched after this entry).
arXiv Detail & Related papers (2023-11-25T23:40:09Z)
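Since SAM accepts only three input channels and its masks carry no class labels, a common workaround (sketched here as an assumption, not that paper's exact method) is to compose three bands into a false-color uint8 input and then assign each mask a crop class by majority vote against a reference label raster. Band order, scaling, and the reference layer are hypothetical.

```python
# Hedged sketch of the two workarounds mentioned above: (1) squeeze
# multispectral data into SAM's 3-channel input, (2) turn class-agnostic
# masks into a crop-type map via majority vote against reference labels.
import numpy as np

def to_three_channel(bands: np.ndarray) -> np.ndarray:
    """Pick three bands (assumed NIR, red, green at indices 3, 2, 1 of a
    CxHxW stack) and rescale each to uint8 with a 2-98 percentile stretch."""
    composite = bands[[3, 2, 1], :, :].astype(np.float32)
    out = np.zeros_like(composite, dtype=np.uint8)
    for i, band in enumerate(composite):
        lo, hi = np.percentile(band, (2, 98))
        out[i] = np.clip((band - lo) / (hi - lo + 1e-6) * 255, 0, 255)
    return out.transpose(1, 2, 0)  # HxWx3, as SAM expects

def label_masks(masks, reference: np.ndarray):
    """Give each class-agnostic SAM mask the majority crop class found
    under it in a reference label raster of shape HxW."""
    labelled = []
    for m in masks:
        seg = m["segmentation"]  # boolean HxW array from SAM
        classes, counts = np.unique(reference[seg], return_counts=True)
        labelled.append((seg, int(classes[np.argmax(counts)])))
    return labelled
```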
- HarvestNet: A Dataset for Detecting Smallholder Farming Activity Using Harvest Piles and Remote Sensing [50.4506590177605]
HarvestNet is a dataset for mapping the presence of farms in the Ethiopian regions of Tigray and Amhara during 2020-2023.
We introduce a new approach based on the detection of harvest piles characteristic of many smallholder systems.
We conclude that remote sensing of harvest piles can contribute to more timely and accurate cropland assessments in food insecure regions.
arXiv Detail & Related papers (2023-08-23T11:03:28Z)
- PhenoBench -- A Large Dataset and Benchmarks for Semantic Image Interpretation in the Agricultural Domain [29.395926321984565]
We present an annotated dataset and benchmarks for the semantic interpretation of real agricultural fields.
Our dataset, recorded with a UAV, provides high-quality, pixel-wise annotations of crops and weeds, as well as crop leaf instances.
We provide benchmarks for various tasks on a hidden test set composed of different fields.
arXiv Detail & Related papers (2023-06-07T16:04:08Z)
- A Survey on Segment Anything Model (SAM): Vision Foundation Model Meets Prompt Engineering [49.732628643634975]
The Segment Anything Model (SAM), developed by Meta AI Research, offers a robust framework for image and video segmentation.
This survey provides a comprehensive exploration of the SAM family, including SAM and SAM 2, highlighting their advancements in granularity and contextual understanding.
arXiv Detail & Related papers (2023-05-12T07:21:59Z)
- Extended Agriculture-Vision: An Extension of a Large Aerial Image Dataset for Agricultural Pattern Analysis [11.133807938044804]
We release an improved version of the Agriculture-Vision dataset (Chiu et al., 2020b).
We extend this dataset with the release of 3600 large, high-resolution (10cm/pixel), full-field, red-green-blue and near-infrared images for pre-training.
We demonstrate the usefulness of this data by benchmarking different contrastive learning approaches on both downstream classification and semantic segmentation tasks.
arXiv Detail & Related papers (2023-03-04T17:35:24Z)
- Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of healthy versus stressed crops at the plant level.
Experimental validation demonstrated the ability to distinguish healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74 (a minimal Dice computation is sketched after this entry).
arXiv Detail & Related papers (2021-06-14T21:57:40Z)
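For reference, the Dice coefficient cited above is a standard overlap score between a predicted and a reference binary mask, Dice = 2|A∩B| / (|A| + |B|); a minimal computation with hypothetical arrays looks like this.

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice = 2*|intersection| / (|pred| + |truth|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + 1e-9)
```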
- The 1st Agriculture-Vision Challenge: Methods and Results [144.57794061346974]
The first Agriculture-Vision Challenge aims to encourage research in developing novel and effective algorithms for agricultural pattern recognition from aerial images.
Around 57 participating teams from various countries compete to achieve state-of-the-art results in aerial agricultural semantic segmentation.
This paper provides a summary of notable methods and results in the challenge.
arXiv Detail & Related papers (2020-04-21T05:02:31Z)
- Agriculture-Vision: A Large Aerial Image Database for Agricultural Pattern Analysis [110.30849704592592]
We present Agriculture-Vision: a large-scale aerial farmland image dataset for semantic segmentation of agricultural patterns.
Each image consists of RGB and Near-infrared (NIR) channels with resolution as high as 10 cm per pixel.
We annotate nine types of field anomaly patterns that are most important to farmers.
arXiv Detail & Related papers (2020-01-05T20:19:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.