Eyes on the Grass: Biodiversity-Increasing Robotic Mowing Using Deep Visual Embeddings
- URL: http://arxiv.org/abs/2512.15993v1
- Date: Wed, 17 Dec 2025 21:55:50 GMT
- Title: Eyes on the Grass: Biodiversity-Increasing Robotic Mowing Using Deep Visual Embeddings
- Authors: Lars Beckers, Arno Waes, Aaron Van Campenhout, Toon Goedemé
- Abstract summary: This paper presents a robotic mowing framework that actively enhances garden biodiversity through visual perception and adaptive decision-making. A ResNet50 network pretrained on PlantNet300K provides ecologically meaningful embeddings. Results demonstrate a strong correlation between embedding-space dispersion and expert biodiversity assessment.
- Score: 4.264842065153011
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a robotic mowing framework that actively enhances garden biodiversity through visual perception and adaptive decision-making. Unlike passive rewilding approaches, the proposed system uses deep feature-space analysis to identify and preserve visually diverse vegetation patches in camera images by selectively deactivating the mower blades. A ResNet50 network pretrained on PlantNet300K provides ecologically meaningful embeddings, from which a global deviation metric estimates biodiversity without species-level supervision. These estimates drive a selective mowing algorithm that dynamically alternates between mowing and conservation behavior. The system was implemented on a modified commercial robotic mower and validated both in a controlled mock-up lawn and on real garden datasets. Results demonstrate a strong correlation between embedding-space dispersion and expert biodiversity assessment, confirming the feasibility of deep visual diversity as a proxy for ecological richness and the effectiveness of the proposed mowing decision approach. Widespread adoption of such systems will turn ecologically worthless, monocultural lawns into vibrant, valuable biotopes that boost urban biodiversity.
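The decision loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the dispersion measure (mean distance of patch embeddings from their centroid) and the threshold value are assumptions chosen for clarity, and a real system would feed in ResNet50 embeddings of camera-image patches rather than toy vectors.

```python
import numpy as np

def embedding_dispersion(embeddings: np.ndarray) -> float:
    """Global deviation metric: mean Euclidean distance of the patch
    embeddings from their centroid. One plausible dispersion measure;
    the paper's exact formulation may differ."""
    centroid = embeddings.mean(axis=0)
    return float(np.linalg.norm(embeddings - centroid, axis=1).mean())

def mow_decision(embeddings: np.ndarray, threshold: float = 0.5) -> str:
    """Deactivate the blades ("conserve") over visually diverse patches.
    The threshold value here is an illustrative assumption."""
    return "conserve" if embedding_dispersion(embeddings) > threshold else "mow"

# Toy example: identical embeddings stand in for a monoculture patch,
# orthogonal unit vectors for a visually diverse one.
monoculture = np.ones((8, 4))   # dispersion 0.0 -> "mow"
diverse = np.eye(4)             # dispersion ~0.87 -> "conserve"
print(mow_decision(monoculture), mow_decision(diverse))  # mow conserve
```

In practice the embeddings would come from the penultimate layer of the pretrained network, and the threshold would be calibrated against expert biodiversity assessments, as the abstract's validation suggests.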
Related papers
- A continental-scale dataset of ground beetles with high-resolution images and validated morphological trait measurements [13.860603856120795]
Ground beetles serve as critical bioindicators of ecosystem health.
The National Ecological Observatory Network (NEON) maintains an extensive collection of carabid specimens from across the U.S.
We present a dataset digitizing over 13,200 NEON carabids from 30 sites spanning the continental US and Hawaii through high-resolution imaging.
The dataset includes digitally measured elytra length and width of each specimen, establishing a foundation for automated trait extraction using AI.
arXiv Detail & Related papers (2026-01-14T18:44:54Z)
- Decentralized Vision-Based Autonomous Aerial Wildlife Monitoring [55.159556673975544]
We propose a decentralized vision-based multi-quadrotor system for wildlife monitoring.
Our approach enables robust identification and tracking of large species in their natural habitat.
arXiv Detail & Related papers (2025-08-20T20:05:05Z)
- Deep Learning for Automated Identification of Vietnamese Timber Species: A Tool for Ecological Monitoring and Conservation [2.1466764570532004]
In this study, we explore the application of deep learning to automate the classification of ten wood species commonly found in Vietnam.
A custom image dataset was constructed from field-collected wood samples, and five state-of-the-art convolutional neural network architectures were evaluated.
ShuffleNetV2 achieved the best balance between classification performance and computational efficiency, with an average accuracy of 99.29% and F1-score of 99.35% over 20 independent runs.
arXiv Detail & Related papers (2025-08-13T02:54:58Z)
- BioAnalyst: A Foundation Model for Biodiversity [0.565395466029518]
We introduce BioAnalyst, the first Foundation Model tailored for biodiversity analysis and conservation planning.
BioAnalyst employs a transformer-based architecture, pretrained on extensive multi-modal datasets.
We evaluate the model's performance on two downstream use cases, demonstrating its generalisability compared to existing methods.
arXiv Detail & Related papers (2025-07-11T23:56:08Z)
- Habitat Classification from Ground-Level Imagery Using Deep Neural Networks [1.3408365072149797]
This study applies state-of-the-art deep neural network architectures to ground-level habitat imagery.
We evaluate two families of models: convolutional neural networks (CNNs) and vision transformers (ViTs).
Our results demonstrate that ViTs consistently outperform state-of-the-art CNN baselines on key classification metrics.
arXiv Detail & Related papers (2025-07-05T12:07:13Z)
- BioCLIP 2: Emergent Properties from Scaling Hierarchical Contrastive Learning [60.80381372245902]
We find emergent behaviors in biological vision models via large-scale contrastive vision-language training.
We train BioCLIP 2 on TreeOfLife-200M to distinguish different species.
We identify emergent properties in the learned embedding space of BioCLIP 2.
arXiv Detail & Related papers (2025-05-29T17:48:20Z)
- Dual-Task Learning for Dead Tree Detection and Segmentation with Hybrid Self-Attention U-Nets in Aerial Imagery [1.693687279684153]
This study introduces a hybrid postprocessing framework that refines deep learning-based tree segmentation.
Tested on high-resolution aerial imagery from boreal forests, the framework improved instance-level segmentation accuracy by 41.5%.
The framework's computational efficiency supports scalable applications, such as wall-to-wall tree mortality mapping.
arXiv Detail & Related papers (2025-03-27T12:25:20Z)
- Towards Context-Rich Automated Biodiversity Assessments: Deriving AI-Powered Insights from Camera Trap Data [0.06819010383838325]
Camera traps offer enormous new opportunities in ecological studies.
Current automated image analysis methods often lack contextual richness needed to support impactful conservation outcomes.
Here we present an integrated approach that combines deep learning-based vision and language models to improve ecological reporting using data from camera traps.
arXiv Detail & Related papers (2024-11-21T15:28:52Z)
- Semantic Image Segmentation with Deep Learning for Vine Leaf Phenotyping [59.0626764544669]
In this study, we use Deep Learning methods to semantically segment grapevine leaves images in order to develop an automated object detection system for leaf phenotyping.
Our work contributes to plant lifecycle monitoring through which dynamic traits such as growth and development can be captured and quantified.
arXiv Detail & Related papers (2022-10-24T14:37:09Z) - Neuroevolution-based Classifiers for Deforestation Detection in Tropical
Forests [62.997667081978825]
Millions of hectares of tropical forests are lost every year due to deforestation or degradation.
Monitoring and deforestation detection programs are in use, in addition to public policies for the prevention and punishment of criminals.
This paper proposes the use of pattern classifiers based on neuroevolution technique (NEAT) in tropical forest deforestation detection tasks.
arXiv Detail & Related papers (2022-08-23T16:04:12Z) - Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z) - Automatic image-based identification and biomass estimation of
invertebrates [70.08255822611812]
Time-consuming sorting and identification of taxa pose strong limitations on how many insect samples can be processed.
We propose to replace the standard manual approach of human expert-based sorting and identification with an automatic image-based technology.
We use state-of-the-art ResNet-50 and InceptionV3 CNNs for the classification task.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.