AgroSense: An Integrated Deep Learning System for Crop Recommendation via Soil Image Analysis and Nutrient Profiling
- URL: http://arxiv.org/abs/2509.01344v1
- Date: Mon, 01 Sep 2025 10:25:05 GMT
- Title: AgroSense: An Integrated Deep Learning System for Crop Recommendation via Soil Image Analysis and Nutrient Profiling
- Authors: Vishal Pandey, Ranjita Das, Debasmita Biswas
- Abstract summary: AgroSense is a deep-learning framework that integrates soil image classification and nutrient profiling to produce accurate and contextually relevant crop recommendations. A multimodal dataset of 10,000 paired samples drawn from publicly available Kaggle repositories was curated. The fused model achieves 98.0% accuracy, with a precision of 97.8%, a recall of 97.7%, and an F1-score of 96.75%.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meeting the increasing global demand for food security and sustainable farming requires intelligent crop recommendation systems that operate in real time. Traditional soil analysis techniques are often slow, labor-intensive, and not suitable for on-field decision-making. To address these limitations, we introduce AgroSense, a deep-learning framework that integrates soil image classification and nutrient profiling to produce accurate and contextually relevant crop recommendations. AgroSense comprises two main components: a Soil Classification Module, which leverages ResNet-18, EfficientNet-B0, and Vision Transformer architectures to categorize soil types from images; and a Crop Recommendation Module, which employs a Multi-Layer Perceptron, XGBoost, LightGBM, and TabNet to analyze structured soil data, including nutrient levels, pH, and rainfall. We curated a multimodal dataset of 10,000 paired samples drawn from publicly available Kaggle repositories, approximately 50,000 soil images across seven classes, and 25,000 nutrient profiles for experimental evaluation. The fused model achieves 98.0% accuracy, with a precision of 97.8%, a recall of 97.7%, and an F1-score of 96.75%, while RMSE and MAE drop to 0.32 and 0.27, respectively. Ablation studies underscore the critical role of multimodal coupling, and statistical validation via t-tests and ANOVA confirms the significance of our improvements. AgroSense offers a practical, scalable solution for real-time decision support in precision agriculture and paves the way for future lightweight multimodal AI systems in resource-constrained environments.
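The abstract describes a two-branch design, with image features from a CNN/ViT backbone combined with structured soil attributes before crop prediction, but does not spell out the fusion step. Below is a minimal late-fusion sketch, not the authors' model: the feature dimensions, crop count, and the random placeholder weights standing in for a trained classification head are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_and_classify(img_feat, soil_feat, n_crops=22):
    """Late fusion: concatenate image-branch embeddings with tabular soil
    features, then apply a single linear head followed by a softmax.
    The weights here are random placeholders for a trained head."""
    x = np.concatenate([img_feat, soil_feat])          # fused feature vector
    W = rng.standard_normal((n_crops, x.size)) * 0.01  # placeholder weights
    logits = W @ x
    probs = np.exp(logits - logits.max())              # numerically stable softmax
    probs /= probs.sum()
    return probs

img_feat = rng.standard_normal(512)                    # e.g. CNN penultimate features
soil_feat = np.array([90.0, 42.0, 43.0, 6.5, 202.9])   # N, P, K, pH, rainfall (example values)
probs = fuse_and_classify(img_feat, soil_feat)
print(int(probs.argmax()), float(probs.sum()))
```

In practice the linear head would be trained jointly with (or on top of) the image and tabular branches; the sketch only shows how the two modalities meet.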
Related papers
- Seeing Soil from Space: Towards Robust and Scalable Remote Soil Nutrient Analysis [0.0]
This study presents a robust and scalable modeling system for estimating soil properties in croplands. The system employs a hybrid modeling approach, combining indirect methods of modeling soil through proxies and drivers with direct spectral modeling. We validate the system on a harmonized dataset that covers Europe's cropland soils across diverse pedoclimatic zones.
arXiv Detail & Related papers (2025-12-10T12:08:55Z) - Self-Consistency in Vision-Language Models for Precision Agriculture: Multi-Response Consensus for Crop Disease Management [0.0]
This work presents a domain-aware framework for agricultural image processing that combines prompt-based expert evaluation with self-consistency mechanisms. We introduce two key innovations: (1) a prompt-based evaluation protocol that configures a language model as an expert plant pathologist for scalable assessment of image analysis outputs, and (2) a cosine-consistency self-voting mechanism that generates multiple candidate responses from agricultural images. Our approach improves diagnostic accuracy from 82.2% to 87.8%, symptom analysis from 38.9% to 52.2%, and treatment recommendation from 27.8% to 43.3%.
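The abstract names a "cosine-consistency self-voting mechanism" without detailing it. A plausible minimal reading, sketched below under that assumption, is consensus selection: embed each candidate response and keep the one with the highest mean cosine similarity to the others. The toy 2-d embeddings are illustrative only.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two non-zero vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def self_consistent_choice(embeddings):
    """Pick the candidate whose embedding has the highest mean cosine
    similarity to all other candidates (the consensus response)."""
    n = len(embeddings)
    scores = []
    for i in range(n):
        sims = [cosine(embeddings[i], embeddings[j]) for j in range(n) if j != i]
        scores.append(sum(sims) / len(sims))
    return int(np.argmax(scores))

# Toy embeddings: two near-duplicate candidates and one outlier.
cands = [np.array([1.0, 0.1]), np.array([0.9, 0.2]), np.array([-1.0, 0.8])]
best = self_consistent_choice(cands)
print(best)  # index of a near-duplicate, never the outlier
```

With real model outputs, the embeddings would come from a sentence encoder applied to each sampled response before voting.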
arXiv Detail & Related papers (2025-07-08T18:32:21Z) - Machine Learning Models for Soil Parameter Prediction Based on Satellite, Weather, Clay and Yield Data [1.546169961420396]
The AgroLens project endeavors to develop Machine Learning-based methodologies to predict soil nutrient levels without reliance on laboratory tests. The approach begins with the development of a robust European model using the LUCAS Soil dataset and Sentinel-2 satellite imagery. Advanced algorithms, including Random Forests, Extreme Gradient Boosting (XGBoost), and Fully Connected Neural Networks (FCNN), were implemented and fine-tuned for precise nutrient prediction.
arXiv Detail & Related papers (2025-03-28T09:44:32Z) - Design and Implementation of FourCropNet: A CNN-Based System for Efficient Multi-Crop Disease Detection and Management [3.4161054453684705]
This study proposes FourCropNet, a novel deep learning model designed to detect diseases in multiple crops. FourCropNet achieved the highest accuracy of 99.7% for Grape, 99.5% for Corn, and 95.3% for the combined dataset.
arXiv Detail & Related papers (2025-03-11T12:00:56Z) - Prediction of soil fertility parameters using USB-microscope imagery and portable X-ray fluorescence spectrometry [3.431158134976364]
This study investigated the use of portable X-ray fluorescence (PXRF) spectrometry and soil image analysis for rapid soil fertility assessment.
A total of 1,133 soil samples from diverse agro-climatic zones in Eastern India were analyzed.
arXiv Detail & Related papers (2024-04-17T17:57:20Z) - Generating Diverse Agricultural Data for Vision-Based Farming Applications [74.79409721178489]
This model is capable of simulating distinct growth stages of plants, diverse soil conditions, and randomized field arrangements under varying lighting conditions.
Our dataset includes 12,000 images with semantic labels, offering a comprehensive resource for computer vision tasks in precision agriculture.
arXiv Detail & Related papers (2024-03-27T08:42:47Z) - AGSPNet: A framework for parcel-scale crop fine-grained semantic change detection from UAV high-resolution imagery with agricultural geographic scene constraints [1.9151076515142376]
This paper proposes an agricultural geographic scene and parcel-scale constrained SCD framework for crops (AGSPNet).
We produce and introduce a UAV image SCD dataset (CSCD) dedicated to agricultural monitoring, encompassing multiple semantic variation types of crops in complex geographical scenes.
The results show that the crop SCD results of AGSPNet consistently outperform other deep learning SCD models in terms of quantity and quality.
arXiv Detail & Related papers (2024-01-11T20:47:28Z) - HarvestNet: A Dataset for Detecting Smallholder Farming Activity Using Harvest Piles and Remote Sensing [50.4506590177605]
HarvestNet is a dataset for mapping the presence of farms in the Ethiopian regions of Tigray and Amhara during 2020-2023.
We introduce a new approach based on the detection of harvest piles characteristic of many smallholder systems.
We conclude that remote sensing of harvest piles can contribute to more timely and accurate cropland assessments in food insecure regions.
arXiv Detail & Related papers (2023-08-23T11:03:28Z) - Agave crop segmentation and maturity classification with deep learning data-centric strategies using very high-resolution satellite imagery [101.18253437732933]
We present an Agave tequilana Weber azul crop segmentation and maturity classification using very high resolution satellite imagery.
We solve real-world deep learning problems in the very specific context of agave crop segmentation.
With the resulting accurate models, agave production forecasting can be made available for large regions.
arXiv Detail & Related papers (2023-03-21T03:15:29Z) - Jalisco's multiclass land cover analysis and classification using a novel lightweight convnet with real-world multispectral and relief data [51.715517570634994]
We present our novel lightweight (only 89k parameters) Convolutional Neural Network (ConvNet) for land cover (LC) classification and analysis.
In this work, we combine three real-world open data sources to obtain 13 channels.
Our embedded analysis anticipates the limited performance in some classes and gives us the opportunity to group the most similar ones.
arXiv Detail & Related papers (2022-01-26T14:58:51Z) - Potato Crop Stress Identification in Aerial Images using Deep Learning-based Object Detection [60.83360138070649]
The paper presents an approach for analyzing aerial images of a potato crop using deep neural networks.
The main objective is to demonstrate automated spatial recognition of a healthy versus stressed crop at a plant level.
Experimental validation demonstrated the ability for distinguishing healthy and stressed plants in field images, achieving an average Dice coefficient of 0.74.
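The Dice coefficient reported above measures overlap between a predicted segmentation mask and the reference mask. A quick standalone illustration (not the paper's code; the toy masks are made up):

```python
import numpy as np

def dice(pred, target):
    """Dice coefficient for binary masks: 2*|A intersect B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (perfect trivial agreement)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    denom = pred.sum() + target.sum()
    return 2.0 * inter / denom if denom else 1.0

pred   = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 0, 0], [0, 1, 1]])
print(dice(pred, target))  # 2*2 / (3+3) = 0.666...
```

A score of 0.74, as reported, thus indicates substantially more pixel overlap than this toy example.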
arXiv Detail & Related papers (2021-06-14T21:57:40Z) - A CNN Approach to Simultaneously Count Plants and Detect Plantation-Rows from UAV Imagery [56.10033255997329]
We propose a novel deep learning method based on a Convolutional Neural Network (CNN)
It simultaneously detects and geolocates plantation-rows while counting their plants, handling highly dense plantation configurations.
The proposed method achieved state-of-the-art performance for counting and geolocating plants and plant-rows in UAV images from different types of crops.
arXiv Detail & Related papers (2020-12-31T18:51:17Z) - Estimating Crop Primary Productivity with Sentinel-2 and Landsat 8 using Machine Learning Methods Trained with Radiative Transfer Simulations [58.17039841385472]
We take advantage of all parallel developments in mechanistic modeling and satellite data availability for advanced monitoring of crop productivity.
Our model successfully estimates gross primary productivity across a variety of C3 crop types and environmental conditions even though it does not use any local information from the corresponding sites.
This highlights its potential to map crop productivity from new satellite sensors at a global scale with the help of current Earth observation cloud computing platforms.
arXiv Detail & Related papers (2020-12-07T16:23:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.