Vision Transformers in Precision Agriculture: A Comprehensive Survey
- URL: http://arxiv.org/abs/2504.21706v1
- Date: Wed, 30 Apr 2025 14:50:02 GMT
- Title: Vision Transformers in Precision Agriculture: A Comprehensive Survey
- Authors: Saber Mehdipour, Seyed Abolghasem Mirroshandel, Seyed Amirhossein Tabatabaei
- Abstract summary: Vision Transformers (ViTs) offer benefits such as improved handling of long-range dependencies and better scalability for visual tasks. This survey explores the application of ViTs in precision agriculture, covering tasks from classification to detection and segmentation.
- Score: 3.156133122658662
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting plant diseases is a crucial aspect of modern agriculture - it plays a key role in maintaining crop health and increasing overall yield. Traditional approaches, though still valuable, often rely on manual inspection or conventional machine learning techniques, both of which face limitations in scalability and accuracy. Recently, Vision Transformers (ViTs) have emerged as a promising alternative, offering benefits such as improved handling of long-range dependencies and better scalability for visual tasks. This survey explores the application of ViTs in precision agriculture, covering tasks from classification to detection and segmentation. We begin by introducing the foundational architecture of ViTs and discussing their transition from Natural Language Processing (NLP) to computer vision. The discussion includes the concept of inductive bias in traditional models like Convolutional Neural Networks (CNNs) and how ViTs mitigate these biases. We provide a comprehensive review of recent literature, focusing on key methodologies, datasets, and performance metrics. The survey also includes a comparative analysis of CNNs and ViTs, with a look at hybrid models and performance enhancements. Technical challenges - such as data requirements, computational demands, and model interpretability - are addressed alongside potential solutions. Finally, we outline potential research directions and technological advancements that could further support the integration of ViTs in real-world agricultural settings. Our goal with this study is to offer practitioners and researchers a deeper understanding of how ViTs are poised to transform smart and precision agriculture.
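To ground the architecture the abstract refers to, here is a minimal, self-contained sketch of a ViT-style classifier: the image is split into non-overlapping patches, each patch is linearly embedded, a [CLS] token and positional embeddings are added, and a standard Transformer encoder applies global self-attention before a linear head classifies (for example, into disease classes). All names and hyperparameters below are illustrative assumptions, not those of any model surveyed in the paper.

```python
# A minimal ViT sketch: patchify -> embed -> [CLS] + positions -> encoder -> head.
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    def __init__(self, image_size=224, patch_size=16, dim=192,
                 depth=4, heads=3, num_classes=10):
        super().__init__()
        num_patches = (image_size // patch_size) ** 2
        # Patch embedding via a strided convolution (equivalent to a linear
        # projection of flattened, non-overlapping patches).
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, dim_feedforward=4 * dim,
            batch_first=True, norm_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=depth)
        self.head = nn.Linear(dim, num_classes)   # e.g. hypothetical disease classes

    def forward(self, x):                         # x: (B, 3, H, W)
        x = self.patch_embed(x)                   # (B, dim, H/16, W/16)
        x = x.flatten(2).transpose(1, 2)          # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed
        x = self.encoder(x)                       # global self-attention over patches
        return self.head(x[:, 0])                 # classify from the [CLS] token

logits = TinyViT()(torch.randn(2, 3, 224, 224))   # -> (2, 10)
```

Because every patch attends to every other patch from the first layer onward, the model imposes far weaker locality biases than a CNN - precisely the trade-off between inductive bias and data requirements that the survey examines.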
Related papers
- Retrieval Augmented Generation and Understanding in Vision: A Survey and New Outlook [85.43403500874889]
Retrieval-augmented generation (RAG) has emerged as a pivotal technique in artificial intelligence (AI). The survey reviews recent advancements in RAG for embodied AI, with a particular focus on applications in planning, task execution, multimodal perception, interaction, and specialized domains.
arXiv Detail & Related papers (2025-03-23T10:33:28Z) - Edge-AI for Agriculture: Lightweight Vision Models for Disease Detection in Resource-Limited Settings [0.0]
The proposed system integrates advanced object detection, classification, and segmentation models, optimized for deployment on edge devices. The study evaluates the performance of various state-of-the-art models, focusing on their accuracy, computational efficiency, and generalization capabilities.
arXiv Detail & Related papers (2024-12-23T06:48:50Z) - ViTmiX: Vision Transformer Explainability Augmented by Mixed Visualization Methods [1.1650821883155187]
We present a hybrid approach that mixes multiple explainability techniques to enhance interpretability of ViT models.
Our experiments reveal that this hybrid approach significantly improves the interpretability of ViT models compared to individual methods.
To quantify the explainability gain, we introduced a novel post-hoc explainability measure by applying the Pigeonhole principle.
arXiv Detail & Related papers (2024-12-18T18:18:19Z) - A Review of Transformer-Based Models for Computer Vision Tasks: Capturing Global Context and Spatial Relationships [0.5639904484784127]
Transformer-based models have transformed the landscape of natural language processing (NLP).
These models are renowned for their ability to capture long-range dependencies and contextual information.
We discuss potential research directions and applications of transformer-based models in computer vision.
arXiv Detail & Related papers (2024-08-27T16:22:18Z) - Do Vision-Language Transformers Exhibit Visual Commonsense? An Empirical Study of VCR [51.72751335574947]
Visual Commonsense Reasoning (VCR) calls for explanatory reasoning behind question answering over visual scenes.
Progress on the benchmark dataset stems largely from the recent advancement of Vision-Language Transformers (VL Transformers).
This paper posits that the VL Transformers do not exhibit visual commonsense, which is the key to VCR.
arXiv Detail & Related papers (2024-05-27T08:26:58Z) - Explainable AI in Grassland Monitoring: Enhancing Model Performance and Domain Adaptability [0.6131022957085438]
Grasslands are known for their high biodiversity and ability to provide multiple ecosystem services.
Challenges in automating the identification of indicator plants are key obstacles to large-scale grassland monitoring.
This paper delves into the latter two challenges, with a specific focus on transfer learning and XAI approaches to grassland monitoring.
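As a concrete illustration of the transfer-learning ingredient mentioned above, the sketch below fine-tunes only a new classification head on top of a frozen, ImageNet-pretrained backbone. The backbone choice (ResNet-18) and the class count are hypothetical stand-ins, not details from the paper.

```python
# A minimal transfer-learning sketch: freeze pretrained features, train a new head.
import torch
import torch.nn as nn
from torchvision import models

num_plant_classes = 12                                    # hypothetical indicator-plant classes
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in backbone.parameters():
    p.requires_grad = False                               # keep the pretrained features fixed
backbone.fc = nn.Linear(backbone.fc.in_features, num_plant_classes)  # new, trainable head

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(4, 3, 224, 224)                      # toy batch standing in for field images
labels = torch.randint(0, num_plant_classes, (4,))
loss = criterion(backbone(images), labels)
optimizer.zero_grad(); loss.backward(); optimizer.step()  # only the head is updated
```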
arXiv Detail & Related papers (2023-12-13T10:17:48Z) - Generalized Face Forgery Detection via Adaptive Learning for Pre-trained Vision Transformer [54.32283739486781]
We present a Forgery-aware Adaptive Vision Transformer (FA-ViT) under the adaptive learning paradigm.
FA-ViT achieves 93.83% and 78.32% AUC scores on Celeb-DF and DFDC datasets in the cross-dataset evaluation.
arXiv Detail & Related papers (2023-09-20T06:51:11Z) - A Comprehensive Survey on Applications of Transformers for Deep Learning Tasks [60.38369406877899]
The Transformer is a deep neural network that employs a self-attention mechanism to comprehend the contextual relationships within sequential data.
Transformer models excel in handling long dependencies between input sequence elements and enable parallel processing.
Our survey encompasses the identification of the top five application domains for transformer-based models.
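The self-attention mechanism this survey centers on can be stated in a few lines. The sketch below implements plain scaled dot-product self-attention; because every position attends to every other position in a single matrix product, the long-range dependencies and parallelism mentioned above come for free. Shapes and weight matrices are toy placeholders.

```python
# Scaled dot-product self-attention over a toy batch of token embeddings.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x: (batch, seq_len, dim); w_*: (dim, dim) projection matrices."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.transpose(-2, -1) / (k.size(-1) ** 0.5)  # (B, L, L) pairwise scores
    weights = F.softmax(scores, dim=-1)    # each row: attention over all positions
    return weights @ v                     # contextualized representations

x = torch.randn(2, 8, 16)                 # toy batch: 8 tokens, 16-dim embeddings
w = [torch.randn(16, 16) for _ in range(3)]
out = self_attention(x, *w)               # -> (2, 8, 16)
```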
arXiv Detail & Related papers (2023-06-11T23:13:51Z) - Generative Adversarial Networks for Image Augmentation in Agriculture: A Systematic Review [5.639656362091594]
The generative adversarial network (GAN), invented in 2014 in the computer vision community, provides a suite of novel approaches that can learn good data representations.
This paper presents an overview of the evolution of GAN architectures followed by a systematic review of their application to agriculture.
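For readers unfamiliar with the adversarial setup being reviewed, the sketch below shows one GAN training step: a discriminator is pushed to separate real from generated samples while the generator is pushed to fool it. Network sizes and data are toy assumptions, not any surveyed architecture.

```python
# One GAN training step on toy feature vectors.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))  # noise -> sample
D = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(8, 32)                 # stand-in for real image features
fake = G(torch.randn(8, 16))

# Discriminator step: real -> 1, fake -> 0 (fake detached so G is not updated).
loss_d = bce(D(real), torch.ones(8, 1)) + bce(D(fake.detach()), torch.zeros(8, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator step: make the discriminator predict 1 on generated samples.
loss_g = bce(D(fake), torch.ones(8, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```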
arXiv Detail & Related papers (2022-04-10T15:33:05Z) - Visualizing and Understanding Patch Interactions in Vision Transformer [96.70401478061076]
Vision Transformer (ViT) has become a leading tool in various computer vision tasks.
We propose a novel explainable visualization approach to analyze and interpret the crucial attention interactions among patches for vision transformer.
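The paper proposes its own interaction analysis; as a generic point of comparison, the sketch below implements attention rollout (Abnar & Zuidema, 2020), a common baseline for visualizing which patches a ViT's [CLS] token relies on. The attention maps here are random placeholders standing in for a real model's per-layer, head-averaged attention.

```python
# Attention rollout: compose head-averaged attention maps across layers.
import torch

def attention_rollout(attn_per_layer):
    """attn_per_layer: list of (tokens, tokens) head-averaged attention maps."""
    n = attn_per_layer[0].size(0)
    rollout = torch.eye(n)
    for a in attn_per_layer:
        a = 0.5 * (a + torch.eye(n))          # account for the residual connection
        a = a / a.sum(dim=-1, keepdim=True)   # renormalize rows to sum to 1
        rollout = a @ rollout                 # compose attention across layers
    return rollout[0, 1:]                     # [CLS] row -> relevance of each patch

layers = [torch.softmax(torch.randn(197, 197), dim=-1) for _ in range(4)]
patch_relevance = attention_rollout(layers)   # -> (196,) map over a 14x14 patch grid
```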
arXiv Detail & Related papers (2022-03-11T13:48:11Z) - TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation [54.61786380919243]
Unsupervised domain adaptation (UDA) aims to transfer the knowledge learnt from a labeled source domain to an unlabeled target domain.
Previous work is mainly built upon convolutional neural networks (CNNs) to learn domain-invariant representations.
With the recent exponential increase in applying the Vision Transformer (ViT) to vision tasks, the capability of ViT to adapt cross-domain knowledge remains unexplored in the literature.
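To make "domain-invariant representations" concrete, the sketch below shows one widely used CNN-era recipe for UDA (DANN-style adversarial alignment via gradient reversal). It is a generic illustration of the problem setup, not the TVT method itself.

```python
# DANN-style sketch: supervised loss on source + adversarial domain alignment.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -grad    # reversed gradients push features to fool the domain critic

feat = nn.Sequential(nn.Linear(32, 64), nn.ReLU())        # shared feature extractor
clf = nn.Linear(64, 5)                                    # label classifier (source only)
dom = nn.Linear(64, 2)                                    # domain classifier
ce = nn.CrossEntropyLoss()

xs, ys = torch.randn(8, 32), torch.randint(0, 5, (8,))    # labeled source batch
xt = torch.randn(8, 32)                                   # unlabeled target batch

fs, ft = feat(xs), feat(xt)
task_loss = ce(clf(fs), ys)                                # supervised on source labels
dom_logits = dom(GradReverse.apply(torch.cat([fs, ft])))   # adversarial branch
dom_labels = torch.cat([torch.zeros(8), torch.ones(8)]).long()
loss = task_loss + ce(dom_logits, dom_labels)
loss.backward()                                            # one combined update step
```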
arXiv Detail & Related papers (2021-08-12T22:37:43Z) - Transformers in Vision: A Survey [101.07348618962111]
Transformers enable modeling long dependencies between input sequence elements and support parallel processing of sequences.
Transformers require minimal inductive biases for their design and are naturally suited as set-functions.
This survey aims to provide a comprehensive overview of the Transformer models in the computer vision discipline.
arXiv Detail & Related papers (2021-01-04T18:57:24Z)