Improving FHB Screening in Wheat Breeding Using an Efficient Transformer Model
- URL: http://arxiv.org/abs/2308.03670v1
- Date: Mon, 7 Aug 2023 15:44:58 GMT
- Title: Improving FHB Screening in Wheat Breeding Using an Efficient Transformer Model
- Authors: Babak Azad, Ahmed Abdalla, Kwanghee Won, Ali Mirzakhani Nafchi
- Abstract summary: Fusarium head blight is a devastating disease that causes significant economic losses annually on small grains.
Image processing techniques have been developed using supervised machine learning algorithms for the early detection of FHB.
A new Context Bridge is proposed to integrate the local representation capability of the U-Net network into the transformer model.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Fusarium head blight is a devastating disease that causes significant
economic losses annually on small grains. Efficiency, accuracy, and timely
detection of FHB in the resistance screening are critical for wheat and barley
breeding programs. In recent years, various image processing techniques have
been developed using supervised machine learning algorithms for the early
detection of FHB. The state-of-the-art convolutional neural network-based
methods, such as U-Net, employ a series of encoding blocks to create a local
representation and a series of decoding blocks to capture the semantic
relations. However, these methods are often unable to model long-range
dependencies within the input data, and their ability to model multi-scale
objects with significant variations in texture and shape is limited. Vision
transformers, as alternative architectures with an innate global self-attention
mechanism for sequence-to-sequence prediction, may in turn offer limited
localization capability because they capture insufficient low-level detail. To
overcome these limitations, a new Context Bridge is proposed to integrate the
local representation capability of the U-Net network into the transformer model. In
addition, the standard attention mechanism of the original transformer is
replaced with Efficient Self-attention, which is less complicated than other
state-of-the-art methods. To train the proposed network, 12,000 wheat images
from an FHB-inoculated wheat field at the SDSU research farm in Volga, SD, were
captured. In addition to healthy and unhealthy plants, these images encompass
various stages of the disease. A team of expert pathologists annotated the
images for training and evaluating the developed model. As a result, the
effectiveness of the transformer-based method for FHB-disease detection is
demonstrated through extensive experiments across typical tasks for plant image
segmentation.
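
The abstract describes two architectural ideas: an "Efficient Self-attention" that replaces standard attention, and a "Context Bridge" that injects U-Net-style local features into the transformer. The sketch below is only an illustration of how such modules are commonly built (PVT/SegFormer-style spatial reduction of keys/values, plus a simple token-level fusion); the module names, signatures, and design choices here are assumptions for clarity, not the paper's actual implementation.

```python
# Hypothetical sketch (PyTorch). All names and design details are illustrative
# assumptions; the paper's Efficient Self-attention and Context Bridge may differ.
import torch
import torch.nn as nn


class EfficientSelfAttention(nn.Module):
    """Self-attention whose keys/values are spatially downsampled before
    attention (SegFormer/PVT-style), cutting cost from O(N^2) to roughly
    O(N * N / r^2) for a reduction ratio r."""

    def __init__(self, dim, num_heads=4, reduction=4):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)
        # Strided conv shrinks the token grid before computing K and V.
        self.sr = nn.Conv2d(dim, dim, kernel_size=reduction, stride=reduction)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x, h, w):
        # x: (B, H*W, C) token sequence from an H x W feature map.
        b, n, c = x.shape
        q = self.q(x).reshape(b, n, self.num_heads, c // self.num_heads).transpose(1, 2)

        # Reduce the spatial resolution of the key/value tokens.
        x_ = x.transpose(1, 2).reshape(b, c, h, w)
        x_ = self.sr(x_).reshape(b, c, -1).transpose(1, 2)
        x_ = self.norm(x_)
        kv = self.kv(x_).reshape(b, -1, 2, self.num_heads, c // self.num_heads)
        k, v = kv.permute(2, 0, 3, 1, 4)

        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, c)
        return self.proj(out)


class ContextBridge(nn.Module):
    """Illustrative fusion of a U-Net-style convolutional feature map with
    transformer tokens, so local detail reaches the attention branch.
    Assumes the CNN feature map and token grid share the same resolution."""

    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Linear(dim * 2, dim)

    def forward(self, tokens, conv_feat):
        # conv_feat: (B, C, H, W) from a CNN encoder stage -> flatten to tokens.
        b, c, h, w = conv_feat.shape
        conv_tokens = conv_feat.flatten(2).transpose(1, 2)
        return self.fuse(torch.cat([tokens, conv_tokens], dim=-1))
```

In a setup like the one the abstract outlines, a block of this kind would replace the standard multi-head self-attention inside each transformer stage of the segmentation network, with the bridge fusing encoder features from the convolutional path.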
Related papers
- Small data deep learning methodology for in-field disease detection [6.2747249113031325]
We present the first machine learning model capable of detecting mild symptoms of late blight in potato crops.
Our proposal exploits the availability of high-resolution images via the concept of patching, and is based on deep convolutional neural networks with a focal loss function.
Our model correctly detects all cases of late blight in the test dataset, demonstrating a high level of accuracy and effectiveness in identifying early symptoms.
arXiv Detail & Related papers (2024-09-25T17:31:17Z)
- Adapting Visual-Language Models for Generalizable Anomaly Detection in Medical Images [68.42215385041114]
This paper introduces a novel lightweight multi-level adaptation and comparison framework to repurpose the CLIP model for medical anomaly detection.
Our approach integrates multiple residual adapters into the pre-trained visual encoder, enabling a stepwise enhancement of visual features across different levels.
Our experiments on medical anomaly detection benchmarks demonstrate that our method significantly surpasses current state-of-the-art models.
arXiv Detail & Related papers (2024-03-19T09:28:19Z)
- ViTaL: An Advanced Framework for Automated Plant Disease Identification in Leaf Images Using Vision Transformers and Linear Projection For Feature Reduction [0.0]
This paper introduces a robust framework for the automated identification of diseases in plant leaf images.
The framework incorporates several key stages to enhance disease recognition accuracy.
We propose a novel hardware design specifically tailored for scanning diseased leaves in an omnidirectional fashion.
arXiv Detail & Related papers (2024-02-27T11:32:37Z)
- DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to the Stable Diffusion denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z)
- SugarViT -- Multi-objective Regression of UAV Images with Vision Transformers and Deep Label Distribution Learning Demonstrated on Disease Severity Prediction in Sugar Beet [3.2925222641796554]
This work introduces a machine learning framework for automated large-scale plant-specific trait annotation.
We develop an efficient Vision Transformer based model for disease severity scoring called SugarViT.
Although the model is evaluated on this specific use case, it is kept as generic as possible so that it can also be applied to various image-based classification and regression tasks.
arXiv Detail & Related papers (2023-11-06T13:01:17Z)
- Affine-Consistent Transformer for Multi-Class Cell Nuclei Detection [76.11864242047074]
We propose a novel Affine-Consistent Transformer (AC-Former), which directly yields a sequence of nucleus positions.
We introduce an Adaptive Affine Transformer (AAT) module, which can automatically learn the key spatial transformations to warp original images for local network training.
Experimental results demonstrate that the proposed method significantly outperforms existing state-of-the-art algorithms on various benchmarks.
arXiv Detail & Related papers (2023-10-22T02:27:02Z)
- Disruptive Autoencoders: Leveraging Low-level features for 3D Medical Image Pre-training [51.16994853817024]
This work focuses on designing an effective pre-training framework for 3D radiology images.
We introduce Disruptive Autoencoders, a pre-training framework that attempts to reconstruct the original image from disruptions created by a combination of local masking and low-level perturbations.
The proposed pre-training framework is tested across multiple downstream tasks and achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-07-31T17:59:42Z)
- Self-Supervised Masked Convolutional Transformer Block for Anomaly Detection [122.4894940892536]
We present a novel self-supervised masked convolutional transformer block (SSMCTB) that comprises the reconstruction-based functionality at a core architectural level.
In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, as well as a novel self-supervised objective based on Huber loss.
arXiv Detail & Related papers (2022-09-25T04:56:10Z)
- Fast Unsupervised Brain Anomaly Detection and Segmentation with Diffusion Models [1.6352599467675781]
We propose a method based on diffusion models to detect and segment anomalies in brain imaging.
Our diffusion models achieve competitive performance compared with autoregressive approaches across a series of experiments with 2D CT and MRI data.
arXiv Detail & Related papers (2022-06-07T17:30:43Z)
- Coarse-to-Fine Sparse Transformer for Hyperspectral Image Reconstruction [138.04956118993934]
We propose a novel Transformer-based method, the coarse-to-fine sparse Transformer (CST), which embeds HSI sparsity into deep learning for HSI reconstruction.
In particular, CST uses our proposed spectra-aware screening mechanism (SASM) for coarse patch selection. The selected patches are then fed into our customized spectra-aggregation hashing multi-head self-attention (SAH-MSA) for fine pixel clustering and self-similarity capture.
arXiv Detail & Related papers (2022-03-09T16:17:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.