Analysis of Learned Features and Framework for Potato Disease Detection
- URL: http://arxiv.org/abs/2310.05943v1
- Date: Tue, 29 Aug 2023 07:05:56 GMT
- Title: Analysis of Learned Features and Framework for Potato Disease Detection
- Authors: Shikha Gupta, Soma Chakraborty, Renu Rameshan
- Abstract summary: We handle the dataset shift by ensuring that the features are learned from disease spots in the leaf or healthy regions.
This is achieved using a Faster Region-based Convolutional Neural Network (Faster R-CNN) as one solution and an attention-based network as the other.
- Score: 3.9134031118910264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For applications like plant disease detection, usually, a model is trained on
publicly available data and tested on field data. This means that the test data
distribution is not the same as the training data distribution, which affects
the classifier performance adversely. We handle this dataset shift by ensuring
that the features are learned from disease spots in the leaf or healthy
regions, as applicable. This is achieved using a Faster Region-based
Convolutional Neural Network (Faster R-CNN) as one solution and an
attention-based network as the other. The average classification accuracy of
these classifiers is approximately 95% when evaluated on the test sets
corresponding to their training datasets. The classifiers also performed
comparably to each other, with an average score of 84% on a dataset not seen
during the training phase.
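The region-focused pipeline described in the abstract can be sketched conceptually as follows. This is a minimal illustration, not the paper's implementation: the function names, the callable detector/classifier interfaces, and the majority-vote aggregation are all assumptions; the paper's actual models are a Faster R-CNN and an attention-based network.

```python
# Conceptual sketch: a detector proposes disease-spot (or healthy-region)
# crops, each crop is classified, and the image-level label is the
# majority vote over crops. All names here are illustrative assumptions.
from collections import Counter

def classify_leaf(image, detect_regions, classify_crop):
    """detect_regions(image) -> list of region crops;
    classify_crop(crop) -> class label.
    Returns the majority-vote label over all detected regions."""
    crops = detect_regions(image)
    if not crops:
        # Assumption for the sketch: no disease spot found means healthy.
        return "healthy"
    votes = Counter(classify_crop(crop) for crop in crops)
    return votes.most_common(1)[0][0]
```

In a real system, `detect_regions` would wrap the trained Faster R-CNN and `classify_crop` the downstream classifier; the point of the sketch is that features are computed only from localized regions, which is what mitigates the dataset shift.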
Related papers
- DExNet: Combining Observations of Domain Adapted Critics for Leaf Disease Classification with Limited Data [1.124958340749622]
This work proposes a few-shot learning framework, Domain-adapted Expert Network (DExNet), for plant disease classification.
It starts by extracting feature embeddings as 'observations' from nine 'critics' that are state-of-the-art pre-trained CNN-based architectures.
The proposed pipeline is evaluated on the 10 classes of tomato leaf images from the PlantVillage dataset.
arXiv Detail & Related papers (2025-06-22T21:15:54Z)
- Z-Error Loss for Training Neural Networks [0.0]
Outliers introduce significant training challenges in neural networks by propagating erroneous gradients, which can degrade model performance and generalization.
We propose the Z-Error Loss, a statistically principled approach that minimizes outlier influence during training by masking the contribution of data points identified as out-of-distribution within each batch.
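The batch-level masking idea can be sketched as follows. This is an illustrative toy, not the paper's exact formulation: the z-score statistic over per-sample losses and the threshold value are assumptions made for the sketch.

```python
# Hedged sketch of outlier masking in the spirit of the Z-Error Loss:
# per-sample losses whose z-score within the batch exceeds a threshold
# are excluded from the batch average, so their gradients do not
# propagate. Statistic and threshold are illustrative assumptions.
from statistics import mean, pstdev

def masked_batch_loss(per_sample_losses, z_threshold=2.5):
    """Average only the per-sample losses whose batch z-score is
    within the threshold."""
    mu = mean(per_sample_losses)
    sigma = pstdev(per_sample_losses)
    if sigma == 0:
        return mu  # all losses identical; nothing to mask
    kept = [loss for loss in per_sample_losses
            if abs((loss - mu) / sigma) <= z_threshold]
    return mean(kept)
```

A single corrupted sample with a huge loss is masked, and the batch loss reflects only the in-distribution samples.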
arXiv Detail & Related papers (2025-06-02T18:35:30Z)
- Unlearnable Examples Detection via Iterative Filtering [84.59070204221366]
Deep neural networks have been shown to be vulnerable to data poisoning attacks.
Detecting poisoned samples in a mixed dataset is both beneficial and challenging.
We propose an Iterative Filtering approach for identifying unlearnable examples (UEs).
arXiv Detail & Related papers (2024-08-15T13:26:13Z)
- Refining Tuberculosis Detection in CXR Imaging: Addressing Bias in Deep Neural Networks via Interpretability [1.9936075659851882]
We argue that the reliability of deep learning models is limited, even if they can be shown to obtain perfect classification accuracy on the test data.
We show that pre-training a deep neural network on a large-scale proxy task, as well as using a mixed objective optimization network (MOON), can improve the alignment of decision foundations between models and experts.
arXiv Detail & Related papers (2024-07-19T06:41:31Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed as Class-Aware Feature Alignment (CAFA), which simultaneously encourages a model to learn target representations in a class-discriminative manner.
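A class-aware alignment objective can be sketched in plain Python as follows. This is only a loose illustration of the idea: the use of pseudo-labels to group target features and the squared Euclidean distance to stored source-class means are assumptions of the sketch, not CAFA's actual formulation.

```python
# Hedged sketch of class-aware feature alignment: per-(pseudo-)class
# batch feature means are pulled toward stored source-domain class
# means. Grouping by pseudo-label and the squared-distance measure
# are illustrative assumptions.
def class_aware_alignment_loss(features, pseudo_labels, source_means):
    """features: list of feature vectors; pseudo_labels: predicted class
    per vector; source_means: {class: mean feature vector from source}.
    Returns the summed squared distance between each class's batch mean
    and its source mean."""
    by_class = {}
    for feat, label in zip(features, pseudo_labels):
        by_class.setdefault(label, []).append(feat)
    loss = 0.0
    for label, vecs in by_class.items():
        dim = len(vecs[0])
        batch_mean = [sum(v[d] for v in vecs) / len(vecs) for d in range(dim)]
        loss += sum((m - s) ** 2
                    for m, s in zip(batch_mean, source_means[label]))
    return loss
```

Minimizing such a loss at test time encourages target features of each class to occupy the same region of feature space as the corresponding source class, which is the class-discriminative alignment the summary describes.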
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Data-SUITE: Data-centric identification of in-distribution incongruous examples [81.21462458089142]
Data-SUITE is a data-centric framework to identify incongruous regions of in-distribution (ID) data.
We empirically validate Data-SUITE's performance and coverage guarantees.
arXiv Detail & Related papers (2022-02-17T18:58:31Z)
- Automatically detecting data drift in machine learning classifiers [2.202253618096515]
We term changes that affect machine learning performance 'data drift' or 'drift'.
We propose an approach based solely on the classifier's suggested labels and its confidence in them, for alerting on data-distribution or feature-space changes that are likely to cause data drift.
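An alerting rule of this kind can be sketched from classifier outputs alone. This is a toy illustration in the spirit of the summary, not the paper's method: the total-variation distance between predicted-label distributions, the mean-confidence drop, and both thresholds are assumptions of the sketch.

```python
# Hedged sketch of drift alerting using only predicted labels and
# confidences: compare a reference window against a current window and
# alert when the label distribution shifts or confidence drops.
# Distance measure and thresholds are illustrative assumptions.
from collections import Counter

def label_distribution(preds):
    """preds: list of (predicted_label, confidence) pairs."""
    counts = Counter(label for label, _ in preds)
    total = len(preds)
    return {label: c / total for label, c in counts.items()}

def drift_alert(reference, current, dist_threshold=0.2, conf_drop=0.1):
    """Alert if the predicted-label distribution moves by more than
    dist_threshold (total variation) or mean confidence drops by more
    than conf_drop between the two windows."""
    ref_dist = label_distribution(reference)
    cur_dist = label_distribution(current)
    labels = set(ref_dist) | set(cur_dist)
    tv = 0.5 * sum(abs(ref_dist.get(l, 0.0) - cur_dist.get(l, 0.0))
                   for l in labels)
    ref_conf = sum(c for _, c in reference) / len(reference)
    cur_conf = sum(c for _, c in current) / len(current)
    return tv > dist_threshold or (ref_conf - cur_conf) > conf_drop
```

No ground-truth labels are needed at monitoring time, which is the practical appeal of label-and-confidence-based drift detection.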
arXiv Detail & Related papers (2021-11-10T12:34:14Z)
- Dataset Bias Mitigation Through Analysis of CNN Training Scores [0.0]
We propose a novel, domain-independent approach, called score-based resampling (SBR), to locate the under-represented samples of the original training dataset.
In our method, once trained, we use the same CNN model to infer on its own training samples, obtain prediction scores, and, based on the distance between predictions and ground truth, identify samples that lie far from their ground-truth labels.
The results confirm the validity of the proposed method regarding identifying under-represented samples in the original dataset to decrease the categorical bias of classifying certain groups.
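The selection step of such a score-based approach can be sketched as follows. This is an illustrative reading of the summary, not the paper's implementation: the distance definition (one minus the true-class probability), the cutoff value, and the callable interface are assumptions of the sketch.

```python
# Hedged sketch of score-based sample selection: run the trained model
# on its own training samples and flag those whose predicted
# probability for the true class is far from the ground truth.
# Distance definition and cutoff are illustrative assumptions.
def under_represented(samples, predict_proba, cutoff=0.5):
    """samples: list of (x, true_label) pairs;
    predict_proba(x) -> {label: probability}.
    Returns indices of samples whose true-class probability is far
    from the ground truth (distance > cutoff)."""
    flagged = []
    for i, (x, y) in enumerate(samples):
        distance = 1.0 - predict_proba(x).get(y, 0.0)
        if distance > cutoff:
            flagged.append(i)
    return flagged
```

The flagged indices would then drive the resampling step, giving under-represented samples more weight in subsequent training.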
arXiv Detail & Related papers (2021-06-28T16:07:49Z)
- Unsupervised neural adaptation model based on optimal transport for spoken language identification [54.96267179988487]
Due to the mismatch between the statistical distributions of acoustic speech in the training and testing sets, the performance of spoken language identification (SLID) can be drastically degraded.
We propose an unsupervised neural adaptation model to deal with the distribution mismatch problem for SLID.
arXiv Detail & Related papers (2020-12-24T07:37:19Z)
- Self-Challenging Improves Cross-Domain Generalization [81.99554996975372]
Convolutional Neural Networks (CNNs) conduct image classification by activating dominant features that correlate with labels.
We introduce a simple training heuristic, Representation Self-Challenging (RSC), that significantly improves the generalization of CNNs to out-of-domain data.
RSC iteratively challenges the dominant features activated on the training data and forces the network to activate the remaining features that correlate with labels.
arXiv Detail & Related papers (2020-07-05T21:42:26Z)
- Learning with Out-of-Distribution Data for Audio Classification [60.48251022280506]
We show that detecting and relabelling certain OOD instances, rather than discarding them, can have a positive effect on learning.
The proposed method is shown to improve the performance of convolutional neural networks by a significant margin.
arXiv Detail & Related papers (2020-02-11T21:08:06Z)
- GIM: Gaussian Isolation Machines [40.7916016364212]
In many cases, neural network classifiers are exposed to input data that lies outside their training distribution.
We present a novel hybrid (generative-discriminative) classifier aimed at solving the problem arising when OOD data is encountered.
The proposed GIM's novelty lies in its discriminative performance and generative capabilities, a combination of characteristics not usually seen in a single classifier.
arXiv Detail & Related papers (2020-02-06T09:51:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.