Usefulness of interpretability methods to explain deep learning based
plant stress phenotyping
- URL: http://arxiv.org/abs/2007.05729v1
- Date: Sat, 11 Jul 2020 09:28:50 GMT
- Title: Usefulness of interpretability methods to explain deep learning based
plant stress phenotyping
- Authors: Koushik Nagasubramanian, Asheesh K. Singh, Arti Singh, Soumik Sarkar,
Baskar Ganapathysubramanian
- Abstract summary: We train a DenseNet-121 network for the classification of eight different soybean stresses (biotic and abiotic).
For a diverse subset of the test data, we compared the important features with those identified by a human expert.
Most interpretability methods identify the infected regions of the leaf as important features for some -- but not all -- of the correctly classified images.
- Score: 8.786924604224101
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning techniques have been successfully deployed for automating plant
stress identification and quantification. In recent years, there has been a growing
push towards training models that are interpretable, i.e., models that justify their
classification decisions by visually highlighting the image features that were
crucial to those decisions. The expectation is that trained network
models utilize image features that mimic visual cues used by plant
pathologists. In this work, we compare some of the most popular
interpretability methods: Saliency Maps, SmoothGrad, Guided Backpropagation,
Deep Taylor Decomposition, Integrated Gradients, Layer-wise Relevance
Propagation and Gradient times Input, for interpreting the deep learning model.
We train a DenseNet-121 network for the classification of eight different
soybean stresses (biotic and abiotic). Using a dataset consisting of 16,573 RGB
images of healthy and stressed soybean leaflets captured under controlled
conditions, we obtained an overall classification accuracy of 95.05%. For a
diverse subset of the test data, we compared the important features with those
identified by a human expert. We observed that most interpretability methods
identify the infected regions of the leaf as important features for some -- but
not all -- of the correctly classified images. For some images, the output of
the interpretability methods indicated that spurious feature correlations may
have been used to correctly classify them. Although the explanation maps produced
by these methods may differ from one another for a given image, we advocate using
them as 'hypothesis generation' mechanisms that can drive scientific insight.
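As a rough illustration of how the simpler gradient-based methods named in the abstract operate, the sketch below computes vanilla Saliency Maps, Gradient times Input, and SmoothGrad attributions for an eight-class DenseNet-121 using plain PyTorch/torchvision. This is not the authors' code; the classifier head, preprocessing, checkpoint, and the file name leaflet.png are placeholder assumptions for illustration only.

```python
# Minimal sketch (not the authors' code): vanilla Saliency Maps, Gradient x Input,
# and SmoothGrad for a DenseNet-121 classifier. The 8-class head mirrors the
# soybean stress setup described in the abstract; file names are hypothetical.
import torch
from torchvision import models, transforms
from PIL import Image

NUM_CLASSES = 8  # eight biotic/abiotic soybean stresses (per the abstract)

model = models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, NUM_CLASSES)
model.eval()  # a trained checkpoint would be loaded here, e.g. model.load_state_dict(...)

preprocess = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
img = preprocess(Image.open("leaflet.png").convert("RGB")).unsqueeze(0)  # hypothetical image


def class_gradient(model, x, target=None):
    """Gradient of the target-class logit with respect to the input pixels."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    if target is None:
        target = logits.argmax(dim=1).item()
    logits[0, target].backward()
    return x.grad.detach(), target


# Vanilla saliency: absolute gradient, reduced over the colour channels.
grad, pred = class_gradient(model, img)
saliency = grad.abs().max(dim=1)[0]               # shape (1, H, W)

# Gradient x Input: element-wise product of the gradient and the pixel values.
grad_x_input = (grad * img).sum(dim=1).abs()

# SmoothGrad: average the gradients over several noisy copies of the input.
n_samples, sigma = 25, 0.15
smooth = torch.zeros_like(grad)
for _ in range(n_samples):
    noisy = img + sigma * torch.randn_like(img)
    g, _ = class_gradient(model, noisy, target=pred)
    smooth += g
smoothgrad = (smooth / n_samples).abs().max(dim=1)[0]
```

The remaining methods compared in the paper need either repeated gradient evaluations (Integrated Gradients) or modified backward rules (Guided Backpropagation, Deep Taylor Decomposition, Layer-wise Relevance Propagation), and in practice are usually obtained from attribution libraries such as Captum or iNNvestigate.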
Related papers
- Training Class-Imbalanced Diffusion Model Via Overlap Optimization [55.96820607533968]
Diffusion models trained on real-world datasets often yield inferior fidelity for tail classes.
Deep generative models, including diffusion models, are biased towards classes with abundant training images.
We propose a method based on contrastive learning to minimize the overlap between distributions of synthetic images for different classes.
arXiv Detail & Related papers (2024-02-16T16:47:21Z) - EvalAttAI: A Holistic Approach to Evaluating Attribution Maps in Robust
and Non-Robust Models [0.3425341633647624]
This paper focuses on evaluating methods of attribution mapping to find whether robust neural networks are more explainable.
We propose a new explainability faithfulness metric (called EvalAttAI) that addresses the limitations of prior metrics.
arXiv Detail & Related papers (2023-03-15T18:33:22Z) - A Test Statistic Estimation-based Approach for Establishing
Self-interpretable CNN-based Binary Classifiers [7.424003880270276]
Post-hoc interpretability methods have the limitation that they can produce plausible but different interpretations.
Unlike traditional post-hoc interpretability methods, the proposed method is self-interpretable and quantitative.
arXiv Detail & Related papers (2023-03-13T05:51:35Z) - Few-Shot Learning Enables Population-Scale Analysis of Leaf Traits in
Populus trichocarpa [1.9089478605920305]
This work is designed to provide the plant phenotyping community with (i) methods for fast and accurate image-based feature extraction that require minimal training data, and (ii) a new population-scale data set, including 68 different leaf phenotypes, for domain scientists and machine learning researchers.
All of the few-shot learning code, data, and results are made publicly available.
arXiv Detail & Related papers (2023-01-24T23:40:01Z) - Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z) - CAMERAS: Enhanced Resolution And Sanity preserving Class Activation
Mapping for image saliency [61.40511574314069]
Backpropagation image saliency aims at explaining model predictions by estimating model-centric importance of individual pixels in the input.
We propose CAMERAS, a technique to compute high-fidelity backpropagation saliency maps without requiring any external priors.
arXiv Detail & Related papers (2021-06-20T08:20:56Z) - An Empirical Study of the Collapsing Problem in Semi-Supervised 2D Human
Pose Estimation [80.02124918255059]
Semi-supervised learning aims to boost the accuracy of a model by exploring unlabeled images.
We learn two networks to mutually teach each other.
The more reliable predictions on easy images in each network are used to teach the other network to learn about the corresponding hard images.
arXiv Detail & Related papers (2020-11-25T03:29:52Z) - Graph Neural Networks for UnsupervisedDomain Adaptation of
Histopathological ImageAnalytics [22.04114134677181]
We present a novel method for unsupervised domain adaptation in histological image analysis.
It is based on a backbone for embedding images into a feature space, and a graph neural layer for propagating the supervision signals of images with labels.
In experiments, our method achieves state-of-the-art performance on four public datasets.
arXiv Detail & Related papers (2020-08-21T04:53:44Z) - ICAM: Interpretable Classification via Disentangled Representations and
Feature Attribution Mapping [3.262230127283453]
We present a novel framework for creating class-specific feature attribution (FA) maps through image-to-image translation.
We validate our method on 2D and 3D brain image datasets of dementia, ageing, and (simulated) lesion detection.
Our approach is the first to use latent space sampling to support exploration of phenotype variation.
arXiv Detail & Related papers (2020-06-15T11:23:30Z) - Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z) - Embedding Propagation: Smoother Manifold for Few-Shot Classification [131.81692677836202]
We propose to use embedding propagation as an unsupervised non-parametric regularizer for manifold smoothing in few-shot classification.
We empirically show that embedding propagation yields a smoother embedding manifold.
We show that embedding propagation consistently improves the accuracy of the models in multiple semi-supervised learning scenarios by up to 16 percentage points.
arXiv Detail & Related papers (2020-03-09T13:51:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.