Deep learning, machine vision in agriculture in 2021
- URL: http://arxiv.org/abs/2103.04893v1
- Date: Wed, 3 Mar 2021 00:41:53 GMT
- Title: Deep learning, machine vision in agriculture in 2021
- Authors: Ildar Rakhmatulin
- Abstract summary: The manuscript presents a complete analysis of research on the use of neural networks for the classification and tracking of weeds.
We present recommendations for the use of neural networks in the tasks of recognizing cultivated crops and weeds.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Over the past decade, unprecedented progress in the development of neural
networks has influenced dozens of industries, including weed recognition in the
agro-industrial sector. The use of neural networks for recognizing cultivated
crops is a new direction in agro-industrial activity. The absence of any
standards significantly complicates an assessment of how neural networks are
actually being used in the agricultural sector. The manuscript presents a
complete analysis of research over the past 10 years on the use of neural
networks for the classification and tracking of weeds. In particular, it
analyzes the results of applying various neural network algorithms to
classification and tracking tasks. As a result, we present recommendations for
the use of neural networks in the tasks of recognizing cultivated crops and
weeds. Using this standard can significantly improve the quality of research on
this topic and simplify the analysis and understanding of any paper.
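To make the subject of the surveyed studies concrete, below is a minimal sketch (not taken from the manuscript) of the kind of convolutional crop-vs-weed image classifier such works evaluate. It uses PyTorch; the dataset directory, class layout, and hyperparameters are illustrative assumptions.

```python
# Minimal sketch of a crop-vs-weed image classifier (illustrative, not from the paper).
# Assumes an ImageFolder layout: data_dir/<class_name>/<image>.jpg
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models


def build_weed_classifier(num_classes: int = 2) -> nn.Module:
    """Fine-tune a small pretrained CNN backbone for crop/weed recognition."""
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


def train(data_dir: str = "data/crop_weed", epochs: int = 5) -> nn.Module:
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    dataset = datasets.ImageFolder(data_dir, transform=tfm)
    loader = DataLoader(dataset, batch_size=32, shuffle=True)

    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = build_weed_classifier(num_classes=len(dataset.classes)).to(device)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```

Transfer learning from a pretrained backbone is shown only because field datasets are often small; the surveyed papers evaluate a wide range of architectures for both classification and tracking.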
Related papers
- Planarian Neural Networks: Evolutionary Patterns from Basic Bilateria Shaping Modern Artificial Neural Network Architectures [7.054776300100835]
The aim of this study is to improve the image classification performance of ANNs via a novel approach inspired by the biological nervous system architecture of planarians.
The proposed planarian neural architecture-based neural network was evaluated on the CIFAR-10 and CIFAR-100 datasets.
arXiv Detail & Related papers (2025-01-08T18:59:36Z) - Adapting the Biological SSVEP Response to Artificial Neural Networks [5.4712259563296755]
This paper introduces a novel approach to neuron significance assessment inspired by frequency tagging, a technique from neuroscience.
Experiments conducted with a convolutional neural network for image classification reveal notable harmonics and intermodulations in neuron-specific responses under part-based frequency tagging.
The proposed method holds promise for applications in network pruning and model interpretability, contributing to the advancement of explainable artificial intelligence.
arXiv Detail & Related papers (2024-11-15T10:02:48Z) - Probing Biological and Artificial Neural Networks with Task-dependent Neural Manifolds [12.037840490243603]
We investigate the internal mechanisms of neural networks through the lens of neural population geometry.
We quantitatively characterize how different learning objectives lead to differences in the organizational strategies of these models.
These analyses present a strong direction for bridging mechanistic and normative theories in neural networks through neural population geometry.
arXiv Detail & Related papers (2023-12-21T20:40:51Z) - Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z) - Graph Neural Operators for Classification of Spatial Transcriptomics Data [1.408706290287121]
We propose a study incorporating various graph neural network approaches to validate the efficacy of applying neural operators towards prediction of brain regions in mouse brain tissue samples.
We achieved an F1 score of nearly 72% with the graph neural operator approach, which outperformed all baseline and other graph network approaches.
arXiv Detail & Related papers (2023-02-01T18:32:06Z) - Functional Connectome: Approximating Brain Networks with Artificial Neural Networks [1.952097552284465]
We show that trained deep neural networks are able to capture the computations performed by synthetic biological networks with high accuracy.
We show that trained deep neural networks are able to perform zero-shot generalisation in novel environments.
Our study reveals a novel and promising direction in systems neuroscience, and can be expanded upon with a multitude of downstream applications.
arXiv Detail & Related papers (2022-11-23T13:12:13Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Extrapolation and Spectral Bias of Neural Nets with Hadamard Product: a Polynomial Net Study [55.12108376616355]
The study of the NTK has been devoted to typical neural network architectures, but it is incomplete for neural networks with Hadamard products (NNs-Hp).
In this work, we derive the finite-width NTK formulation for a special class of NNs-Hp, i.e., polynomial neural networks.
We prove their equivalence to the kernel regression predictor with the associated NTK, which expands the application scope of NTK.
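For reference, the kernel regression predictor associated with an NTK takes the standard closed form below; the notation is ours and is not quoted from the paper, with training inputs X, targets y, and NTK Gram matrix Θ(X, X):

$$
f_{\mathrm{NTK}}(x) \;=\; \Theta(x, X)\,\Theta(X, X)^{-1}\,y
$$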
arXiv Detail & Related papers (2022-09-16T06:36:06Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - What can linearized neural networks actually say about generalization? [67.83999394554621]
In certain infinitely-wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization.
We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks.
Our work provides concrete examples of novel deep learning phenomena which can inspire future theoretical research.
arXiv Detail & Related papers (2021-06-12T13:05:11Z) - A Comprehensive Survey on Community Detection with Deep Learning [93.40332347374712]
A community reveals the features and connections of its members that are different from those in other communities in a network.
This survey devises and proposes a new taxonomy covering different categories of the state-of-the-art methods.
The main category, i.e., deep neural networks, is further divided into convolutional networks, graph attention networks, generative adversarial networks and autoencoders.
arXiv Detail & Related papers (2021-05-26T14:37:07Z) - On Interpretability of Artificial Neural Networks: A Survey [21.905647127437685]
We systematically review recent studies on understanding the mechanisms of neural networks and describe applications of interpretability, especially in medicine.
We discuss future directions of interpretability research, such as in relation to fuzzy logic and brain science.
arXiv Detail & Related papers (2020-01-08T13:40:42Z)