An Improved Pure Fully Connected Neural Network for Rice Grain Classification
- URL: http://arxiv.org/abs/2503.03111v1
- Date: Wed, 05 Mar 2025 02:10:14 GMT
- Title: An Improved Pure Fully Connected Neural Network for Rice Grain Classification
- Authors: Wanke Xia, Ruoxin Peng, Haoqi Chu, Xinlei Zhu
- Abstract summary: Deep learning has enabled automated classification of rice, improving accuracy and efficiency. Classical models based on first-stage training may face difficulties in distinguishing between rice varieties with similar external characteristics. We propose two subtle methods to enhance the classification ability of deep learning models for rice grain classification.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Rice is a staple food for a significant portion of the world's population, providing essential nutrients and serving as a versatile ingredient in a wide range of culinary traditions. Recently, the use of deep learning has enabled automated classification of rice, improving accuracy and efficiency. However, classical models based on first-stage training may face difficulties in distinguishing between rice varieties with similar external characteristics, thus leading to misclassifications. Considering the transparency and feasibility of the model, we selected and gradually improved a pure fully connected neural network to classify rice grains. The dataset we used contains both global and domestic rice images, obtained from websites and laboratories respectively. First, the training mode was changed from one-stage training to two-stage training, which significantly contributes to distinguishing two similar types of rice. Second, the preprocessing method was changed from random tilting to horizontal or vertical position correction. After these two enhancements, the accuracy of our model increased notably from 97% to 99%. In summary, the two subtle methods proposed in this study can remarkably enhance the classification ability of deep learning models for rice grain classification.
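The second enhancement above, replacing random tilting with horizontal/vertical position correction, can be sketched as follows. This is not the authors' code; it is a minimal illustration, assuming the grain is available as a binary mask, of estimating a grain's principal-axis angle from second-order image moments so the grain can then be rotated into a horizontal position. The function name and the moment-based approach are my assumptions.

```python
# Hedged sketch (not the paper's implementation): estimate the tilt angle of
# a rice grain from a binary mask using second-order central moments. A
# subsequent rotation by -angle would align the grain's major axis with the
# horizontal, as in the paper's position-correction preprocessing step.
import numpy as np

def principal_axis_angle(mask: np.ndarray) -> float:
    """Return the angle (radians) of the major axis of a binary mask.

    An angle of 0 means the grain is already horizontal.
    """
    ys, xs = np.nonzero(mask)           # pixel coordinates of the grain
    x = xs - xs.mean()                  # center the coordinates
    y = ys - ys.mean()
    mu20 = (x * x).mean()               # second-order central moments
    mu02 = (y * y).mean()
    mu11 = (x * y).mean()
    # Standard moment-based orientation formula
    return 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

# Usage: a synthetic "grain" drawn as a thin diagonal blob tilted ~45 degrees
mask = np.zeros((64, 64), dtype=bool)
for i in range(10, 50):
    mask[i, i] = mask[i, i + 1] = True
angle = principal_axis_angle(mask)      # close to pi/4 for this blob
```

In a full pipeline, the image would then be rotated by `-angle` (e.g. with an image library's rotate function) before being fed to the network, so that every grain reaches the classifier in a consistent orientation.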
Related papers
- An Overall Real-Time Mechanism for Classification and Quality Evaluation of Rice [1.7034902216513157]
This study proposes a real-time evaluation mechanism for comprehensive rice grain assessment. It integrates a one-stage object detection approach, a deep convolutional neural network, and traditional machine learning techniques. The proposed framework enables rice variety identification, grain completeness grading, and grain chalkiness evaluation.
arXiv Detail & Related papers (2025-02-19T14:24:25Z) - Retrieval Augmented Recipe Generation [96.43285670458803]
We propose a retrieval augmented large multimodal model for recipe generation. It retrieves recipes semantically related to the image from an existing datastore as a supplement. It calculates the consistency among generated recipe candidates, which use different retrieval recipes as context for generation.
arXiv Detail & Related papers (2024-11-13T15:58:50Z) - A novel method for identifying rice seed purity based on hybrid machine learning algorithms [0.0]
In the grain industry, the identification of seed purity is a crucial task as it is an important factor in evaluating the quality of seeds.
This study proposes a novel method for automatically identifying the rice seed purity of a certain rice variety based on hybrid machine learning algorithms.
arXiv Detail & Related papers (2024-06-09T17:13:25Z) - Cell Phone Image-Based Persian Rice Detection and Classification Using Deep Learning Techniques [0.0]
This study introduces an innovative approach to classifying various types of Persian rice using image-based deep learning techniques.
We leveraged the capabilities of convolutional neural networks (CNNs), specifically by fine-tuning a ResNet model for accurate identification of different rice varieties.
This study contributes to the field by providing insights into the applicability of image-based deep learning in daily life.
arXiv Detail & Related papers (2024-04-21T07:03:48Z) - Transferring Knowledge for Food Image Segmentation using Transformers and Convolutions [65.50975507723827]
Food image segmentation is an important task that has ubiquitous applications, such as estimating the nutritional value of a plate of food.
One challenge is that food items can overlap and mix, making them difficult to distinguish.
Two models are trained and compared, one based on convolutional neural networks and the other on Bidirectional representation for Image Transformers (BEiT).
The BEiT model outperforms the previous state-of-the-art model by achieving a mean intersection over union of 49.4 on FoodSeg103.
arXiv Detail & Related papers (2023-06-15T15:38:10Z) - Vision-Based Defect Classification and Weight Estimation of Rice Kernels [12.747541089354538]
We present an automatic visual quality estimation system of rice kernels, to classify the sampled rice kernels according to their types of flaws, and evaluate their quality via the weight ratios of the perspective kernel types.
We define a novel metric to measure the relative weight of each kernel in the image from its area, such that the relative weight of each type of kernel with regard to all samples can be computed and used as the basis for rice quality estimation.
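The area-based metric described above can be sketched as follows. This is an illustrative assumption, not that paper's code: it treats each kernel's pixel area as a proxy for its relative weight and aggregates per-type weight shares. The function name, data layout, and type labels are hypothetical.

```python
# Hedged sketch: compute per-type weight ratios from kernel pixel areas,
# using area as a stand-in for relative weight as the summary describes.
def weight_ratios(kernels):
    """kernels: list of (type_label, pixel_area) pairs.

    Returns a dict mapping each kernel type to its share of total area,
    which serves as the basis for a rice quality estimate.
    """
    total = sum(area for _, area in kernels)
    ratios = {}
    for label, area in kernels:
        ratios[label] = ratios.get(label, 0.0) + area / total
    return ratios

# Usage with made-up detections from one image (labels are assumptions)
sample = [("whole", 120), ("whole", 110), ("broken", 40), ("chalky", 30)]
ratios = weight_ratios(sample)  # shares over all kernels; they sum to 1.0
```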
arXiv Detail & Related papers (2022-10-06T03:58:05Z) - Facilitated machine learning for image-based fruit quality assessment in developing countries [68.8204255655161]
Automated image classification is a common task for supervised machine learning in food science.
We propose an alternative method based on pre-trained vision transformers (ViTs).
It can be easily implemented with limited resources on a standard device.
arXiv Detail & Related papers (2022-07-10T19:52:20Z) - Cross-lingual Adaptation for Recipe Retrieval with Mixup [56.79360103639741]
Cross-modal recipe retrieval has attracted research attention in recent years, thanks to the availability of large-scale paired data for training.
This paper studies unsupervised domain adaptation for image-to-recipe retrieval, where recipes in source and target domains are in different languages.
A novel recipe mixup method is proposed to learn transferable embedding features between the two domains.
arXiv Detail & Related papers (2022-05-08T15:04:39Z) - Online Continual Learning For Visual Food Classification [7.704949298975352]
Existing methods require static datasets for training and are not capable of learning from sequentially available new food images.
We introduce a novel clustering based exemplar selection algorithm to store the most representative data belonging to each learned food.
Our results show significant improvements compared with existing state-of-the-art online continual learning methods.
arXiv Detail & Related papers (2021-08-15T17:48:03Z) - Two-View Fine-grained Classification of Plant Species [66.75915278733197]
We propose a novel method based on a two-view leaf image representation and a hierarchical classification strategy for fine-grained recognition of plant species.
A deep metric based on Siamese convolutional neural networks is used to reduce the dependence on a large number of training samples and make the method scalable to new plant species.
arXiv Detail & Related papers (2020-05-18T21:57:47Z) - Cross-Modal Food Retrieval: Learning a Joint Embedding of Food Images and Recipes with Semantic Consistency and Attention Mechanism [70.85894675131624]
We learn an embedding of images and recipes in a common feature space, such that the corresponding image-recipe embeddings lie close to one another.
We propose Semantic-Consistent and Attention-based Networks (SCAN), which regularize the embeddings of the two modalities through aligning output semantic probabilities.
We show that we can outperform several state-of-the-art cross-modal retrieval strategies for food images and cooking recipes by a significant margin.
arXiv Detail & Related papers (2020-03-09T07:41:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.