Comparing Male Nyala and Male Kudu Classification using Transfer
Learning with ResNet-50 and VGG-16
- URL: http://arxiv.org/abs/2311.05981v1
- Date: Fri, 10 Nov 2023 10:43:46 GMT
- Title: Comparing Male Nyala and Male Kudu Classification using Transfer
Learning with ResNet-50 and VGG-16
- Authors: T.T. Lemani and T.L. van Zyl
- Abstract summary: This paper investigates the efficiency of pre-trained models, specifically the VGG-16 and ResNet-50 models, in identifying a male Kudu and a male Nyala in their natural habitats.
The experimental results achieved an accuracy of 93.2% and 97.7% for the VGG-16 and ResNet-50 models, respectively, before fine-tuning and 97.7% for both models after fine-tuning.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Reliable and efficient monitoring of wild animals is crucial to inform
management and conservation decisions. The process of manually identifying
species of animals is time-consuming, monotonous, and expensive. Leveraging
advances in deep learning and computer vision, we investigate in this paper the
efficiency of pre-trained models, specifically the VGG-16 and ResNet-50 models,
in identifying a male Kudu and a male Nyala in their natural habitats. These
pre-trained models have proven to be efficient in animal identification in
general. Still, there is little research on animals like the Kudu and Nyala,
which are usually well camouflaged and have similar features. The method of
transfer learning used in this paper is the fine-tuning method. The models are
evaluated before and after fine-tuning. The experimental results achieved an
accuracy of 93.2% and 97.7% for the VGG-16 and ResNet-50 models,
respectively, before fine-tuning and 97.7% for both models after fine-tuning.
Although these results are impressive, they were obtained on a small sample of
550 images split evenly between the two classes, which may not cover enough
scenarios to draw firm conclusions about the models' efficiency. There is
therefore room for further work on a more extensive dataset and for extending
the evaluation to the female counterparts of these species and to other
antelope species.
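For readers who want to reproduce the general setup, the sketch below illustrates the two-stage procedure the abstract describes: first training only a new two-class head on a frozen ImageNet-pretrained ResNet-50 (the "before fine-tuning" baseline), then unfreezing the backbone and training end-to-end (the "after fine-tuning" result). The paper does not publish code, so the framework (PyTorch), dataset layout, train/validation split, and hyperparameters here are assumptions for illustration only; VGG-16 would be handled analogously.

```python
# Minimal PyTorch sketch of the two-stage transfer-learning procedure described
# above. Dataset layout, image size, optimiser, and hyperparameters are
# illustrative assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing, assumed because the backbone is
# ImageNet-pretrained.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical folder layout: data/kudu_male/*.jpg and data/nyala_male/*.jpg
# (the paper uses 550 images split evenly between the two classes).
full_set = datasets.ImageFolder("data", transform=preprocess)
n_val = len(full_set) // 5                      # assumed 80/20 train/val split
train_set, val_set = random_split(full_set, [len(full_set) - n_val, n_val])
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)
val_loader = DataLoader(val_set, batch_size=32)

# ImageNet-pretrained ResNet-50 with a new two-class head
# (male kudu vs. male nyala).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)
criterion = nn.CrossEntropyLoss()

def train(model, params, epochs, lr):
    """Generic training loop shared by both stages."""
    optimizer = torch.optim.Adam(params, lr=lr)
    model.train()
    for _ in range(epochs):
        for images, labels in train_loader:
            optimizer.zero_grad()
            criterion(model(images), labels).backward()
            optimizer.step()

@torch.no_grad()
def accuracy(model):
    """Fraction of validation images classified correctly."""
    model.eval()
    correct = total = 0
    for images, labels in val_loader:
        correct += (model(images).argmax(dim=1) == labels).sum().item()
        total += labels.numel()
    return correct / total

# Stage 1 ("before fine-tuning"): freeze the backbone and train only the new
# head, i.e. use the pretrained network as a fixed feature extractor.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
train(model, model.fc.parameters(), epochs=5, lr=1e-3)
print("accuracy before fine-tuning:", accuracy(model))

# Stage 2 ("after fine-tuning"): unfreeze everything and continue training
# end-to-end with a smaller learning rate.
for p in model.parameters():
    p.requires_grad = True
train(model, model.parameters(), epochs=5, lr=1e-5)
print("accuracy after fine-tuning:", accuracy(model))
```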
Related papers
- Evaluating Deep Learning Models for African Wildlife Image Classification: From DenseNet to Vision Transformers [3.4801331938495705]
Wildlife populations in Africa face severe threats, with vertebrate numbers declining by over 65% in the past five decades.
In response, image classification using deep learning has emerged as a promising tool for biodiversity monitoring and conservation.
This paper presents a comparative study of deep learning models for automatically classifying African wildlife images.
arXiv Detail & Related papers (2025-07-28T22:18:13Z)
- Transfer Learning for Wildlife Classification: Evaluating YOLOv8 against DenseNet, ResNet, and VGGNet on a Custom Dataset [0.0]
The study utilizes transfer learning to fine-tune pre-trained models on the dataset.
YOLOv8 outperforms other models, achieving a training accuracy of 97.39% and a validation F1-score of 96.50%.
arXiv Detail & Related papers (2024-07-10T15:03:00Z)
- Cattle Identification Using Muzzle Images and Deep Learning Techniques [0.0]
This project explores cattle identification using 4923 muzzle images collected from 268 beef cattle.
From the experiments run, a maximum accuracy of 99.5% is achieved while using the wide ResNet50 model.
arXiv Detail & Related papers (2023-11-14T13:25:41Z)
- General vs. Long-Tailed Age Estimation: An Approach to Kill Two Birds with One Stone [48.849311629912734]
We propose a simple, effective, and flexible training paradigm named GLAE, which is two-fold.
Our GLAE provides a surprising improvement on Morph II, reaching the lowest MAE and CMAE of 1.14 and 1.27 years, respectively.
arXiv Detail & Related papers (2023-07-19T16:51:59Z)
- Learning to Jump: Thinning and Thickening Latent Counts for Generative Modeling [69.60713300418467]
Learning to jump is a general recipe for generative modeling of various types of data.
We demonstrate when learning to jump is expected to perform comparably to learning to denoise, and when it is expected to perform better.
arXiv Detail & Related papers (2023-05-28T05:38:28Z)
- LIMA: Less Is More for Alignment [112.93890201395477]
We train LIMA, a 65B parameter LLaMa language model fine-tuned with the standard supervised loss on only 1,000 carefully curated prompts and responses.
LIMA demonstrates remarkably strong performance, learning to follow specific response formats from only a handful of examples.
In a controlled human study, responses from LIMA are either equivalent or strictly preferred to GPT-4 in 43% of cases.
arXiv Detail & Related papers (2023-05-18T17:45:22Z)
- Rare Wildlife Recognition with Self-Supervised Representation Learning [0.0]
We present a methodology to reduce the amount of required training data by resorting to self-supervised pretraining.
We show that a combination of MoCo, CLD, and geometric augmentations outperforms conventional models pretrained on ImageNet by a large margin.
arXiv Detail & Related papers (2022-10-29T17:57:38Z)
- Bag of Tricks for Long-Tail Visual Recognition of Animal Species in Camera Trap Images [2.294014185517203]
We evaluate recently proposed techniques to address the long-tail visual recognition of animal species in camera trap images.
In general, the square-root sampling was the method that most improved the performance for minority classes by around 10%.
The proposed approach achieved the best trade-off between the performance of the tail class and the cost of the head classes' accuracy.
arXiv Detail & Related papers (2022-06-24T18:30:26Z)
- Ensembling Off-the-shelf Models for GAN Training [55.34705213104182]
We find that pretrained computer vision models can significantly improve performance when used in an ensemble of discriminators.
We propose an effective selection mechanism, by probing the linear separability between real and fake samples in pretrained model embeddings.
Our method can improve GAN training in both limited data and large-scale settings.
arXiv Detail & Related papers (2021-12-16T18:59:50Z)
- AP-10K: A Benchmark for Animal Pose Estimation in the Wild [83.17759850662826]
We propose AP-10K, the first large-scale benchmark for general animal pose estimation.
AP-10K consists of 10,015 images collected and filtered from 23 animal families and 60 species.
Results provide sound empirical evidence on the superiority of learning from diverse animal species in terms of both accuracy and generalization ability.
arXiv Detail & Related papers (2021-08-28T10:23:34Z)
- Self-Supervised Pretraining and Controlled Augmentation Improve Rare Wildlife Recognition in UAV Images [9.220908533011068]
We present a methodology to reduce the amount of required training data by resorting to self-supervised pretraining.
We show that a combination of MoCo, CLD, and geometric augmentations outperforms conventional models pre-trained on ImageNet by a large margin.
arXiv Detail & Related papers (2021-08-17T12:14:28Z)
- Bag of Instances Aggregation Boosts Self-supervised Learning [122.61914701794296]
We propose a simple but effective distillation strategy for unsupervised learning.
Our method, termed BINGO, transfers the relationship learned by the teacher to the student.
BINGO achieves new state-of-the-art performance on small scale models.
arXiv Detail & Related papers (2021-07-04T17:33:59Z)
- Zoo-Tuning: Adaptive Transfer from a Zoo of Models [82.9120546160422]
Zoo-Tuning learns to adaptively transfer the parameters of pretrained models to the target task.
We evaluate our approach on a variety of tasks, including reinforcement learning, image classification, and facial landmark detection.
arXiv Detail & Related papers (2021-06-29T14:09:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.