Automatic Cattle Identification using YOLOv5 and Mosaic Augmentation: A
Comparative Analysis
- URL: http://arxiv.org/abs/2210.11939v1
- Date: Fri, 21 Oct 2022 13:13:40 GMT
- Title: Automatic Cattle Identification using YOLOv5 and Mosaic Augmentation: A
Comparative Analysis
- Authors: Rabin Dulal, Lihong Zheng, Muhammad Ashad Kabir, Shawn McGrath,
Jonathan Medway, Dave Swain, Will Swain
- Abstract summary: This paper investigates the YOLOv5 model to identify cattle in yards.
Muzzle patterns in cattle are a unique biometric identifier, like fingerprints in humans.
- Score: 2.161241370008739
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: You Only Look Once (YOLO) is a single-stage object detection model popular
for real-time object detection, accuracy, and speed. This paper investigates
the YOLOv5 model for identifying cattle in yards. The current solution for
cattle identification is radio-frequency identification (RFID) tags, which
fails when a tag is lost or damaged. A biometric solution can identify the
cattle, making it possible to reassign a lost or damaged tag or to replace the
RFID-based system altogether. Muzzle patterns in cattle are a unique biometric
identifier, much like fingerprints in humans. This paper presents our recent
research: comparing five popular object detection models, examining the
architecture of YOLOv5, evaluating the performance of eight backbones with the
YOLOv5 model, and measuring the influence of mosaic augmentation in YOLOv5
through experiments on the available cattle muzzle images. We conclude that
YOLOv5 has excellent potential for automatic cattle identification. Our
experiments show that YOLOv5 with a transformer backbone performed best, with
mAP@0.5 (mean Average Precision at an IoU threshold of 0.5) of 0.995 and
mAP@0.5:0.95 (mean AP averaged over IoU thresholds from 0.5 to 0.95 in steps
of 0.05) of 0.9366. In addition, mosaic augmentation increased accuracy for
every backbone used in our experiments. Moreover, the model can detect cattle
even from partial muzzle images.
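Mosaic augmentation, as used in YOLOv5, stitches four training images into a single composite around a random center point, exposing the detector to varied object scales and contexts within one sample. The sketch below is a minimal, hypothetical illustration of the image-composition step only (NumPy; the `mosaic_4` helper and its parameters are our own naming, and real pipelines additionally crop at random offsets and remap bounding-box labels):

```python
import numpy as np

def mosaic_4(images, out_size=640, rng=None):
    """Compose four images into one mosaic around a random center point.
    Simplified sketch: each image fills one quadrant of the output canvas;
    bounding-box remapping (needed for detection labels) is omitted."""
    assert len(images) == 4
    if rng is None:
        rng = np.random.default_rng()
    # Random mosaic center, kept away from the canvas borders.
    cx = int(rng.uniform(0.25, 0.75) * out_size)
    cy = int(rng.uniform(0.25, 0.75) * out_size)
    canvas = np.full((out_size, out_size, 3), 114, dtype=np.uint8)  # grey fill
    # Canvas quadrants as (x0, y0, x1, y1): top-left, top-right,
    # bottom-left, bottom-right.
    regions = [(0, 0, cx, cy), (cx, 0, out_size, cy),
               (0, cy, cx, out_size), (cx, cy, out_size, out_size)]
    for img, (x0, y0, x1, y1) in zip(images, regions):
        h, w = y1 - y0, x1 - x0
        # Take a top-left crop of each source image sized to its quadrant
        # (real implementations crop at a random offset and adjust labels).
        crop = img[:h, :w]
        canvas[y0:y0 + crop.shape[0], x0:x0 + crop.shape[1]] = crop
    return canvas
```

In YOLOv5's own training pipeline this composition happens inside the data loader, together with label remapping and random affine transforms, which is what lets a muzzle detector see many scales and partial views per batch.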
Related papers
- YOLOv10: Real-Time End-to-End Object Detection [68.28699631793967]
YOLOs have emerged as the predominant paradigm in the field of real-time object detection.
The reliance on the non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs.
We introduce the holistic efficiency-accuracy driven model design strategy for YOLOs.
arXiv Detail & Related papers (2024-05-23T11:44:29Z)
- YOLOv5 vs. YOLOv8 in Marine Fisheries: Balancing Class Detection and Instance Count [0.0]
This paper presents a comparative study of object detection using YOLOv5 and YOLOv8 for three distinct classes: artemia, cyst, and excrement.
YOLOv5 often performed better in detecting Artemia and cysts with excellent precision and accuracy.
However, when it came to detecting excrement, YOLOv5 faced notable challenges and limitations.
arXiv Detail & Related papers (2024-04-01T20:01:04Z)
- Mask wearing object detection algorithm based on improved YOLOv5 [6.129833920546161]
This paper proposes a mask-wearing face detection model based on YOLOv5l.
Our proposed method significantly enhances the detection of mask wearing.
arXiv Detail & Related papers (2023-10-16T10:06:42Z)
- YOLO-MS: Rethinking Multi-Scale Representation Learning for Real-time Object Detection [80.11152626362109]
We provide an efficient and performant object detector, termed YOLO-MS.
We train our YOLO-MS on the MS COCO dataset from scratch without relying on any other large-scale datasets.
Our work can also be used as a plug-and-play module for other YOLO models.
arXiv Detail & Related papers (2023-08-10T10:12:27Z)
- Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then finetunes the model or prunes it with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z)
- EdgeYOLO: An Edge-Real-Time Object Detector [69.41688769991482]
This paper proposes an efficient, low-complexity and anchor-free object detector based on the state-of-the-art YOLO framework.
We develop an enhanced data augmentation method to effectively suppress overfitting during training, and design a hybrid random loss function to improve the detection accuracy of small objects.
Our baseline model reaches 50.6% AP50:95 and 69.8% AP50 on the MS COCO 2017 dataset, and 26.4% AP50:95 and 44.8% AP50 on the VisDrone 2019-DET dataset, and it meets real-time requirements (FPS >= 30) on an Nvidia edge-computing device
arXiv Detail & Related papers (2023-02-15T06:05:14Z)
- Comparison Of Deep Object Detectors On A New Vulnerable Pedestrian Dataset [2.7624021966289605]
We introduce a new dataset for vulnerable pedestrian detection: the BG Vulnerable Pedestrian dataset.
This dataset consists of images collected from the public domain with manually annotated bounding boxes.
On the proposed dataset, we have trained and tested five classic or state-of-the-art object detection models, i.e., YOLOv4, YOLOv5, YOLOX, Faster R-CNN, and EfficientDet.
arXiv Detail & Related papers (2022-12-12T19:59:47Z)
- A lightweight and accurate YOLO-like network for small target detection in Aerial Imagery [94.78943497436492]
We present YOLO-S, a simple, fast and efficient network for small target detection.
YOLO-S exploits a small feature extractor based on Darknet20, as well as skip connection, via both bypass and concatenation.
YOLO-S has an 87% decrease of parameter size and almost one half FLOPs of YOLOv3, making practical the deployment for low-power industrial applications.
arXiv Detail & Related papers (2022-04-05T16:29:49Z)
- Evaluation of YOLO Models with Sliced Inference for Small Object Detection [0.0]
This work aims to benchmark the YOLOv5 and YOLOX models for small object detection.
The effects of sliced fine-tuning and sliced inference combined produced substantial improvement for all models.
arXiv Detail & Related papers (2022-03-09T15:24:30Z)
- COVID-19 Detection Using CT Image Based On YOLOv5 Network [31.848436570442704]
The dataset is provided by the Kaggle platform, and we choose YOLOv5 as our model.
We introduce some object detection methods in the related work section.
Object detection can be divided into two streams: one-stage and two-stage.
arXiv Detail & Related papers (2022-01-24T21:50:58Z)
- Persistent Animal Identification Leveraging Non-Visual Markers [71.14999745312626]
We aim to locate and provide a unique identifier for each mouse in a cluttered home-cage environment through time.
This is a very challenging problem due to (i) the lack of distinguishing visual features for each mouse, and (ii) the close confines of the scene with constant occlusion.
Our approach achieves 77% accuracy on this animal identification problem, and is able to reject spurious detections when the animals are hidden.
arXiv Detail & Related papers (2021-12-13T17:11:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all listed papers) and is not responsible for any consequences of its use.