Cross-Breed Pig Identification Using Auricular Vein Pattern Recognition: A Machine Learning Approach for Small-Scale Farming Applications
- URL: http://arxiv.org/abs/2510.02197v1
- Date: Thu, 02 Oct 2025 16:45:43 GMT
- Title: Cross-Breed Pig Identification Using Auricular Vein Pattern Recognition: A Machine Learning Approach for Small-Scale Farming Applications
- Authors: Emmanuel Nsengiyumva, Leonard Niyitegeka, Eric Umuhoza
- Abstract summary: We propose a noninvasive biometric identification approach that leverages the uniqueness of auricular vein patterns. A computer vision pipeline was developed to enhance vein visibility, extract structural and spatial features, and generate biometric signatures. Support Vector Machines (SVM) achieved the highest accuracy, correctly identifying pigs with 98.12% precision across mixed-breed populations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurate livestock identification is a cornerstone of modern farming: it supports health monitoring, breeding programs, and productivity tracking. However, common pig identification methods, such as ear tags and microchips, are often unreliable, costly, target pure breeds, and are thus impractical for small-scale farmers. To address this gap, we propose a noninvasive biometric identification approach that leverages the uniqueness of auricular vein patterns. To this end, we have collected 800 ear images from 20 mixed-breed pigs (Landrace cross Pietrain and Duroc cross Pietrain), captured using a standard smartphone and simple backlighting. A multistage computer vision pipeline was developed to enhance vein visibility, extract structural and spatial features, and generate biometric signatures. These features were then classified using machine learning models. Support Vector Machines (SVM) achieved the highest accuracy, correctly identifying pigs with 98.12% precision across mixed-breed populations. The entire process from image processing to classification was completed in an average of 8.3 seconds, demonstrating feasibility for real-time farm deployment. We believe that by replacing fragile physical identifiers with permanent biological markers, this system provides farmers with a cost-effective and stress-free method of animal identification. More broadly, the findings confirm the practicality of auricular vein biometrics for digitizing livestock management, reinforcing its potential to extend the benefits of precision farming to resource-constrained agricultural communities.
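The abstract's pipeline (enhance vein image, extract structural/spatial features, classify with an SVM) can be sketched as a toy end-to-end example. This is not the authors' code: the feature choices (intensity statistics, gradient energy, coarse row/column profiles) and the synthetic "vein" data are illustrative stand-ins, and only the 20-pig / 40-images-per-animal setup mirrors the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def vein_features(img):
    """Toy stand-ins for structural/spatial vein features:
    intensity statistics, gradient energy, coarse row/column profiles."""
    gy, gx = np.gradient(img.astype(float))
    return np.concatenate([
        [img.mean(), img.std(), np.hypot(gx, gy).mean()],
        img.mean(axis=0)[::4],   # coarse column intensity profile
        img.mean(axis=1)[::4],   # coarse row intensity profile
    ])

# Synthetic data: 20 "pigs", each a fixed random vein template observed
# 40 times under noise (mirroring 800 images from 20 animals).
templates = rng.normal(size=(20, 32, 32))
X, y = [], []
for pig_id, template in enumerate(templates):
    for _ in range(40):
        noisy = template + 0.3 * rng.normal(size=template.shape)
        X.append(vein_features(noisy))
        y.append(pig_id)
X, y = np.array(X), np.array(y)

# Standardize features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X[::2], y[::2])            # train on half of the images
acc = clf.score(X[1::2], y[1::2])  # evaluate on the held-out half
print(f"hold-out accuracy: {acc:.3f}")
```

On real data, the feature-extraction step would operate on enhanced smartphone ear images rather than random templates, but the train/classify structure is the same.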
Related papers
- Automated Re-Identification of Holstein-Friesian Cattle in Dense Crowds [2.3843187053931456]
We propose a new detect-segment-identify pipeline that leverages the Open-Vocabulary Weight-free Localisation and the Segment Anything models. Our methodology overcomes detection breakdown in dense animal groupings, resulting in a 98.93% accuracy. We show that unsupervised contrastive learning can build on this to yield 94.82% Re-ID accuracy on our test data.
arXiv Detail & Related papers (2026-02-17T19:25:50Z) - Re-Identifying Kākā with AI-Automated Video Key Frame Extraction [0.0]
This study presents a unique pipeline for extracting high-quality key frames from videos of kākā (Nestor meridionalis). Using video recordings at a custom-built feeder, we extract key frames and evaluate the re-identification performance of our pipeline. Results indicate that our proposed key frame selection methods yield image collections which achieve high accuracy in kākā re-identification.
arXiv Detail & Related papers (2025-10-09T19:46:46Z) - Self-supervised Learning on Camera Trap Footage Yields a Strong Universal Face Embedder [48.03572115000886]
This study introduces a fully self-supervised approach to learning robust chimpanzee face embeddings from unlabeled camera-trap footage. We train Vision Transformers on automatically mined face crops, eliminating the need for identity labels. This work underscores the potential of self-supervised learning in biodiversity monitoring and paves the way for scalable, non-invasive population studies.
arXiv Detail & Related papers (2025-07-14T17:59:59Z) - Holstein-Friesian Re-Identification using Multiple Cameras and Self-Supervision on a Working Farm [2.9391768712283772]
We present MultiCamCows2024, a farm-scale image dataset filmed across multiple cameras for the biometric identification of individual Holstein-Friesian cattle. The dataset comprises 101,329 images of 90 cows, plus underlying original CCTV footage. We report a performance above 96% single image identification accuracy from the dataset and demonstrate that combining data from multiple cameras during learning enhances self-supervised identification.
arXiv Detail & Related papers (2024-10-16T15:58:47Z) - Occlusion-Resistant Instance Segmentation of Piglets in Farrowing Pens
Using Center Clustering Network [48.42863035798351]
We propose a novel Center Clustering Network for instance segmentation, dubbed CClusnet-Inseg.
CClusnet-Inseg uses each pixel to predict object centers and trace these centers to form masks based on clustering results.
In all, 4,600 images were extracted from six videos collected from six farrowing pens to train and validate our method.
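The center-voting idea behind this kind of network can be illustrated in a few lines of NumPy. This is not the authors' CClusnet: a trained network would regress each pixel's offset to its object center, whereas here the votes are simulated from known centers with noise, and a simple 2-means loop stands in for the clustering step.

```python
import numpy as np

rng = np.random.default_rng(1)
H, W = 40, 40
centers = np.array([[10.0, 10.0], [30.0, 28.0]])  # two toy "piglet" centers

# Pixel coordinate grid, shape (H, W, 2).
ys, xs = np.mgrid[0:H, 0:W]
coords = np.stack([ys, xs], axis=-1).astype(float)

# Foreground mask: pixels within radius 6 of either true center.
d = np.linalg.norm(coords[:, :, None] - centers, axis=-1)  # (H, W, 2)
fg = d.min(axis=-1) < 6

# Simulate per-pixel center predictions (a network would output these):
# each foreground pixel votes for its object's center, with noise.
votes = centers[d.argmin(-1)][fg] + rng.normal(0, 0.5, (fg.sum(), 2))

# Cluster the votes with a small k-means loop; assigning each
# foreground pixel to its nearest cluster then yields instance masks.
mu = np.array([votes[0], votes[-1]])  # init from two distant votes
for _ in range(10):
    lbl = np.linalg.norm(votes[:, None] - mu, axis=-1).argmin(-1)
    mu = np.array([votes[lbl == k].mean(axis=0) for k in range(2)])

print(np.round(mu, 1))  # recovered instance centers
```

Because every pixel of an animal votes for the same center, instances stay separable even when their silhouettes touch or overlap, which is what makes the approach occlusion-resistant.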
arXiv Detail & Related papers (2022-06-04T08:43:30Z) - A Competitive Method for Dog Nose-print Re-identification [46.94755073943372]
This paper presents our proposed methods for dog nose-print authentication (Re-ID) task in CVPR 2022 pet biometric challenge.
With an ensemble of multiple models, our method achieves 86.67% AUC on the test set.
arXiv Detail & Related papers (2022-05-31T16:26:46Z) - Persistent Animal Identification Leveraging Non-Visual Markers [71.14999745312626]
We aim to locate and provide a unique identifier for each mouse in a cluttered home-cage environment through time.
This is a very challenging problem due to (i) the lack of distinguishing visual features for each mouse, and (ii) the close confines of the scene with constant occlusion.
Our approach achieves 77% accuracy on this animal identification problem, and is able to reject spurious detections when the animals are hidden.
arXiv Detail & Related papers (2021-12-13T17:11:32Z) - Livestock Monitoring with Transformer [4.298326853567677]
We develop an end-to-end behaviour monitoring system for group-housed pigs to perform simultaneous instance level segmentation, tracking, action recognition and re-identification tasks.
We present starformer, the first end-to-end multiple-object livestock monitoring framework that learns instance-level embeddings for grouped pigs through the use of transformer architecture.
arXiv Detail & Related papers (2021-11-01T10:03:49Z) - Visual Identification of Individual Holstein-Friesian Cattle via Deep Metric Learning [8.784100314325395]
Holstein-Friesian cattle exhibit individually-characteristic black and white coat patterns visually akin to those arising from Turing's reaction-diffusion systems.
This work takes advantage of these natural markings in order to automate visual detection and biometric identification of individual Holstein-Friesians via convolutional neural networks and deep metric learning techniques.
arXiv Detail & Related papers (2020-06-16T14:41:55Z) - Transferring Dense Pose to Proximal Animal Classes [83.84439508978126]
We show that it is possible to transfer the knowledge existing in dense pose recognition for humans, as well as in more general object detectors and segmenters, to the problem of dense pose recognition in other classes.
We do this by establishing a DensePose model for the new animal which is also geometrically aligned to humans.
We also introduce two benchmark datasets labelled in the manner of DensePose for the class chimpanzee and use them to evaluate our approach.
arXiv Detail & Related papers (2020-02-28T21:43:53Z) - Automatic image-based identification and biomass estimation of invertebrates [70.08255822611812]
Time-consuming sorting and identification of taxa pose strong limitations on how many insect samples can be processed.
We propose to replace the standard manual approach of human expert-based sorting and identification with an automatic image-based technology.
We use state-of-the-art Resnet-50 and InceptionV3 CNNs for the classification task.
arXiv Detail & Related papers (2020-02-05T21:38:57Z)