PawPrint: Whose Footprints Are These? Identifying Animal Individuals by Their Footprints
- URL: http://arxiv.org/abs/2505.17445v1
- Date: Fri, 23 May 2025 04:02:04 GMT
- Title: PawPrint: Whose Footprints Are These? Identifying Animal Individuals by Their Footprints
- Authors: Inpyo Song, Hyemin Hwang, Jangwon Lee
- Abstract summary: PawPrint and PawPrint+ are the first publicly available datasets focused on individual-level footprint identification for dogs and cats. We observe varying advantages and drawbacks depending on substrate complexity and data availability. As this approach provides a non-invasive alternative to traditional ID tags, we anticipate promising applications in ethical pet management and wildlife conservation efforts.
- Score: 2.651771159687148
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the United States, as of 2023, pet ownership has reached 66% of households and continues to rise annually. This trend underscores the critical need for effective pet identification and monitoring methods, particularly as nearly 10 million cats and dogs are reported stolen or lost each year. However, traditional methods for finding lost animals, such as GPS tags or ID photos, have limitations: they can be removed, face signal issues, and depend on someone finding and reporting the pet. To address these limitations, we introduce PawPrint and PawPrint+, the first publicly available datasets focused on individual-level footprint identification for dogs and cats. Through comprehensive benchmarking of both modern deep neural networks (e.g., CNNs, Transformers) and classical local features, we observe varying advantages and drawbacks depending on substrate complexity and data availability. These insights suggest future directions for combining learned global representations with local descriptors to enhance reliability across diverse, real-world conditions. As this approach provides a non-invasive alternative to traditional ID tags, we anticipate promising applications in ethical pet management and wildlife conservation efforts.
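The abstract's proposed direction of combining learned global representations with classical local descriptors can be illustrated with a minimal score-level fusion sketch. This is an assumption about how such a combination might look, not the paper's actual method: `fused_score`, the weight `alpha`, and the `max_matches` normalization are all hypothetical names and choices introduced here for illustration.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two global embedding vectors
    (e.g. produced by a CNN or Transformer backbone)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_score(glob_a, glob_b, n_local_matches, max_matches=100, alpha=0.7):
    """Blend a learned global-embedding similarity with a classical
    local-descriptor match count (e.g. inlier matches from SIFT/ORB).
    Hypothetical fusion rule: weighted sum of the two normalized scores."""
    g = cosine_sim(glob_a, glob_b)                        # roughly in [-1, 1]
    l = min(n_local_matches, max_matches) / max_matches   # clipped to [0, 1]
    return alpha * g + (1 - alpha) * l
```

A pair of identical embeddings with 50 local matches would score `0.7 * 1.0 + 0.3 * 0.5 = 0.85` under these assumed weights; in practice the weighting would be tuned per substrate, which is exactly the kind of condition-dependent trade-off the benchmarking observes.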
Related papers
- Self-supervised Learning on Camera Trap Footage Yields a Strong Universal Face Embedder [48.03572115000886]
This study introduces a fully self-supervised approach to learning robust chimpanzee face embeddings from unlabeled camera-trap footage. We train Vision Transformers on automatically mined face crops, eliminating the need for identity labels. This work underscores the potential of self-supervised learning in biodiversity monitoring and paves the way for scalable, non-invasive population studies.
arXiv Detail & Related papers (2025-07-14T17:59:59Z)
- PetFace: A Large-Scale Dataset and Benchmark for Animal Identification [2.3020018305241337]
We introduce the PetFace dataset, a comprehensive resource for animal face identification.
PetFace includes 257,484 unique individuals across 13 animal families and 319 breed categories, including both experimental and pet animals.
We provide benchmarks including re-identification for seen individuals and verification for unseen individuals.
arXiv Detail & Related papers (2024-07-18T14:28:31Z)
- OpenAnimalTracks: A Dataset for Animal Track Recognition [2.3020018305241337]
We introduce OpenAnimalTracks dataset, the first publicly available labeled dataset designed to facilitate the automated classification and detection of animal footprints.
We show the potential of automated footprint identification with representative classifiers and detection models.
We hope our dataset paves the way for automated animal tracking techniques, enhancing our ability to protect and manage biodiversity.
arXiv Detail & Related papers (2024-06-14T00:37:17Z)
- Multimodal Foundation Models for Zero-shot Animal Species Recognition in Camera Trap Images [57.96659470133514]
Motion-activated camera traps constitute an efficient tool for tracking and monitoring wildlife populations across the globe.
Supervised learning techniques have been successfully deployed to analyze such imagery, however training such techniques requires annotations from experts.
Reducing the reliance on costly labelled data has immense potential in developing large-scale wildlife tracking solutions with markedly less human labor.
arXiv Detail & Related papers (2023-11-02T08:32:00Z)
- LostPaw: Finding Lost Pets using a Contrastive Learning-based Transformer with Visual Input [0.24578723416255752]
This study introduces a contrastive neural network model capable of accurately distinguishing between images of pets. The model was trained on a large dataset of dog images and evaluated through 3-fold cross-validation. Our findings suggest that contrastive neural network models hold promise as a tool for locating lost pets.
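The contrastive objective behind such models can be sketched with the classic pairwise formulation: pull embeddings of the same pet together, push embeddings of different pets at least a margin apart. This is a generic contrastive loss (Hadsell et al. style), shown here only as an assumed illustration of the training signal, not LostPaw's exact loss function.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, same, margin=1.0):
    """Pairwise contrastive loss on two embedding vectors.
    same=True  -> penalize distance (pull the pair together).
    same=False -> penalize only if closer than `margin` (push apart)."""
    d = np.linalg.norm(emb_a - emb_b)
    if same:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2
```

Negative pairs already separated by more than the margin contribute zero loss, which is why such models concentrate capacity on hard, visually similar pairs.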
arXiv Detail & Related papers (2023-04-28T11:23:44Z)
- A Competitive Method for Dog Nose-print Re-identification [46.94755073943372]
This paper presents our methods for the dog nose-print re-identification (Re-ID) task in the CVPR 2022 pet biometric challenge.
By ensembling multiple models, our method achieves 86.67% AUC on the test set.
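One plausible reading of "multiple models ensembled" is simple score-level averaging, with ROC AUC then computed over the verification pairs. The sketch below is an assumption for illustration: `ensemble_auc` is a hypothetical helper, and the AUC is computed via the Mann-Whitney U statistic rather than any challenge-specific tooling.

```python
import numpy as np

def ensemble_auc(score_lists, labels):
    """Average the match scores of several models (score-level ensembling),
    then compute ROC AUC as the fraction of (positive, negative) pairs
    ranked correctly; ties count half (Mann-Whitney U formulation)."""
    scores = np.mean(np.asarray(score_lists, dtype=float), axis=0)
    labels = np.asarray(labels)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Averaging scores is only one ensembling choice; rank averaging or learned weighting are common alternatives when the member models produce scores on different scales.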
arXiv Detail & Related papers (2022-05-31T16:26:46Z)
- Persistent Animal Identification Leveraging Non-Visual Markers [71.14999745312626]
We aim to locate and provide a unique identifier for each mouse in a cluttered home-cage environment through time.
This is a very challenging problem due to (i) the lack of distinguishing visual features for each mouse, and (ii) the close confines of the scene with constant occlusion.
Our approach achieves 77% accuracy on this animal identification problem, and is able to reject spurious detections when the animals are hidden.
arXiv Detail & Related papers (2021-12-13T17:11:32Z)
- AP-10K: A Benchmark for Animal Pose Estimation in the Wild [83.17759850662826]
We propose AP-10K, the first large-scale benchmark for general animal pose estimation.
AP-10K consists of 10,015 images collected and filtered from 23 animal families and 60 species.
Results provide sound empirical evidence on the superiority of learning from diverse animal species in terms of both accuracy and generalization ability.
arXiv Detail & Related papers (2021-08-28T10:23:34Z)
- Dog Identification using Soft Biometrics and Neural Networks [1.2922946578413577]
We apply advanced machine learning models, such as deep neural networks, to photographs of pets in order to determine the pet's identity.
We explore the possibility of using different types of "soft" biometrics, such as breed, height, or gender, in fusion with "hard" biometrics such as photographs of the pet's face.
The proposed network is able to achieve an accuracy of 90.80% and 91.29% when differentiating between the two dog breeds.
arXiv Detail & Related papers (2020-07-22T10:22:46Z)
- Transferring Dense Pose to Proximal Animal Classes [83.84439508978126]
We show that it is possible to transfer the knowledge existing in dense pose recognition for humans, as well as in more general object detectors and segmenters, to the problem of dense pose recognition in other classes.
We do this by establishing a DensePose model for the new animal which is also geometrically aligned to humans.
We also introduce two benchmark datasets labelled in the manner of DensePose for the class chimpanzee and use them to evaluate our approach.
arXiv Detail & Related papers (2020-02-28T21:43:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.