LostPaw: Finding Lost Pets using a Contrastive Learning-based
Transformer with Visual Input
- URL: http://arxiv.org/abs/2304.14765v1
- Date: Fri, 28 Apr 2023 11:23:44 GMT
- Title: LostPaw: Finding Lost Pets using a Contrastive Learning-based
Transformer with Visual Input
- Authors: Andrei Voinea, Robin Kock, Maruf A. Dhali
- Abstract summary: This study introduces a contrastive neural network model capable of accurately distinguishing between images of pets.
The model was trained on a large dataset of dog images and evaluated through 3-fold cross-validation.
Our findings suggest that contrastive neural network models hold promise as a tool for locating lost pets.
- Score: 0.5156484100374059
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Losing pets can be highly distressing for pet owners, and finding a lost pet
is often challenging and time-consuming. An artificial intelligence-based
application can significantly improve the speed and accuracy of finding lost
pets. In order to facilitate such an application, this study introduces a
contrastive neural network model capable of accurately distinguishing between
images of pets. The model was trained on a large dataset of dog images and
evaluated through 3-fold cross-validation. Following 350 epochs of training,
the model achieved a test accuracy of 90%. Furthermore, overfitting was
avoided, as the test accuracy closely matched the training accuracy. Our
findings suggest that contrastive neural network models hold promise as a tool
for locating lost pets. This paper provides the foundation for a potential web
application that allows users to upload images of their missing pets and
receive notifications when matching images are found in the application's image
database. This would enable pet owners to quickly and accurately locate lost
pets and reunite them with their families.
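As a rough illustration of how such a contrastive matching pipeline can be wired up, here is a minimal sketch assuming a PyTorch-style setup. The ResNet-18 backbone, loss margin, embedding size, and similarity threshold below are illustrative stand-ins, not the paper's actual transformer architecture or hyperparameters.

```python
# Minimal sketch of a contrastive pet-matching model (assumptions: PyTorch,
# a ResNet-18 backbone standing in for the paper's transformer, a margin-based
# contrastive loss; all hyperparameters are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class PetEmbedder(nn.Module):
    """Maps a pet image to a unit-length embedding vector."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        backbone = models.resnet18(weights=None)  # swap in a transformer encoder as in the paper
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.backbone(x), dim=-1)


def contrastive_loss(z1, z2, same_pet, margin: float = 0.5):
    """Pull embeddings of the same pet together, push different pets apart."""
    dist = 1.0 - (z1 * z2).sum(dim=-1)           # cosine distance (embeddings are normalized)
    pos = same_pet * dist.pow(2)
    neg = (1 - same_pet) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()


def find_matches(query_emb, db_embs, threshold: float = 0.8):
    """Return indices of stored embeddings whose similarity exceeds the threshold."""
    sims = db_embs @ query_emb                   # cosine similarity against the database
    return (sims >= threshold).nonzero(as_tuple=True)[0]


if __name__ == "__main__":
    model = PetEmbedder()
    imgs_a = torch.randn(4, 3, 224, 224)         # toy batch standing in for real photos
    imgs_b = torch.randn(4, 3, 224, 224)
    labels = torch.tensor([1.0, 0.0, 1.0, 0.0])  # 1 = same pet, 0 = different pets
    loss = contrastive_loss(model(imgs_a), model(imgs_b), labels)
    print(float(loss))
```

In a setup like this, the database lookup is a simple cosine-similarity search over stored embeddings, which is the kind of check a web application could run each time a new found-pet photo is uploaded.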
Related papers
- IgCONDA-PET: Implicitly-Guided Counterfactual Diffusion for Detecting Anomalies in PET Images [0.840320502420283]
Minimizing the need for pixel-level annotated data for training PET anomaly segmentation networks is crucial.
We present a weakly supervised and Implicitly guided COuNterfactual diffusion model for Detecting Anomalies in PET images.
arXiv Detail & Related papers (2024-04-30T23:09:54Z)
- Cattle Identification Using Muzzle Images and Deep Learning Techniques [0.0]
This project explores cattle identification using 4923 muzzle images collected from 268 beef cattle.
In the reported experiments, a maximum accuracy of 99.5% is achieved using the wide ResNet50 model.
arXiv Detail & Related papers (2023-11-14T13:25:41Z)
- Prior-Aware Synthetic Data to the Rescue: Animal Pose Estimation with Very Limited Real Data [18.06492246414256]
We present a data-efficient strategy for pose estimation in quadrupeds that requires only a small number of real images of the target animal.
It is confirmed that fine-tuning a backbone network with pretrained weights on generic image datasets such as ImageNet can mitigate the high demand for target animal pose data.
We introduce a prior-aware synthetic animal data generation pipeline called PASyn to augment the animal pose data essential for robust pose estimation.
arXiv Detail & Related papers (2022-08-30T01:17:50Z)
- Portuguese Man-of-War Image Classification with Convolutional Neural Networks [58.720142291102135]
Portuguese man-of-war (PMW) is a gelatinous organism with long tentacles capable of causing severe burns.
This paper reports on the use of convolutional neural networks for recognizing PMW images collected from the Instagram social network.
arXiv Detail & Related papers (2022-07-04T03:06:45Z)
- CLAMP: Prompt-based Contrastive Learning for Connecting Language and Animal Pose [70.59906971581192]
We introduce a novel prompt-based Contrastive learning scheme for connecting Language and AniMal Pose effectively.
CLAMP attempts to bridge the gap between language and animal pose by adapting the text prompts to the animal keypoints during network training.
Experimental results show that our method achieves state-of-the-art performance under the supervised, few-shot, and zero-shot settings.
arXiv Detail & Related papers (2022-06-23T14:51:42Z)
- A Competitive Method for Dog Nose-print Re-identification [46.94755073943372]
This paper presents our methods for the dog nose-print authentication (Re-ID) task in the CVPR 2022 pet biometric challenge.
By ensembling multiple models, our method achieves 86.67% AUC on the test set.
arXiv Detail & Related papers (2022-05-31T16:26:46Z)
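The ensembling mentioned in the nose-print Re-ID entry above can be illustrated with a small score-averaging example. This is a generic sketch, not the challenge entry's actual models or fusion rule; the pairs, scores, and labels below are made up.

```python
# Generic score-level ensembling sketch for a verification (Re-ID) task.
# Assumption: each model outputs a similarity score per image pair; the
# scores and labels here are toy values, not the challenge data.
import numpy as np
from sklearn.metrics import roc_auc_score


def ensemble_scores(per_model_scores):
    """Average the per-pair similarity scores of several models."""
    return np.mean(np.stack(per_model_scores, axis=0), axis=0)


labels = np.array([1, 0, 1, 1, 0])               # 1 = same dog, 0 = different dogs
model_a = np.array([0.9, 0.2, 0.7, 0.8, 0.4])    # scores from model A
model_b = np.array([0.8, 0.3, 0.6, 0.9, 0.1])    # scores from model B
fused = ensemble_scores([model_a, model_b])
print("Ensemble AUC:", roc_auc_score(labels, fused))
```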
- BARC: Learning to Regress 3D Dog Shape from Images by Exploiting Breed Information [66.77206206569802]
Our goal is to recover the 3D shape and pose of dogs from a single image.
Recent work has proposed to directly regress the SMAL animal model, with additional limb scale parameters, from images.
Our method, called BARC (Breed-Augmented Regression using Classification), goes beyond prior work in several important ways.
This work shows that a-priori information about genetic similarity can help to compensate for the lack of 3D training data.
arXiv Detail & Related papers (2022-03-29T13:16:06Z)
- Persistent Animal Identification Leveraging Non-Visual Markers [71.14999745312626]
We aim to locate and provide a unique identifier for each mouse in a cluttered home-cage environment through time.
This is a very challenging problem due to (i) the lack of distinguishing visual features for each mouse, and (ii) the close confines of the scene with constant occlusion.
Our approach achieves 77% accuracy on this animal identification problem, and is able to reject spurious detections when the animals are hidden.
arXiv Detail & Related papers (2021-12-13T17:11:32Z)
- SyDog: A Synthetic Dog Dataset for Improved 2D Pose Estimation [3.411873646414169]
SyDog is a synthetic dataset of dogs containing ground truth pose and bounding box coordinates.
We demonstrate that pose estimation models trained on SyDog achieve better performance than models trained purely on real data.
arXiv Detail & Related papers (2021-07-31T14:34:40Z)
- Deep Traffic Sign Detection and Recognition Without Target Domain Real Images [52.079665469286496]
We propose a novel database generation method that requires no real images from the target domain, only templates of the traffic signs.
The method does not aim to outperform training with real data, but to serve as a compatible alternative when real data is not available.
On large data sets, training with a fully synthetic data set almost matches the performance of training with a real one.
arXiv Detail & Related papers (2020-07-30T21:06:47Z)
- Identifying Individual Dogs in Social Media Images [1.14219428942199]
The work described here is part of a joint project with Pet2Net, a social network focused on pets and their owners.
To detect and recognize individual dogs, we combine transfer learning and object detection approaches based on the Inception v3 and SSD Inception v2 architectures.
We show that this approach achieves 94.59% accuracy in identifying individual dogs.
arXiv Detail & Related papers (2020-03-14T21:11:02Z)
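The transfer-learning component of the Pet2Net entry above can be sketched roughly as follows, assuming torchvision's Inception v3 as the classification backbone and one output class per individual dog. The detection stage (SSD Inception v2) and the paper's actual training details are not shown; the class count and input sizes are illustrative.

```python
# Rough transfer-learning sketch: fine-tune a pretrained Inception v3 head
# for individual-dog classification. Hyperparameters and class count are
# illustrative; the detection stage is omitted.
import torch
import torch.nn as nn
from torchvision import models


def build_dog_classifier(num_dogs: int) -> nn.Module:
    model = models.inception_v3(weights=models.Inception_V3_Weights.DEFAULT)
    for p in model.parameters():       # freeze the pretrained backbone
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, num_dogs)  # new, trainable head
    model.aux_logits = False           # simplify the forward pass for fine-tuning
    model.AuxLogits = None
    return model


if __name__ == "__main__":
    clf = build_dog_classifier(num_dogs=10)
    clf.eval()
    with torch.no_grad():
        logits = clf(torch.randn(2, 3, 299, 299))  # Inception v3 expects 299x299 inputs
    print(logits.shape)                            # torch.Size([2, 10])
```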
This list is automatically generated from the titles and abstracts of the papers on this site.