A model-agnostic active learning approach for animal detection from camera traps
- URL: http://arxiv.org/abs/2507.06537v1
- Date: Wed, 09 Jul 2025 04:36:59 GMT
- Title: A model-agnostic active learning approach for animal detection from camera traps
- Authors: Thi Thu Thuy Nguyen, Duc Thanh Nguyen
- Abstract summary: We propose a model-agnostic active learning approach for detection of animals captured by camera traps. Our approach integrates uncertainty and diversity quantities of samples at both the object-based and image-based levels into the active learning sample selection process.
- Score: 6.521571185874872
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Smart data selection is becoming increasingly important in data-driven machine learning. Active learning offers a promising solution by allowing machine learning models to be trained effectively on optimal data, i.e., the most informative samples from large datasets. Wildlife data captured by camera traps are excessive in volume, requiring tremendous effort in data labelling and in training animal detection models. Applying active learning to optimise the amount of labelled data would therefore be a great aid in enabling automated wildlife monitoring and conservation. However, existing active learning techniques require that a machine learning model (i.e., an object detector) be fully accessible, limiting their applicability. In this paper, we propose a model-agnostic active learning approach for detection of animals captured by camera traps. Our approach integrates uncertainty and diversity quantities of samples at both the object-based and image-based levels into the active learning sample selection process. We validate our approach on a benchmark animal dataset. Experimental results demonstrate that, using only 30% of the training data selected by our approach, a state-of-the-art animal detector can achieve performance equal to or greater than that obtained with the complete training dataset.
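The abstract describes the selection criterion only at a high level (uncertainty plus diversity, combined at the object and image level), so the snippet below is a minimal, illustrative Python sketch rather than the authors' actual algorithm. Everything here is an assumption: the entropy-based object uncertainty, the cosine-distance diversity term, the `alpha` trade-off, and the `select_batch`/`embedding` names; only raw detector outputs are consumed, reflecting the model-agnostic setting.

```python
import numpy as np

def object_uncertainty(confidences):
    """Object-level uncertainty: binary entropy of detector confidence scores.
    Scores near 0.5 are the most uncertain."""
    p = np.clip(np.asarray(confidences, dtype=float), 1e-6, 1 - 1e-6)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def image_uncertainty(detections):
    """Image-level uncertainty: mean entropy over the objects detected in the
    image (the paper's actual aggregation may differ)."""
    scores = detections["scores"]
    return float(object_uncertainty(scores).mean()) if len(scores) else 0.0

def select_batch(pool, budget, alpha=0.5):
    """Greedily pick `budget` images, mixing uncertainty with diversity.

    pool   : list of dicts {"scores": [...], "embedding": np.ndarray}, i.e. the
             outputs of any black-box detector plus an image feature vector
             (model-agnostic: no access to weights or gradients is needed).
    alpha  : illustrative trade-off between the two terms (not normalised here;
             a real system would put both on comparable scales).
    """
    unc = np.array([image_uncertainty(d) for d in pool])
    emb = np.stack([d["embedding"] for d in pool])
    emb = emb / (np.linalg.norm(emb, axis=1, keepdims=True) + 1e-12)

    selected = []
    min_dist = np.full(len(pool), np.inf)  # distance to the closest selected image
    for _ in range(min(budget, len(pool))):
        diversity = np.where(np.isinf(min_dist), 1.0, min_dist)
        score = alpha * unc + (1 - alpha) * diversity
        score[selected] = -np.inf                   # never pick the same image twice
        idx = int(np.argmax(score))
        selected.append(idx)
        min_dist = np.minimum(min_dist, 1.0 - emb @ emb[idx])  # cosine-distance update
    return selected
```

Because the selector consumes only detection scores and image-level features, it can wrap any off-the-shelf animal detector whose internals are not exposed, which is the motivation for a model-agnostic formulation.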
Related papers
- Benchmarking pig detection and tracking under diverse and challenging conditions [1.865175170209582]
We curated two datasets: PigDetect for object detection and PigTrack for multi-object tracking.
For object detection, we show that challenging training images improve detection beyond what is achievable with randomly sampled images alone.
For multi-object tracking, we observed that SORT-based methods achieve superior detection performance compared to end-to-end trainable models.
arXiv Detail & Related papers (2025-07-22T14:36:51Z) - Oriented Tiny Object Detection: A Dataset, Benchmark, and Dynamic Unbiased Learning [51.170479006249195]
We introduce a new dataset, benchmark, and a dynamic coarse-to-fine learning scheme in this study.
Our proposed dataset, AI-TOD-R, features the smallest object sizes among all oriented object detection datasets.
We present a benchmark spanning a broad range of detection paradigms, including both fully-supervised and label-efficient approaches.
arXiv Detail & Related papers (2024-12-16T09:14:32Z) - Improved detection of discarded fish species through BoxAL active learning [0.2544632696242629]
In this study, we present an active learning technique, named BoxAL, which includes estimation of epistemic certainty of the Faster R-CNN object-detection model.
The method allows selecting the most uncertain training images from an unlabeled pool, which are then used to train the object-detection model.
Our study additionally showed that the sampled new data is more valuable for training than the remaining unlabeled data.
arXiv Detail & Related papers (2024-10-07T10:01:30Z) - Rethinking Transformers Pre-training for Multi-Spectral Satellite Imagery [78.43828998065071]
Recent advances in unsupervised learning have demonstrated the ability of large vision models to achieve promising results on downstream tasks.
Such pre-training techniques have also been explored recently in the remote sensing domain due to the availability of large amounts of unlabelled data.
In this paper, we revisit transformers pre-training and leverage multi-scale information that is effectively utilized with multiple modalities.
arXiv Detail & Related papers (2024-03-08T16:18:04Z) - ALP: Action-Aware Embodied Learning for Perception [60.64801970249279]
We introduce Action-Aware Embodied Learning for Perception (ALP).
ALP incorporates action information into representation learning through a combination of optimizing a reinforcement learning policy and an inverse dynamics prediction objective.
We show that ALP outperforms existing baselines in several downstream perception tasks.
arXiv Detail & Related papers (2023-06-16T21:51:04Z) - Automated wildlife image classification: An active learning tool for ecological applications [0.44970015278813025]
Wildlife camera trap images are being used extensively to investigate animal abundance, habitat associations, and behavior.
Artificial intelligence systems can take over this task but usually need a large number of already-labeled training images to achieve sufficient performance.
We propose a label-efficient learning strategy that enables researchers with small or medium-sized image databases to leverage the potential of modern machine learning.
arXiv Detail & Related papers (2023-03-28T08:51:15Z) - ALBench: A Framework for Evaluating Active Learning in Object Detection [102.81795062493536]
This paper contributes an active learning benchmark framework named ALBench for evaluating active learning in object detection.
Developed on an automatic deep model training system, the ALBench framework is easy to use, compatible with different active learning algorithms, and ensures the same training and testing protocols.
arXiv Detail & Related papers (2022-07-27T07:46:23Z) - Zoo-Tuning: Adaptive Transfer from a Zoo of Models [82.9120546160422]
Zoo-Tuning learns to adaptively transfer the parameters of pretrained models to the target task.
We evaluate our approach on a variety of tasks, including reinforcement learning, image classification, and facial landmark detection.
arXiv Detail & Related papers (2021-06-29T14:09:45Z) - Deep learning with self-supervision and uncertainty regularization to count fish in underwater images [28.261323753321328]
Effective conservation actions require effective population monitoring.
Monitoring populations through image sampling has made data collection cheaper, wide-reaching and less intrusive.
Counting animals from such data is challenging, particularly when densely packed in noisy images.
Deep learning is the state-of-the-art method for many computer vision tasks, but it has yet to be properly explored for counting animals.
arXiv Detail & Related papers (2021-04-30T13:02:19Z) - Few-Cost Salient Object Detection with Adversarial-Paced Learning [95.0220555274653]
This paper proposes to learn an effective salient object detection model from manual annotations on only a few training images.
We name this task few-cost salient object detection and propose an adversarial-paced learning (APL)-based framework to facilitate the few-cost learning scenario.
arXiv Detail & Related papers (2021-04-05T14:15:49Z) - An Efficient Data Imputation Technique for Human Activity Recognition [3.0117625632585705]
We propose a methodology for extrapolating missing samples of a dataset to better recognize human daily living activities.
The proposed method efficiently pre-processes the captured data and applies the k-Nearest Neighbors (KNN) imputation technique (a minimal illustration appears after this list).
The proposed methodology extrapolated activity patterns similar to those in the real dataset.
arXiv Detail & Related papers (2020-07-08T22:05:38Z)
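As a rough illustration of the KNN imputation step referenced in the last entry above, here is a generic scikit-learn sketch; the toy feature matrix, the value of `n_neighbors`, and the feature layout are assumptions, not the authors' pre-processing pipeline.

```python
import numpy as np
from sklearn.impute import KNNImputer  # scikit-learn's built-in KNN imputation

# Toy sensor feature matrix with missing samples marked as NaN.
# Shapes, values, and n_neighbors are illustrative only.
X = np.array([
    [0.9, 1.2, np.nan],
    [1.0, np.nan, 0.4],
    [0.8, 1.1, 0.5],
    [np.nan, 1.3, 0.6],
])

# Each missing value is filled from the 2 nearest rows, with distances
# computed on the features that both rows actually observed.
X_filled = KNNImputer(n_neighbors=2).fit_transform(X)
print(X_filled)
```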
This list was automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its contents (including all information) and is not responsible for any consequences.