Policy-Driven Transfer Learning in Resource-Limited Animal Monitoring
- URL: http://arxiv.org/abs/2509.10995v1
- Date: Sat, 13 Sep 2025 22:26:51 GMT
- Title: Policy-Driven Transfer Learning in Resource-Limited Animal Monitoring
- Authors: Nisha Pillai, Aditi Virupakshaiah, Harrison W. Smith, Amanda J. Ashworth, Prasanna Gowda, Phillip R. Owens, Adam R. Rivers, Bindu Nanduri, Mahalingam Ramkumar
- Abstract summary: Animal health monitoring and population management are critical aspects of wildlife conservation and livestock management. Our framework achieves a higher detection rate while requiring significantly less computational time compared to traditional methods.
- Score: 0.6109833303919141
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Animal health monitoring and population management are critical aspects of wildlife conservation and livestock management that increasingly rely on automated detection and tracking systems. While Unmanned Aerial Vehicle (UAV) based systems combined with computer vision offer promising solutions for non-invasive animal monitoring across challenging terrains, limited availability of labeled training data remains an obstacle in developing effective deep learning (DL) models for these applications. Transfer learning has emerged as a potential solution, allowing models trained on large datasets to be adapted for resource-limited scenarios such as those with limited data. However, the vast landscape of pre-trained neural network architectures makes it challenging to select optimal models, particularly for researchers new to the field. In this paper, we propose a reinforcement learning (RL)-based transfer learning framework that employs an upper confidence bound (UCB) algorithm to automatically select the most suitable pre-trained model for animal detection tasks. Our approach systematically evaluates and ranks candidate models based on their performance, streamlining the model selection process. Experimental results demonstrate that our framework achieves a higher detection rate while requiring significantly less computational time compared to traditional methods.
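The abstract frames pre-trained model selection as a bandit problem solved with an upper confidence bound (UCB) rule. The paper's exact reward design and UCB variant are not given here, so the following is a minimal sketch of a generic UCB1 loop under stated assumptions: each arm is a candidate pre-trained backbone, and `evaluate` is a hypothetical callback returning a reward in [0, 1] (e.g. validation detection accuracy after a short fine-tuning probe). The model names and accuracy values are illustrative, not from the paper.

```python
import math
import random

def ucb1_model_selection(models, evaluate, rounds=50, c=2.0):
    """Rank candidate pre-trained models with the UCB1 bandit rule.

    models:   list of arm identifiers (e.g. architecture names)
    evaluate: callable(model) -> reward in [0, 1]
    """
    counts = {m: 0 for m in models}    # times each model was evaluated
    totals = {m: 0.0 for m in models}  # cumulative reward per model

    # Play each arm once so every empirical mean is defined.
    for m in models:
        totals[m] += evaluate(m)
        counts[m] = 1

    for t in range(len(models) + 1, rounds + 1):
        # Pick the arm maximising mean reward plus exploration bonus.
        m = max(models, key=lambda a: totals[a] / counts[a]
                + math.sqrt(c * math.log(t) / counts[a]))
        totals[m] += evaluate(m)
        counts[m] += 1

    # Final ranking by empirical mean reward.
    return sorted(models, key=lambda a: totals[a] / counts[a], reverse=True)

# Toy usage with noisy accuracies for three hypothetical backbones.
random.seed(0)
true_acc = {"resnet50": 0.62, "mobilenet_v3": 0.55, "efficientnet_b0": 0.70}
noisy = lambda m: min(1.0, max(0.0, random.gauss(true_acc[m], 0.05)))
ranking = ucb1_model_selection(list(true_acc), noisy, rounds=60)
print(ranking)
```

Because UCB concentrates evaluations on promising arms, weaker backbones receive only a few fine-tuning probes, which matches the paper's claim of reduced computational time relative to exhaustively evaluating every candidate.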
Related papers
- Active Membership Inference Test (aMINT): Enhancing Model Auditability with Multi-Task Learning [18.552238031865286]
Active Membership Inference Test (aMINT) is a method designed to detect whether given data were used during the training of machine learning models. We propose a novel multitask learning process that involves training two models simultaneously. We present results using a wide range of neural networks, from lighter architectures such as MobileNet to more complex ones such as Vision Transformers.
arXiv Detail & Related papers (2025-09-09T16:00:03Z)
Benchmarking pig detection and tracking under diverse and challenging conditions [1.865175170209582]
We curated two datasets: PigDetect for object detection and PigTrack for multi-object tracking. For object detection, we show that challenging training images improve detection beyond what is achievable with randomly sampled images alone. For multi-object tracking, we observed that SORT-based methods achieve superior detection performance compared to end-to-end trainable models.
arXiv Detail & Related papers (2025-07-22T14:36:51Z)
A model-agnostic active learning approach for animal detection from camera traps [6.521571185874872]
We propose a model-agnostic active learning approach for the detection of animals captured by camera traps. Our approach integrates uncertainty and diversity quantities of samples at both the object-based and image-based levels into the active learning sample selection process.
arXiv Detail & Related papers (2025-07-09T04:36:59Z)
Diffusion-Based Neural Network Weights Generation [80.89706112736353]
D2NWG is a diffusion-based neural network weights generation technique that efficiently produces high-performing weights for transfer learning.
Our method extends generative hyper-representation learning to recast the latent diffusion paradigm for neural network weights generation.
Our approach is scalable to large architectures such as large language models (LLMs), overcoming the limitations of current parameter generation techniques.
arXiv Detail & Related papers (2024-02-28T08:34:23Z) - Multi-Agent Probabilistic Ensembles with Trajectory Sampling for Connected Autonomous Vehicles [12.71628954436973]
We propose a decentralized Multi-Agent Probabilistic Ensembles with Trajectory Sampling (MA-PETS) algorithm.
In particular, in order to better capture the uncertainty of the unknown environment, MA-PETS leverages Probabilistic Ensemble neural networks.
We empirically demonstrate the superiority of MA-PETS in terms of sample efficiency compared to MFBL.
arXiv Detail & Related papers (2023-12-21T14:55:21Z)
Battle of the Backbones: A Large-Scale Comparison of Pretrained Models across Computer Vision Tasks [139.3768582233067]
Battle of the Backbones (BoB) is a benchmarking tool for neural network based computer vision systems.
We find that vision transformers (ViTs) and self-supervised learning (SSL) are increasingly popular.
In apples-to-apples comparisons on the same architectures and similarly sized pretraining datasets, we find that SSL backbones are highly competitive.
arXiv Detail & Related papers (2023-10-30T18:23:58Z)
PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [65.57123249246358]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT. On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt. On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z)
Automated wildlife image classification: An active learning tool for ecological applications [0.44970015278813025]
Wildlife camera trap images are being used extensively to investigate animal abundance, habitat associations, and behavior.
Artificial intelligence systems can take over this task but usually need a large number of already-labeled training images to achieve sufficient performance.
We propose a label-efficient learning strategy that enables researchers with small or medium-sized image databases to leverage the potential of modern machine learning.
arXiv Detail & Related papers (2023-03-28T08:51:15Z)
Mastering the Unsupervised Reinforcement Learning Benchmark from Pixels [112.63440666617494]
Reinforcement learning algorithms can succeed but require large amounts of interaction between the agent and the environment. We propose a new method that uses unsupervised model-based RL to pre-train the agent. We show robust performance on the Real-World RL benchmark, hinting at resilience to environment perturbations during adaptation.
arXiv Detail & Related papers (2022-09-24T14:22:29Z)
Multitask Adaptation by Retrospective Exploration with Learned World Models [77.34726150561087]
We propose a meta-learned addressing model called RAMa that provides training samples for the MBRL agent taken from task-agnostic storage.
The model is trained to maximize the agent's expected performance by selecting promising trajectories that solve prior tasks from the storage.
arXiv Detail & Related papers (2021-10-25T20:02:57Z)
Zoo-Tuning: Adaptive Transfer from a Zoo of Models [82.9120546160422]
Zoo-Tuning learns to adaptively transfer the parameters of pretrained models to the target task.
We evaluate our approach on a variety of tasks, including reinforcement learning, image classification, and facial landmark detection.
arXiv Detail & Related papers (2021-06-29T14:09:45Z)
Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.