Detection of Animal Movement from Weather Radar using Self-Supervised Learning
- URL: http://arxiv.org/abs/2408.04424v1
- Date: Thu, 8 Aug 2024 12:48:54 GMT
- Title: Detection of Animal Movement from Weather Radar using Self-Supervised Learning
- Authors: Mubin Ul Haque, Joel Janek Dabrowski, Rebecca M. Rogers, Hazel Parry
- Abstract summary: We propose a self-supervised learning method for detecting animal movement.
In our proposed method, we pre-train our model on a large dataset with noisy labels produced by a threshold approach.
Our experiments on Australian weather radar data for waterbird segmentation show that the proposed method outperforms the current state-of-the-art approach by 43.53% in the Dice coefficient statistic.
- Score: 0.14999444543328289
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Detecting flying animals (e.g., birds, bats, and insects) using weather radar helps gain insights into animal movement and migration patterns, aids in management efforts (such as biosecurity), and enhances our understanding of the ecosystem. The conventional approach to detecting animals in weather radar involves thresholding: defining and applying thresholds for the radar variables, based on expert opinion. More recently, Deep Learning approaches have been shown to provide improved performance in detection. However, obtaining sufficient labelled weather radar data for flying animals to build learning-based models is time-consuming and labor-intensive. To address the challenge of data labelling, we propose a self-supervised learning method for detecting animal movement. In our proposed method, we pre-train our model on a large dataset with noisy labels produced by a threshold approach. The key advantage is that the pre-trained dataset size is limited only by the number of radar images available. We then fine-tune the model on a small human-labelled dataset. Our experiments on Australian weather radar data for waterbird segmentation show that the proposed method outperforms the current state-of-the-art approach by 43.53% in the Dice coefficient statistic.
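To make the two-stage recipe concrete, here is a minimal PyTorch sketch of pre-training on threshold-derived noisy masks followed by fine-tuning on human labels. Everything named here (the threshold_labels rule, the loaders, the hyper-parameters, and the unspecified segmentation model) is an illustrative assumption, not the paper's actual configuration; only the pipeline shape and the Dice statistic come from the abstract.

```python
import torch
import torch.nn as nn

def threshold_labels(radar_var: torch.Tensor, cutoff: float = 0.5) -> torch.Tensor:
    # Noisy pseudo-masks from a simple cut-off on one radar variable.
    # Stand-in for the expert-defined thresholding the paper describes;
    # the variable and the cut-off value are placeholders.
    return (radar_var > cutoff).float()

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Dice = 2|A ∩ B| / (|A| + |B|); the statistic the paper reports.
    inter = (pred * target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def train(model: nn.Module, loader, epochs: int, lr: float, device: str = "cpu") -> nn.Module:
    # One generic supervised segmentation loop, reused for both stages.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    model.to(device).train()
    for _ in range(epochs):
        for images, masks in loader:
            images, masks = images.to(device), masks.to(device)
            opt.zero_grad()
            loss_fn(model(images), masks).backward()
            opt.step()
    return model

# Stage 1: pre-train on a large archive of radar images paired with
# threshold-derived noisy masks (size limited only by the radar archive):
#   model = train(model, noisy_loader, epochs=50, lr=1e-3)
# Stage 2: fine-tune on the small human-labelled set, typically with a
# smaller learning rate so pre-trained features are adjusted, not erased:
#   model = train(model, labelled_loader, epochs=20, lr=1e-4)
```

At evaluation time, dice_coefficient would be computed on binarised model outputs (e.g. `(model(images).sigmoid() > 0.5).float()`) against the human-labelled masks.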
Related papers
- Multi-stage Learning for Radar Pulse Activity Segmentation [51.781832424705094]
Radio signal recognition is a crucial function in electronic warfare.
Precise identification and localisation of radar pulse activities are required by electronic warfare systems.
Deep learning-based radar pulse activity recognition methods have remained largely underexplored.
arXiv Detail & Related papers (2023-12-15T01:56:27Z)
- RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model [13.214257841152033]
Radar-centric data sets receive little attention in the development of deep learning techniques for radar perception.
We propose a transformer-based model, named RadarFormer, that utilizes state-of-the-art developments in vision deep learning.
Our model also introduces a channel-chirp-time merging module that reduces the size and complexity of our models by more than 10 times without compromising accuracy.
arXiv Detail & Related papers (2023-04-17T17:07:35Z)
- TempNet: Temporal Attention Towards the Detection of Animal Behaviour in Videos [63.85815474157357]
We propose an efficient computer vision- and deep learning-based method for the detection of biological behaviours in videos.
TempNet uses an encoder bridge and residual blocks to maintain model performance with a two-staged encoder: spatial, then temporal.
We demonstrate its application to the detection of sablefish (Anoplopoma fimbria) startle events.
arXiv Detail & Related papers (2022-11-17T23:55:12Z)
- Physics-informed inference of aerial animal movements from weather radar data [0.0]
We tackle the problem of reconstructing movement patterns from available radar data.
A convolutional decoder maps the inferred latent system states back to the physical space in which the known radar observation model can be applied.
Experiments on synthetic radar data show promising results in terms of reconstruction quality and data-efficiency.
arXiv Detail & Related papers (2022-11-08T20:20:52Z)
- Contrastive Learning for Unsupervised Radar Place Recognition [31.04172735067443]
We learn, in an unsupervised way, an embedding from sequences of radar images that is suitable for solving the place recognition problem with complex radar data.
We experiment across two prominent urban radar datasets totalling over 400 km of driving and show that we achieve a new radar place recognition state-of-the-art; see the contrastive-objective sketch after this entry.
arXiv Detail & Related papers (2021-10-06T13:34:09Z)
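The entry above learns radar embeddings without labels; a common way to do that is a contrastive (InfoNCE-style) objective, sketched below under stated assumptions: the encoder, the pairing of scans from nearby places, and the temperature are generic choices, not necessarily the exact loss this paper uses.

```python
import torch
import torch.nn.functional as F

def info_nce(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    # z_a[i] and z_b[i] are embeddings of two radar scans assumed to come
    # from (nearly) the same place; every other pairing acts as a negative.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature                     # cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)   # positives on the diagonal
    return F.cross_entropy(logits, targets)

# An encoder maps each radar image (B, 1, H, W) to an embedding (B, D);
# at query time, place recognition is nearest-neighbour search in that space.
```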
- R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z)
- Multi-View Radar Semantic Segmentation [3.2093811507874768]
Automotive radars are low-cost active sensors that measure properties of surrounding objects.
They are seldom used for scene understanding due to the size and complexity of radar raw data.
We propose several novel architectures, and their associated losses, which analyse multiple "views" of the range-angle-Doppler radar tensor to segment it semantically (see the sketch of such views after this entry).
arXiv Detail & Related papers (2021-03-30T09:56:41Z)
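As a rough illustration of what the "views" in the entry above might be, the sketch below collapses one axis of a range-angle-Doppler (RAD) power tensor at a time; summation is just one plausible aggregation, and the shapes and preprocessing are assumptions rather than the paper's exact pipeline.

```python
import torch

def rad_views(rad: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
    # rad: (range_bins, angle_bins, doppler_bins) power tensor.
    range_angle   = rad.sum(dim=2)  # integrate over Doppler -> (R, A)
    range_doppler = rad.sum(dim=1)  # integrate over angle   -> (R, D)
    angle_doppler = rad.sum(dim=0)  # integrate over range   -> (A, D)
    return range_angle, range_doppler, angle_doppler

# Each 2-D view can feed its own encoder branch; the branches are then
# fused before segmentation heads predict per-view semantic masks.
```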
- LiRaNet: End-to-End Trajectory Prediction using Spatio-Temporal Radar Fusion [52.59664614744447]
We present LiRaNet, a novel end-to-end trajectory prediction method which utilizes radar sensor information along with widely used lidar and high definition (HD) maps.
Automotive radar provides rich, complementary information, allowing for longer-range vehicle detection as well as instantaneous velocity measurements.
arXiv Detail & Related papers (2020-10-02T00:13:00Z)
- Depth Estimation from Monocular Images and Sparse Radar Data [93.70524512061318]
In this paper, we explore the possibility of achieving a more accurate depth estimation by fusing monocular images and Radar points using a deep neural network.
We find that the noise in Radar measurements is one of the key reasons that prevents existing fusion methods from being applied directly.
The experiments are conducted on the nuScenes dataset, one of the first datasets to feature Camera, Radar, and LiDAR recordings in diverse scenes and weather conditions.
arXiv Detail & Related papers (2020-09-30T19:01:33Z)
- RadarNet: Exploiting Radar for Robust Perception of Dynamic Objects [73.80316195652493]
We tackle the problem of exploiting Radar for perception in the context of self-driving cars.
We propose a new solution that exploits both LiDAR and Radar sensors for perception.
Our approach, dubbed RadarNet, features a voxel-based early fusion and an attention-based late fusion.
arXiv Detail & Related papers (2020-07-28T17:15:02Z)
- Detecting and Tracking Communal Bird Roosts in Weather Radar Data [31.330559694218564]
This paper describes a machine learning system to detect and track roost signatures in weather radar data.
The system detects previously unknown roosting locations and provides comprehensive spatio-temporal data about roosts across the US.
arXiv Detail & Related papers (2020-04-24T02:40:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.