Machine Learning Techniques to Detect and Characterise Whistler Radio
Waves
- URL: http://arxiv.org/abs/2002.01244v1
- Date: Tue, 4 Feb 2020 12:05:44 GMT
- Title: Machine Learning Techniques to Detect and Characterise Whistler Radio
Waves
- Authors: Othniel J.E.Y. Konan, Amit Kumar Mishra, Stefan Lotz
- Abstract summary: VLF antenna receivers can be used to detect whistler waves generated by lightning strokes.
The identification and characterisation of whistlers are important tasks to monitor the plasmasphere in real-time.
The aim of this work is to develop a machine learning-based model capable of automatically detecting whistlers in the data provided by the VLF receivers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Lightning strokes create powerful electromagnetic pulses that routinely cause
very low frequency (VLF) waves to propagate across hemispheres along
geomagnetic field lines. VLF antenna receivers can be used to detect the
whistler waves generated by these lightning strokes. The particular
time/frequency dependence of the received whistler wave enables the estimation
of electron density in the plasmasphere region of the magnetosphere. Therefore,
the identification and characterisation of whistlers are important tasks for
monitoring the plasmasphere in real time and for building large databases of
events to be used for statistical studies. The current state of the art in
detecting whistlers is the Automatic Whistler Detection (AWD) method developed by
Lichtenberger (2009). This method is based on image correlation in two dimensions
and requires significant computing hardware situated at the VLF receiver
antennas (e.g. in Antarctica). The aim of this work is to develop a machine
learning-based model capable of automatically detecting whistlers in the data
provided by the VLF receivers. The approach is to use a combination of image
classification and localisation on the spectrogram data generated by the VLF
receivers to identify and localise each whistler. The data at hand comprise
around 2300 events identified by AWD at SANAE and Marion, which are used as
training, validation, and testing data. Three detector designs are proposed: the
first uses a method similar to AWD, the second applies image classification to
regions of interest extracted from a spectrogram, and the last uses YOLO, the
current state of the art in object detection. These detectors achieve
misdetection and false alarm rates of less than 15% on Marion's dataset.
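As a rough illustration of the first, AWD-style detector described above, the sketch below builds a spectrogram from a VLF time series and cross-correlates it with an idealised whistler trace. The dispersion model, template length, sampling rate, and threshold are illustrative assumptions, not the authors' AWD implementation or their trained detectors.

```python
# Minimal sketch of a correlation-based whistler detector on a spectrogram.
# The whistler template uses a simplified dispersion model t(f) = D / sqrt(f);
# this is an assumption for illustration, not the paper's actual kernel.
import numpy as np
from scipy.signal import spectrogram, fftconvolve

def whistler_template(freqs, times, dispersion=45.0):
    """Binary time-frequency mask of an idealised whistler trace.

    Assumes an arrival delay t(f) = D / sqrt(f), with D in s*sqrt(Hz).
    """
    tmpl = np.zeros((freqs.size, times.size))
    for i, f in enumerate(freqs):
        if f <= 0:
            continue
        t_arr = dispersion / np.sqrt(f)            # delay of this frequency component
        if times[0] <= t_arr <= times[-1]:
            tmpl[i, np.argmin(np.abs(times - t_arr))] = 1.0
    return tmpl

def detect_whistlers(signal, fs, threshold=5.0):
    """Return candidate whistler onset times (seconds) in a VLF recording."""
    f, t, Sxx = spectrogram(signal, fs=fs, nperseg=1024, noverlap=768)
    Sxx = 10.0 * np.log10(Sxx + 1e-12)              # log-power spectrogram
    Sxx = (Sxx - Sxx.mean()) / (Sxx.std() + 1e-12)  # normalise

    dt = t[1] - t[0]
    tmpl = whistler_template(f, np.arange(0.0, 1.5, dt))   # ~1.5 s long template

    # 2D cross-correlation (flipped template => correlation), summed over frequency
    corr = fftconvolve(Sxx, tmpl[::-1, ::-1], mode="valid").ravel()
    score = (corr - corr.mean()) / (corr.std() + 1e-12)
    return t[np.flatnonzero(score > threshold)]

if __name__ == "__main__":
    fs = 20000                                      # assumed sampling rate (Hz)
    vlf = np.random.randn(10 * fs)                  # placeholder for real VLF data
    print(detect_whistlers(vlf, fs))
```

In the setting described above, the flagged regions would then be handed to an image classifier or a YOLO-style detector rather than thresholded directly.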
Related papers
- Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Learning [81.98675881423131]
This research addresses the challenge of developing a universal deepfake detector that can effectively identify unseen deepfake images.
Existing frequency-based paradigms have relied on frequency-level artifacts introduced during the up-sampling in GAN pipelines to detect forgeries.
We introduce a novel frequency-aware approach called FreqNet, centered around frequency domain learning, specifically designed to enhance the generalizability of deepfake detectors.
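As a loose illustration of the frequency-domain idea above, the sketch below maps an image into radially averaged FFT log-amplitudes that a downstream classifier could consume; the pooling scheme is an assumption for illustration and is not the FreqNet architecture.

```python
# Illustrative only: move detection features from pixel space to frequency space.
import numpy as np

def frequency_features(image: np.ndarray, bins: int = 16) -> np.ndarray:
    """Radially averaged log-amplitude spectrum of a grayscale image."""
    spec = np.fft.fftshift(np.fft.fft2(image))
    log_amp = np.log1p(np.abs(spec))
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)
    edges = np.linspace(0.0, r.max() + 1e-9, bins + 1)
    return np.array([log_amp[(r >= lo) & (r < hi)].mean()
                     for lo, hi in zip(edges[:-1], edges[1:])])

feats = frequency_features(np.random.rand(256, 256))
print(feats)   # up-sampling artifacts tend to inflate the high-frequency bins
```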
arXiv Detail & Related papers (2024-03-12T01:28:00Z)
- Novelty Detection on Radio Astronomy Data using Signatures [5.304803553490439]
We introduce SigNova, a new semi-supervised framework for detecting anomalies in streamed data.
We use the signature transform to extract a canonical collection of statistics from observational sequences.
Each feature vector is assigned a novelty score, calculated as the Mahalanobis distance to its nearest neighbor in an RFI-free training set.
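A minimal sketch of the novelty-scoring step described above, assuming the signature-transform features have already been extracted; the score is the Mahalanobis distance from each observed feature vector to its nearest neighbour in a clean (RFI-free) training set.

```python
# Sketch of nearest-neighbour Mahalanobis novelty scoring (feature extraction omitted).
import numpy as np
from scipy.spatial.distance import cdist

def novelty_scores(train_feats: np.ndarray, test_feats: np.ndarray) -> np.ndarray:
    """Mahalanobis distance from each test vector to its nearest training vector."""
    cov = np.cov(train_feats, rowvar=False)
    vi = np.linalg.pinv(cov)                       # inverse covariance, robust to singularity
    d = cdist(test_feats, train_feats, metric="mahalanobis", VI=vi)
    return d.min(axis=1)                           # distance to nearest clean neighbour

rng = np.random.default_rng(0)
clean = rng.normal(size=(500, 8))                  # placeholder "RFI-free" features
observed = rng.normal(size=(20, 8))
print(novelty_scores(clean, observed))
```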
arXiv Detail & Related papers (2024-02-22T14:13:44Z)
- Generation of Realistic Synthetic Raw Radar Data for Automated Driving Applications using Generative Adversarial Networks [0.0]
This work proposes a faster method for FMCW radar simulation capable of generating synthetic raw radar data using generative adversarial networks (GAN)
The code and pre-trained weights are open-source and available on GitHub.
Results have shown that the data is realistic in terms of coherent radar reflections of the motorcycle and background noise based on the comparison of chirps, the RA maps and the object detection results.
arXiv Detail & Related papers (2023-08-04T17:44:27Z)
- Disentangled Representation Learning for RF Fingerprint Extraction under Unknown Channel Statistics [77.13542705329328]
We propose a framework of disentangled representation learning(DRL) that first learns to factor the input signals into a device-relevant component and a device-irrelevant component via adversarial learning.
The implicit data augmentation in the proposed framework imposes a regularization on the RFF extractor to avoid the possible overfitting of device-irrelevant channel statistics.
Experiments validate that the proposed approach, referred to as DR-RFF, outperforms conventional methods in terms of generalizability to unknown complicated propagation environments.
arXiv Detail & Related papers (2022-08-04T15:46:48Z)
- Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise [51.76329821186873]
We produce a model that can classify six different hand gestures with a limited number of samples that generalizes well to a wider audience.
We appeal to a set of more elementary methods such as the use of random bounds on a signal, but desire to show the power these methods can carry in an online setting.
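A hedged sketch of that augmentation idea: each sample is replicated with zero-mean Gaussian noise of randomly drawn variance before a decision forest is trained. Array shapes, noise bounds, and classifier settings are assumptions, not the paper's configuration.

```python
# Illustrative Gaussian-noise augmentation of a small EMG dataset plus a decision forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def augment_with_random_noise(X, y, copies=5, max_std=0.05, rng=None):
    """Replicate each sample with zero-mean Gaussian noise of random variance."""
    if rng is None:
        rng = np.random.default_rng()
    X_aug, y_aug = [X], [y]
    for _ in range(copies):
        std = rng.uniform(0.0, max_std, size=(X.shape[0], 1))   # per-sample noise scale
        X_aug.append(X + rng.normal(0.0, 1.0, X.shape) * std)
        y_aug.append(y)
    return np.vstack(X_aug), np.concatenate(y_aug)

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 200))          # 60 windowed EMG feature vectors (placeholder)
y = rng.integers(0, 6, size=60)         # six hand-gesture classes
X_big, y_big = augment_with_random_noise(X, y, rng=rng)
clf = RandomForestClassifier(n_estimators=200).fit(X_big, y_big)
print(clf.score(X, y))
```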
arXiv Detail & Related papers (2022-06-29T23:22:18Z)
- Toward Data-Driven STAP Radar [23.333816677794115]
We characterize our data-driven approach to space-time adaptive processing (STAP) radar.
We generate a rich example dataset of received radar signals by randomly placing targets of variable strengths in a predetermined region.
For each data sample within this region, we generate heatmap tensors in range, azimuth, and elevation of the output power of a beamformer.
In an airborne scenario, the moving radar creates a sequence of these time-indexed image stacks, resembling a video.
arXiv Detail & Related papers (2022-01-26T02:28:13Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- On the Frequency Bias of Generative Models [61.60834513380388]
We analyze proposed measures against high-frequency artifacts in state-of-the-art GAN training.
We find that none of the existing approaches can fully resolve spectral artifacts yet.
Our results suggest that there is great potential in improving the discriminator.
arXiv Detail & Related papers (2021-11-03T18:12:11Z)
- Deep Learning Radio Frequency Signal Classification with Hybrid Images [0.0]
We focus on the different pre-processing steps that can be used on the input training data, and test the results on a fixed Deep Learning architecture.
We propose a hybrid image that takes advantage of both time and frequency domain information, and tackles the classification as a Computer Vision problem.
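A minimal sketch of the hybrid-image idea, assuming a complex baseband recording: a time-domain view and a log-power spectrogram of the same signal are stacked as channels of one image for a downstream CNN. Image size, windowing, and normalisation are assumptions.

```python
# Illustrative hybrid image: channel 0 = time-domain magnitude, channel 1 = spectrogram.
import numpy as np
from scipy.signal import spectrogram

def hybrid_image(iq: np.ndarray, fs: float, size: int = 64) -> np.ndarray:
    """Return a (2, size, size) image combining time- and frequency-domain views."""
    # Channel 0: reshape the signal magnitude into a square "waterfall"
    n = size * size
    mag = np.abs(iq[:n]) if iq.size >= n else np.pad(np.abs(iq), (0, n - iq.size))
    time_img = mag.reshape(size, size)

    # Channel 1: log-power spectrogram cropped/padded to size x size
    _, _, Sxx = spectrogram(iq, fs=fs, nperseg=2 * size, noverlap=size,
                            return_onesided=False)
    spec_img = np.log1p(np.abs(Sxx))[:size, :size]
    spec_img = np.pad(spec_img, ((0, size - spec_img.shape[0]),
                                 (0, size - spec_img.shape[1])))

    # Normalise each channel independently before stacking
    stack = [(c - c.min()) / (np.ptp(c) + 1e-12) for c in (time_img, spec_img)]
    return np.stack(stack)

rng = np.random.default_rng(0)
iq = np.exp(2j * np.pi * 0.1 * np.arange(8192)) + 0.1 * rng.standard_normal(8192)
print(hybrid_image(iq, fs=1.0).shape)              # (2, 64, 64)
```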
arXiv Detail & Related papers (2021-05-19T11:12:09Z)
- Self-Supervised Person Detection in 2D Range Data using a Calibrated Camera [83.31666463259849]
We propose a method to automatically generate training labels (called pseudo-labels) for 2D LiDAR-based person detectors.
We show that self-supervised detectors, trained or fine-tuned with pseudo-labels, outperform detectors trained using manual annotations.
Our method is an effective way to improve person detectors during deployment without any additional labeling effort.
arXiv Detail & Related papers (2020-12-16T12:10:04Z)