Adaptive Hoeffding Tree with Transfer Learning for Streaming Synchrophasor Data Sets
- URL: http://arxiv.org/abs/2501.16354v1
- Date: Sun, 19 Jan 2025 21:10:01 GMT
- Title: Adaptive Hoeffding Tree with Transfer Learning for Streaming Synchrophasor Data Sets
- Authors: Zakaria El Mrabet, Daisy Flora Selvaraj, Prakash Ranganathan
- Abstract summary: This paper proposes a transfer learning-based Hoeffding tree with ADWIN (THAT) method to detect anomalous synchrophasor signatures.
The proposed algorithm is trained and tested with the OzaBag method.
- Abstract: Synchrophasor technology, or phasor measurement units (PMUs), is known to detect multiple types of oscillations or faults better than Supervisory Control and Data Acquisition (SCADA) systems, but the volume of big data (e.g., 30-120 samples per second on a single PMU) generated by these sensors at the aggregator level (e.g., several PMUs) requires special handling. Conventional machine learning or data mining methods are not suitable for such large streams of real-time data. This is primarily due to latencies associated with cloud environments (e.g., at an aggregator or PDC level), which necessitates local computing that moves processing to the edge (i.e., locally at the PMU level). This in turn requires faster real-time streaming algorithms that can run at the local level (e.g., typically on Field Programmable Gate Array (FPGA)-based controllers). This paper proposes a transfer learning-based Hoeffding tree with ADWIN (THAT) method to detect anomalous synchrophasor signatures. The proposed algorithm is trained and tested with the OzaBag method. The preliminary results with transfer learning indicate that the THAT algorithm (0.34 ms) achieves a computational time saving of 0.7 ms over OzaBag (1.04 ms), while the accuracy of both methods in detecting fault events remains at 94% for four signatures.
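The two streaming primitives the abstract leans on are well documented in the literature: a Hoeffding tree uses the Hoeffding bound to decide when a leaf has seen enough samples to commit to a split, and OzaBag approximates bootstrap resampling on a stream by giving each base model a Poisson(1)-distributed weight for every incoming example. The sketch below illustrates both primitives in plain Python; the function names and parameter values are illustrative, not taken from the paper.

```python
import math
import random

def hoeffding_bound(value_range: float, delta: float, n: int) -> float:
    """With probability 1 - delta, the true mean of a random variable with
    range `value_range` lies within this epsilon of the mean observed
    over n samples."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

def should_split(best_gain: float, second_gain: float,
                 value_range: float, delta: float, n: int) -> bool:
    """A Hoeffding tree splits a leaf once the observed gap between the
    best and second-best split candidate exceeds the Hoeffding bound."""
    return (best_gain - second_gain) > hoeffding_bound(value_range, delta, n)

def ozabag_weights(n_models: int, rng: random.Random) -> list:
    """OzaBag online bagging: each base model trains on the current
    example k times, where k ~ Poisson(1), emulating a bootstrap sample."""
    def poisson1() -> int:
        # Knuth's method for sampling Poisson with lambda = 1.
        limit, k, p = math.exp(-1.0), 0, 1.0
        while True:
            p *= rng.random()
            if p <= limit:
                return k
            k += 1
    return [poisson1() for _ in range(n_models)]

# Example usage: a gap of 0.2 versus a bound computed from 500 samples,
# and per-model example weights for a 10-model ensemble.
split_now = should_split(0.30, 0.10, value_range=1.0, delta=1e-7, n=500)
weights = ozabag_weights(n_models=10, rng=random.Random(42))
```

ADWIN (the adaptive windowing component of THAT) adds drift handling on top of this: it keeps a variable-length window of recent outcomes and shrinks it whenever two sub-windows show statistically different means.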
Related papers
- MLGWSC-1: The first Machine Learning Gravitational-Wave Search Mock Data Challenge
We present the results of the first Machine Learning Gravitational-Wave Search Mock Data Challenge (MLGWSC-1).
For this challenge, participating groups had to identify gravitational-wave signals from binary black hole mergers of increasing complexity and duration embedded in progressively more realistic noise.
Our results show that current machine learning search algorithms may already be sensitive enough in limited parameter regions to be useful for some production settings.
arXiv Detail & Related papers (2022-09-22T16:44:59Z)
- Decision Forest Based EMG Signal Classification with Low Volume Dataset Augmented with Random Variance Gaussian Noise
We produce a model that can classify six different hand gestures with a limited number of samples that generalizes well to a wider audience.
We rely on more elementary methods, such as applying random bounds to a signal, and aim to show the power these methods carry in an online setting.
arXiv Detail & Related papers (2022-06-29T23:22:18Z)
- Exploring Scalable, Distributed Real-Time Anomaly Detection for Bridge Health Monitoring
Modern real-time Structural Health Monitoring systems can generate a considerable amount of information.
Current cloud-based solutions cannot scale if the raw data has to be collected from thousands of buildings.
This paper presents a full-stack deployment of an efficient and scalable anomaly detection pipeline for SHM systems.
arXiv Detail & Related papers (2022-03-04T15:37:20Z)
- Low Latency Real-Time Seizure Detection Using Transfer Deep Learning
Scalp electroencephalogram (EEG) signals inherently have a low signal-to-noise ratio.
Most popular approaches to seizure detection using deep learning do not jointly model this information or require multiple passes over the signal.
In this paper, we exploit both simultaneously by converting the multichannel signal to a grayscale image and using transfer learning to achieve high performance.
arXiv Detail & Related papers (2022-02-16T00:03:00Z)
- Asynchronous Parallel Incremental Block-Coordinate Descent for Decentralized Machine Learning
Machine learning (ML) is a key technique for big-data-driven modelling and analysis of massive Internet of Things (IoT) based intelligent and ubiquitous computing.
For fast-increasing applications and data amounts, distributed learning is a promising emerging paradigm since it is often impractical or inefficient to share/aggregate data.
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices.
arXiv Detail & Related papers (2022-02-07T15:04:15Z)
- CFLOW-AD: Real-Time Unsupervised Anomaly Detection with Localization via Conditional Normalizing Flows
We propose a real-time model for anomaly detection with localization.
CFLOW-AD consists of a discriminatively pretrained encoder followed by multi-scale generative decoders.
Our experiments on the MVTec dataset show that CFLOW-AD outperforms previous methods by 0.36% AUROC on the detection task, and by 1.12% AUROC and 2.5% AUPRO on the localization task.
arXiv Detail & Related papers (2021-07-27T03:10:38Z)
- Turning Channel Noise into an Accelerator for Over-the-Air Principal Component Analysis
Principal component analysis (PCA) is a technique for extracting the linear structure of a dataset.
We propose deploying PCA over a multi-access channel using a gradient-descent-based algorithm.
Over-the-air aggregation is adopted to reduce the multi-access latency, giving the name over-the-air PCA.
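Computing a principal component by gradient descent, rather than by a batch eigendecomposition, is what makes PCA amenable to streaming and over-the-air aggregation. A standard stochastic form of this idea is Oja's rule; the pure-Python sketch below illustrates it on toy 2-D data and is not the paper's multi-access-channel implementation (names and constants are illustrative).

```python
import math
import random

def oja_first_component(data, lr=0.01, epochs=50, seed=0):
    """Estimate the first principal component of `data` (a list of
    zero-mean 2-D points) with Oja's rule, a stochastic-gradient
    form of PCA:
        w <- w + lr * y * (x - y * w),  where y = w . x
    Each update pulls w toward the direction of maximum variance
    while the (x - y * w) term keeps its norm near 1."""
    rng = random.Random(seed)
    w = [rng.gauss(0, 1), rng.gauss(0, 1)]
    for _ in range(epochs):
        for x in data:
            y = w[0] * x[0] + w[1] * x[1]          # projection onto w
            w = [w[0] + lr * y * (x[0] - y * w[0]),
                 w[1] + lr * y * (x[1] - y * w[1])]
    norm = math.hypot(w[0], w[1])
    return [w[0] / norm, w[1] / norm]

# Zero-mean points spread mostly along the x-axis, so the dominant
# direction is close to (1, 0.1) normalized.
points = [(a / 10.0, 0.1 * a / 10.0) for a in range(-20, 21)]
w = oja_first_component(points)
```

In the over-the-air setting, per-sample gradient contributions like these are what multiple devices would transmit simultaneously, letting the channel's additive superposition perform the aggregation step.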
arXiv Detail & Related papers (2021-04-20T16:28:33Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- Detection of gravitational-wave signals from binary neutron star mergers using machine learning
We introduce a novel neural-network based machine learning algorithm that uses time series strain data from gravitational-wave detectors.
We find an improvement by a factor of 6 in sensitivity to signals with signal-to-noise ratio below 25.
A conservative estimate indicates that our algorithm introduces on average 10.2 s of latency between signal arrival and generating an alert.
arXiv Detail & Related papers (2020-06-02T10:20:11Z)
- Towards Efficient Scheduling of Federated Mobile Devices under Computational and Statistical Heterogeneity
This paper studies the implementation of distributed learning on mobile devices.
We use data as a tuning knob and propose two time-efficient algorithms to schedule different workloads.
Compared with common benchmarks, the proposed algorithms achieve a 2-100x speedup, a 2-7% accuracy gain, and improve the convergence rate by more than 100% on CIFAR10.
arXiv Detail & Related papers (2020-05-25T18:21:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.