Boosting the Efficiency of Parametric Detection with Hierarchical Neural
Networks
- URL: http://arxiv.org/abs/2207.11583v1
- Date: Sat, 23 Jul 2022 19:23:00 GMT
- Title: Boosting the Efficiency of Parametric Detection with Hierarchical Neural
Networks
- Authors: Jingkai Yan, Robert Colgan, John Wright, Zsuzsa Márka, Imre Bartos,
  Szabolcs Márka
- Abstract summary: We propose Hierarchical Detection Network (HDN), a novel approach to efficient detection.
The network is trained using a novel loss function, which encodes simultaneously the goals of statistical accuracy and efficiency.
We show that training a three-layer HDN initialized from a two-layer model can further boost both accuracy and efficiency.
- Score: 4.1410005218338695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gravitational wave astronomy is a vibrant field that leverages both classic
and modern data processing techniques for the understanding of the universe.
Various approaches have been proposed for improving the efficiency of the
detection scheme, with hierarchical matched filtering being an important
strategy. Meanwhile, deep learning methods have recently demonstrated both
consistency with matched filtering methods and remarkable statistical
performance. In this work, we propose Hierarchical Detection Network (HDN), a
novel approach to efficient detection that combines ideas from hierarchical
matching and deep learning. The network is trained using a novel loss function,
which encodes simultaneously the goals of statistical accuracy and efficiency.
We discuss the source of complexity reduction of the proposed model, and
describe a general recipe for initialization with each layer specializing in
different regions. We demonstrate the performance of HDN with experiments using
open LIGO data and synthetic injections, and observe with two-layer models a
$79\%$ efficiency gain compared with matched filtering at an equal error rate
of $0.2\%$. Furthermore, we show how training a three-layer HDN initialized
using a two-layer model can further boost both accuracy and efficiency,
highlighting the power of multiple simple layers in efficient detection.
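
As a rough illustration of the hierarchical idea described in the abstract, the sketch below pairs a cheap first-layer classifier with a more expensive second layer and trains both with a loss that adds a forwarding-cost term to the per-layer accuracy terms. The layer sizes, the soft forwarding indicator, and the weight `lambda_cost` are illustrative assumptions, not the authors' architecture or loss function.

```python
# Minimal sketch of a two-layer hierarchical detector in the spirit of HDN.
# Layer sizes, the soft forwarding indicator, and `lambda_cost` are
# illustrative assumptions, not the authors' architecture or loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalDetector(nn.Module):
    def __init__(self, in_dim: int, cheap_hidden: int = 32, deep_hidden: int = 256):
        super().__init__()
        # Layer 1: cheap model intended to resolve "easy" inputs on its own.
        self.layer1 = nn.Sequential(nn.Linear(in_dim, cheap_hidden), nn.ReLU(),
                                    nn.Linear(cheap_hidden, 1))
        # Layer 2: more expensive model for inputs layer 1 is unsure about.
        self.layer2 = nn.Sequential(nn.Linear(in_dim, deep_hidden), nn.ReLU(),
                                    nn.Linear(deep_hidden, deep_hidden), nn.ReLU(),
                                    nn.Linear(deep_hidden, 1))

    def forward(self, x):
        return self.layer1(x).squeeze(-1), self.layer2(x).squeeze(-1)

def hierarchical_loss(logit1, logit2, labels, lambda_cost=0.1, band=1.0):
    """Per-layer accuracy terms plus a proxy for computational cost: the
    expected fraction of samples whose layer-1 score falls in an uncertain
    band and would therefore be forwarded to the expensive layer 2."""
    acc1 = F.binary_cross_entropy_with_logits(logit1, labels)
    acc2 = F.binary_cross_entropy_with_logits(logit2, labels)
    forward_prob = torch.sigmoid(band - logit1.abs())  # soft "|score| < band"
    return acc1 + acc2 + lambda_cost * forward_prob.mean()

# Toy usage with random features standing in for whitened strain segments.
x = torch.randn(64, 128)
y = torch.randint(0, 2, (64,)).float()
model = HierarchicalDetector(in_dim=128)
logit1, logit2 = model(x)
hierarchical_loss(logit1, logit2, y).backward()
```

At inference, only samples whose first-layer score lands inside the uncertain band would be run through the deeper layer, which is where the complexity reduction in such a scheme comes from.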
Related papers
- Efficient learning of differential network in multi-source non-paranormal graphical models [2.5905193932831585]
This paper addresses the learning of sparse structural changes, or differential networks, between two classes of non-paranormal graphical models.
Our strategy of combining datasets from multiple sources is shown to be very effective in inferring differential networks in real-world problems.
arXiv Detail & Related papers (2024-10-03T13:59:38Z)
- Adaptive Anomaly Detection in Network Flows with Low-Rank Tensor Decompositions and Deep Unrolling [9.20186865054847]
Anomaly detection (AD) is increasingly recognized as a key component for ensuring the resilience of future communication systems.
This work considers AD in network flows using incomplete measurements.
We propose a novel block-successive convex approximation algorithm based on a regularized model-fitting objective.
Inspired by Bayesian approaches, we extend the model architecture to perform online adaptation to per-flow and per-time-step statistics.
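
To make the flow-level anomaly idea concrete, here is a much-simplified sketch that splits a (time x flow) traffic matrix into a low-rank part and a sparse residual by alternating a rank-r projection with soft-thresholding; the paper's tensor decomposition, handling of incomplete measurements, online adaptation, and deep unrolling are not reproduced here.

```python
# Much-simplified sketch of low-rank-plus-sparse anomaly flagging on a
# (time x flow) traffic matrix, alternating a rank-r projection with
# soft-thresholding of the residual. The paper's tensor model, handling of
# missing measurements, online adaptation, and deep unrolling are omitted.
import numpy as np

def soft_threshold(x, tau):
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def low_rank_plus_sparse(Y, rank=1, tau=1.0, iters=30):
    S = np.zeros_like(Y)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-r "normal traffic" part
        S = soft_threshold(Y - L, tau)             # sparse residual = anomalies
    return L, S

rng = np.random.default_rng(0)
Y = rng.standard_normal((100, 1)) @ rng.standard_normal((1, 20))  # nominal flows
Y[40, 7] += 8.0                                                   # injected spike
L, S = low_rank_plus_sparse(Y)
anomalous_entries = np.argwhere(np.abs(S) > 3.0)  # large |S| flags (time, flow) pairs
```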
arXiv Detail & Related papers (2024-09-17T19:59:57Z)
- Anomaly Detection with Ensemble of Encoder and Decoder [2.8199078343161266]
Anomaly detection in power grids aims to detect and discriminate anomalies caused by cyber attacks against the power system.
We propose a novel anomaly detection method by modeling the data distribution of normal samples via multiple encoders and decoders.
Experiment results on network intrusion and power system datasets demonstrate the effectiveness of our proposed method.
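
A minimal sketch of the ensemble idea, assuming reconstruction error averaged over several autoencoders trained on normal samples serves as the anomaly score; the network shapes and the thresholding rule are illustrative, not the paper's exact formulation.

```python
# Minimal sketch: score anomalies by reconstruction error averaged over an
# ensemble of autoencoders intended to be trained only on normal samples
# (training loop omitted for brevity). Shapes and threshold are assumptions.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, in_dim: int, latent: int = 16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, in_dim))

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(ensemble, x):
    # Higher score = larger disagreement with the learned "normal" distribution.
    with torch.no_grad():
        errs = [((ae(x) - x) ** 2).mean(dim=1) for ae in ensemble]
    return torch.stack(errs).mean(dim=0)

ensemble = [AutoEncoder(in_dim=40) for _ in range(3)]
scores = anomaly_score(ensemble, torch.randn(8, 40))
flagged = scores > scores.mean() + 2 * scores.std()  # simple threshold rule
```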
arXiv Detail & Related papers (2023-03-11T15:49:29Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in the cross-silo FL setting.
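
For intuition, the sketch below shows a momentum-based variance-reduced (STORM-style) gradient estimator of the kind such methods build on, applied to a toy least-squares problem; the federated aggregation, adaptive step sizes, and communication schedule of FAFED itself are omitted.

```python
# Toy sketch of a momentum-based variance-reduced (STORM-style) gradient
# estimator on a least-squares problem; FAFED's federated aggregation,
# adaptive step sizes, and communication schedule are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
b = rng.standard_normal(200)

def stoch_grad(x, idx):
    # Minibatch gradient of 0.5 * ||A x - b||^2 restricted to rows `idx`.
    Ai, bi = A[idx], b[idx]
    return Ai.T @ (Ai @ x - bi) / len(idx)

lr, a = 0.01, 0.1                                        # step size, momentum
x = np.zeros(10)
d = stoch_grad(x, rng.choice(200, 20, replace=False))    # initial estimator
for _ in range(200):
    x_prev, x = x, x - lr * d
    idx = rng.choice(200, 20, replace=False)
    # The correction term reuses the same minibatch at x and x_prev, which
    # is what keeps the estimator's variance low.
    d = stoch_grad(x, idx) + (1 - a) * (d - stoch_grad(x_prev, idx))
```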
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Efficient Few-Shot Object Detection via Knowledge Inheritance [62.36414544915032]
Few-shot object detection (FSOD) aims at learning a generic detector that can adapt to unseen tasks with scarce training samples.
We present an efficient pretrain-transfer framework (PTF) baseline with no computational increment.
We also propose an adaptive length re-scaling (ALR) strategy to alleviate the vector length inconsistency between the predicted novel weights and the pretrained base weights.
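
A minimal sketch of length re-scaling in the spirit of ALR: predicted novel-class weight vectors are rescaled so their average norm matches that of the pretrained base-class weights. The exact re-scaling rule in the paper may differ, and the tensors here are synthetic placeholders.

```python
# Minimal sketch of ALR-style length re-scaling: match the average L2 norm
# of predicted novel-class weights to that of pretrained base-class weights.
# The precise rule used in the paper may differ; data here is synthetic.
import torch

def rescale_novel_weights(novel_w: torch.Tensor, base_w: torch.Tensor) -> torch.Tensor:
    """novel_w: (num_novel, dim) predicted weights; base_w: (num_base, dim)."""
    base_norm = base_w.norm(dim=1).mean()
    novel_norm = novel_w.norm(dim=1, keepdim=True).clamp_min(1e-8)
    return novel_w / novel_norm * base_norm

base = torch.randn(60, 256) * 3.0      # pretrained base-class weights
novel = torch.randn(5, 256) * 0.5      # predicted novel-class weights
novel_rescaled = rescale_novel_weights(novel, base)
```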
arXiv Detail & Related papers (2022-03-23T06:24:31Z)
- CAFE: Learning to Condense Dataset by Aligning Features [72.99394941348757]
We propose a novel scheme to Condense the dataset by Aligning FEatures (CAFE).
At the heart of our approach is an effective strategy to align features from the real and synthetic data across various scales.
We validate the proposed CAFE across various datasets, and demonstrate that it generally outperforms the state of the art.
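
As a rough illustration of feature alignment, the sketch below matches per-layer mean features of a real batch and a learnable synthetic batch with a squared-error penalty; the tiny MLP backbone, the choice of statistics, and the single-scale setup are assumptions rather than CAFE's actual multi-scale procedure.

```python
# Rough sketch of feature alignment between real and synthetic batches: the
# squared difference of per-layer mean features acts as the matching penalty.
# The tiny MLP backbone and the single statistic per layer are assumptions.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())

def layer_features(x):
    feats, h = [], x
    for layer in backbone:
        h = layer(h)
        if isinstance(layer, nn.ReLU):
            feats.append(h)          # keep post-activation features per "scale"
    return feats

def alignment_loss(real, synthetic):
    loss = torch.zeros(())
    for fr, fs in zip(layer_features(real), layer_features(synthetic)):
        loss = loss + ((fr.mean(dim=0) - fs.mean(dim=0)) ** 2).mean()
    return loss

real_batch = torch.randn(128, 32)
synth_batch = torch.randn(16, 32, requires_grad=True)  # condensed set being learned
alignment_loss(real_batch, synth_batch).backward()     # gradients flow to synth_batch
```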
arXiv Detail & Related papers (2022-03-03T05:58:49Z)
- Hybridization of Capsule and LSTM Networks for unsupervised anomaly detection on multivariate data [0.0]
This paper introduces a novel NN architecture which hybridises Long Short-Term Memory (LSTM) and Capsule Networks into a single network.
The proposed method uses an unsupervised learning technique to overcome the issues with finding large volumes of labelled training data.
arXiv Detail & Related papers (2022-02-11T10:33:53Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Learning to Generate Content-Aware Dynamic Detectors [62.74209921174237]
We introduce a new perspective on designing efficient detectors: automatically generating sample-adaptive model architectures.
We introduce a coarse-to-fine strategy tailored for object detection to guide the learning of dynamic routing.
Experiments on the MS-COCO dataset demonstrate that CADDet achieves 1.8 points higher mAP with 10% fewer FLOPs compared with vanilla routing.
arXiv Detail & Related papers (2020-12-08T08:05:20Z)
- Improving Sample Efficiency with Normalized RBF Kernels [0.0]
This paper explores how neural networks with normalized Radial Basis Function (RBF) kernels can be trained to achieve better sample efficiency.
We show how this kind of output layer can find embedding spaces where the classes are compact and well-separated.
Experiments on CIFAR-10 and CIFAR-100 show that networks with normalized kernels as the output layer can achieve higher sample efficiency, greater compactness, and better class separability.
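
A minimal sketch of a normalized RBF output layer, assuming Gaussian kernels around learnable class centers; normalizing the responses across classes is equivalent to a softmax over negative scaled squared distances, which is what the code computes for numerical stability. The fixed width `gamma` is an assumption.

```python
# Minimal sketch of a normalized RBF output layer: class scores are Gaussian
# kernel responses to learnable class centers, normalized across classes.
# The normalization equals a softmax over -gamma * squared distances, which
# is how it is computed here for numerical stability. The fixed `gamma` is
# an assumption; the paper may handle the kernel width differently.
import torch
import torch.nn as nn

class NormalizedRBFOutput(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int, gamma: float = 0.05):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.gamma = gamma

    def forward(self, feats):
        d2 = torch.cdist(feats, self.centers).pow(2)   # squared distances to centers
        return torch.softmax(-self.gamma * d2, dim=1)  # normalized RBF responses

probs = NormalizedRBFOutput(feat_dim=64, num_classes=10)(torch.randn(4, 64))
# Training would maximize log probs[i, y_i], pulling each embedding toward its
# class center and away from the others, encouraging compact, separated classes.
```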
arXiv Detail & Related papers (2020-07-30T11:40:29Z)
- One-Shot Object Detection without Fine-Tuning [62.39210447209698]
We introduce a two-stage model consisting of a first stage Matching-FCOS network and a second stage Structure-Aware Relation Module.
We also propose novel training strategies that effectively improve detection performance.
Our method exceeds the state-of-the-art one-shot performance consistently on multiple datasets.
arXiv Detail & Related papers (2020-05-08T01:59:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.