Sparsistent filtering of comovement networks from high-dimensional data
- URL: http://arxiv.org/abs/2101.09174v1
- Date: Fri, 22 Jan 2021 15:44:41 GMT
- Title: Sparsistent filtering of comovement networks from high-dimensional data
- Authors: Arnab Chakrabarti and Anindya S. Chakrabarti
- Abstract summary: We introduce a new technique to filter large-dimensional networks arising out of the dynamical behavior of the constituent nodes.
As opposed to well-known network filters that rely on preserving key topological properties of the realized network, our method treats the spectrum as the fundamental object and preserves spectral properties.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Network filtering is an important form of dimension reduction to isolate the
core constituents of large and interconnected complex systems. We introduce a
new technique to filter large-dimensional networks arising out of the
dynamical behavior of the constituent nodes, exploiting their spectral
properties. As opposed to well-known network filters that rely on preserving
key topological properties of the realized network, our method treats the
spectrum as the fundamental object and preserves spectral properties. Applying
asymptotic theory for high-dimensional data to the filter, we show that it can
be tuned to interpolate between zero filtering and maximal filtering, the
latter inducing sparsity and consistency (sparsistency) while having the least
spectral distance from a linear shrinkage estimator. We apply our proposed
filter to covariance networks constructed from financial data, to extract the
key subnetwork embedded in the full sample network.
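To make the mechanism concrete, here is a minimal numpy sketch of eigenvalue-based filtering with a tuning parameter that interpolates between no filtering and a Ledoit-Wolf-style linear shrinkage target. This illustrates the general idea only; it is not the estimator proposed in the paper.

```python
import numpy as np

def spectral_filter(S, alpha):
    """Toy spectral filter: keep the eigenvectors of the sample
    covariance S but shrink its eigenvalues toward their grand mean,
    the target of a Ledoit-Wolf-style linear shrinkage estimator.
    alpha = 0 returns S unchanged (zero filtering); alpha = 1 shrinks
    maximally. Illustrative only -- not the paper's estimator."""
    mu = np.trace(S) / S.shape[0]              # grand mean of the spectrum
    eigval, eigvec = np.linalg.eigh(S)
    shrunk = (1 - alpha) * eigval + alpha * mu
    return eigvec @ np.diag(shrunk) @ eigvec.T

rng = np.random.default_rng(0)
X = rng.standard_normal((250, 50))             # n = 250 returns, p = 50 assets
S = np.cov(X, rowvar=False)
S_half = spectral_filter(S, alpha=0.5)         # midway between the extremes
```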
Related papers
- Bayesian Interpolation with Deep Linear Networks [92.1721532941863]
Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory.
We show that linear networks make provably optimal predictions at infinite depth.
We also show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth.
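For readers unfamiliar with the setting, a deep *linear* network computes nothing more than a product of matrices. The sketch below (not from the paper) verifies that collapse, which is why depth shows up in the prior, posterior, and training dynamics rather than in the function class.

```python
import numpy as np

# A depth-L linear network computes W_L ... W_2 W_1 x: extra depth adds
# no expressive power, so depth effects are purely statistical/dynamical.
rng = np.random.default_rng(0)
depth, width = 5, 8
Ws = [rng.standard_normal((width, width)) / np.sqrt(width) for _ in range(depth)]

x = rng.standard_normal(width)
h = x
for W in Ws:                                   # forward pass, layer by layer
    h = W @ h

W_eff = np.linalg.multi_dot(Ws[::-1])          # single equivalent matrix
assert np.allclose(h, W_eff @ x)
```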
arXiv Detail & Related papers (2022-12-29T20:57:46Z) - Insights into Deep Non-linear Filters for Improved Multi-channel Speech Enhancement [21.422488450492434]
In a traditional setting, linear spatial filtering (beamforming) and single-channel post-filtering are commonly performed separately.
There is a trend towards employing deep neural networks (DNNs) to learn a joint spatial and tempo-spectral non-linear filter.
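For contrast with the learned joint filter, here is the classic linear spatial filtering stage in question: a delay-and-sum beamformer. This is a hypothetical minimal sketch with integer-sample delays and no post-filter.

```python
import numpy as np

def delay_and_sum(mics, delays_s, fs):
    """Classic linear spatial filter: time-align each microphone by its
    steering delay (rounded to whole samples) and average. A DNN-based
    joint filter replaces this stage *and* the single-channel post-filter
    with one non-linear spatial and tempo-spectral mapping."""
    out = np.zeros(mics.shape[1])
    for sig, d in zip(mics, delays_s):
        out += np.roll(sig, -int(round(d * fs)))
    return out / mics.shape[0]

fs = 16_000
mics = np.random.default_rng(2).standard_normal((4, fs))  # 4 mics, 1 second
enhanced = delay_and_sum(mics, delays_s=[0.0, 1e-4, 2e-4, 3e-4], fs=fs)
```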
arXiv Detail & Related papers (2022-06-27T13:54:14Z) - Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
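The scoring signal is simple to reproduce. A minimal PyTorch sketch (the layer choice and keep-ratio below are illustrative, not from the paper) ranks one conv layer's filters by the absolute BatchNorm scale that follows it:

```python
import torch
import torchvision

# Rank filters by |gamma| of the BatchNorm layer that follows the conv:
# a small learned scale means the channel barely contributes, making it
# a pruning candidate. Rewiring the network after pruning is omitted.
model = torchvision.models.resnet18(weights=None)
bn = model.layer1[0].bn1                       # BN after layer1[0].conv1
scores = bn.weight.detach().abs()              # one gamma per output channel
keep = int(0.7 * scores.numel())               # e.g. keep the top 70%
kept = torch.argsort(scores, descending=True)[:keep]
print(f"keeping {keep}/{scores.numel()} filters")
```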
arXiv Detail & Related papers (2021-12-02T12:04:59Z) - Learning Versatile Convolution Filters for Efficient Visual Recognition [125.34595948003745]
This paper introduces versatile filters to construct efficient convolutional neural networks.
We conduct a theoretical analysis of network complexity and introduce an efficient convolution scheme.
Experimental results on benchmark datasets and neural networks demonstrate that our versatile filters achieve accuracy comparable to that of the original filters.
arXiv Detail & Related papers (2021-09-20T06:07:14Z) - Message Passing in Graph Convolution Networks via Adaptive Filter Banks [81.12823274576274]
We present a novel graph convolution operator, termed BankGCN.
It decomposes multi-channel signals on graphs into subspaces and handles particular information in each subspace with an adapted filter.
It achieves excellent performance in graph classification on a collection of benchmark graph datasets.
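The subspace idea can be pictured with a fixed (not learned) spectral filter bank on a graph Laplacian; the band split and gains below are hypothetical stand-ins for BankGCN's adapted, learned filters.

```python
import numpy as np

def spectral_filter_bank(L, x, band_gains):
    """Toy filter bank: split the Laplacian spectrum into contiguous
    bands (subspaces) and scale the signal's component in each band by
    its own gain. BankGCN *learns* an adapted filter per subspace; here
    the gains are fixed scalars for illustration."""
    eigval, eigvec = np.linalg.eigh(L)
    coeffs = eigvec.T @ x                      # graph Fourier transform
    bands = np.array_split(np.arange(len(eigval)), len(band_gains))
    for idx, g in zip(bands, band_gains):
        coeffs[idx] *= g
    return eigvec @ coeffs                     # back to the vertex domain

A = np.array([[0, 1, 0, 1],                    # small example graph (4-cycle)
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(1)) - A
y = spectral_filter_bank(L, np.array([1.0, -1.0, 2.0, 0.0]), [1.0, 0.2])
```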
arXiv Detail & Related papers (2021-06-18T04:23:34Z) - Unsharp Mask Guided Filtering [53.14430987860308]
The goal of this paper is guided image filtering, which emphasizes the importance of structure transfer during filtering.
We propose a new and simplified formulation of the guided filter inspired by unsharp masking.
Our formulation enjoys a filtering prior from a low-pass filter and enables explicit structure transfer by estimating a single coefficient.
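A toy version of the unsharp-masking view follows; a Gaussian low-pass filter and a hand-set coefficient stand in for the learned components.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp_guided(target, guide, beta, sigma=2.0):
    """Toy unsharp-mask-style guided filter: a low-pass version of the
    target supplies the filtering prior, and high-frequency structure
    from the guide is transferred, scaled by a single coefficient beta.
    In the paper beta is estimated; here it is hand-set."""
    low = gaussian_filter(target, sigma)               # filtering prior
    detail = guide - gaussian_filter(guide, sigma)     # guide's structure
    return low + beta * detail

rng = np.random.default_rng(3)
guide = rng.standard_normal((64, 64)).cumsum(0)        # correlated "image"
target = guide + 0.5 * rng.standard_normal((64, 64))   # noisy target
out = unsharp_guided(target, guide, beta=0.8)
```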
arXiv Detail & Related papers (2021-06-02T19:15:34Z) - Generalized Approach to Matched Filtering using Neural Networks [4.535489275919893]
We make a key observation on the relationship between emerging deep learning and traditional techniques: matched filtering is formally equivalent to a particular neural network.
We show that the proposed neural network architecture can outperform matched filtering.
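The equivalence is easy to see in one dimension: a matched filter is a cross-correlation with a template, which is exactly what a single convolution layer with fixed weights computes. A minimal numpy sketch with a synthetic signal and template:

```python
import numpy as np

def matched_filter(signal, template):
    """Cross-correlate signal with template (matched filtering). This is
    the same operation as a conv layer whose weights are the template."""
    return np.correlate(signal, template, mode="valid")

rng = np.random.default_rng(1)
template = np.sin(np.linspace(0, 4 * np.pi, 64))
signal = rng.standard_normal(1024)
signal[300:364] += template                    # bury the template in noise
response = matched_filter(signal, template)
print(int(np.argmax(response)))                # peaks near offset 300
```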
arXiv Detail & Related papers (2021-04-08T17:59:07Z) - Dependency Aware Filter Pruning [74.69495455411987]
Pruning a proportion of unimportant filters is an efficient way to mitigate the inference cost.
Previous work prunes filters according to their weight norms or the corresponding batch-norm scaling factors.
We propose a novel mechanism to dynamically control the sparsity-inducing regularization so as to achieve the desired sparsity.
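One way to picture "dynamically controlled" sparsity regularization is a feedback rule on the penalty strength. The proportional controller below is a hypothetical sketch, not the paper's mechanism.

```python
import torch

def update_penalty(scales, lam, target, gain=1e-3, eps=1e-2):
    """Toy feedback rule: measure the fraction of near-zero filter
    scales and nudge the sparsity penalty lam up when the network is
    not yet sparse enough, down when it overshoots. Hypothetical --
    the paper's controller differs."""
    sparsity = (scales.abs() < eps).float().mean().item()
    return max(lam + gain * (target - sparsity), 0.0)

scales = torch.randn(256) * 0.05               # stand-in filter scales
lam = 1e-4
for _ in range(3):                             # a few control steps
    lam = update_penalty(scales, lam, target=0.5)
```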
arXiv Detail & Related papers (2020-05-06T07:41:22Z) - MINT: Deep Network Compression via Mutual Information-based Neuron Trimming [32.449324736645586]
Mutual Information-based Neuron Trimming (MINT) approaches deep compression via pruning.
MINT enforces sparsity based on the strength of the relationship between filters of adjacent layers.
When pruning a network, we ensure that retained filters contribute the majority of the information towards succeeding layers.
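The key quantity is the mutual information between activations of filters in adjacent layers. A histogram-based estimate, shown below, is a rough stand-in for the paper's estimator.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Histogram MI estimate between two activation channels. MINT keeps
    the filters whose activations carry the most information toward the
    next layer; this estimator is a rough stand-in for the paper's."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)        # marginal over x-bins
    py = pxy.sum(axis=0, keepdims=True)        # marginal over y-bins
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / (px * py)[nz])).sum())

rng = np.random.default_rng(4)
a = rng.standard_normal(10_000)                  # filter activation, layer l
b = 0.7 * a + 0.3 * rng.standard_normal(10_000)  # dependent filter, layer l+1
print(round(mutual_information(a, b), 3))        # noticeably > 0
```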
arXiv Detail & Related papers (2020-03-18T21:05:02Z)