Network Estimation by Mixing: Adaptivity and More
- URL: http://arxiv.org/abs/2106.02803v1
- Date: Sat, 5 Jun 2021 05:17:04 GMT
- Title: Network Estimation by Mixing: Adaptivity and More
- Authors: Tianxi Li, Can M. Le
- Abstract summary: We propose a mixing strategy that leverages arbitrary available models to improve on their individual performance.
The proposed method is computationally efficient and almost tuning-free.
We show that the proposed method performs as well as the oracle estimate when the true model is included among the candidates.
- Score: 2.3478438171452014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network analysis is commonly used to study the interactions between
units of complex systems. One problem of particular interest is learning the
network's underlying connection pattern given a single and noisy instantiation.
While many methods have been proposed to address this problem in recent years,
they usually assume that the true model belongs to a known class, which is not
verifiable in most real-world applications. Consequently, network modeling
based on these methods either suffers from model misspecification or relies on
additional model selection procedures that are not well understood in theory
and can potentially be unstable in practice. To address this difficulty, we
propose a mixing strategy that leverages arbitrary available models to improve
on their individual performance. The proposed method is computationally efficient
and almost tuning-free; thus, it can be used as an off-the-shelf method for
network modeling. We show that the proposed method performs as well as the
oracle estimate when the true model is included among the candidates. More
importantly, the method remains robust and outperforms all current estimates
even when the models are misspecified. Extensive simulation examples are used
to verify the advantage of the proposed mixing method. Evaluation of link
prediction performance on 385 real-world networks from six domains also
demonstrates the universal competitiveness of the mixing method across multiple
domains.
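The abstract does not spell out how the mixture is computed, so the sketch below is only an assumed illustration of the general idea (the function names and the simplex-constrained least-squares fit are my own stand-ins, not the paper's algorithm): combine several candidate edge-probability estimates with convex weights fit on held-out entries of the adjacency matrix.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u > (css - 1) / np.arange(1, len(v) + 1))[0][-1]
    theta = (css[rho] - 1) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def mix_estimates(A, candidates, mask, n_iter=500, lr=0.1):
    """Convex mixture of candidate edge-probability matrices.

    A          : observed 0/1 adjacency matrix
    candidates : list of candidate estimates of the probability matrix
    mask       : boolean matrix marking held-out entries used to fit weights
    Returns (mixed estimate, weights). Weights are fit by projected
    gradient descent on squared error over the held-out entries.
    """
    K = len(candidates)
    P = np.stack([c[mask] for c in candidates])   # K x m held-out predictions
    a = A[mask].astype(float)
    w = np.full(K, 1.0 / K)                       # start from uniform weights
    for _ in range(n_iter):
        resid = w @ P - a                         # current mixture residual
        grad = P @ resid / a.size
        w = project_simplex(w - lr * grad)        # keep weights on the simplex
    mixed = sum(wk * c for wk, c in zip(w, candidates))
    return mixed, w
```

Fitting the weights on held-out entries, rather than on the entries the candidates were trained on, keeps the mixture from simply favoring whichever candidate memorizes the observed adjacency matrix.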
Related papers
- LoRA-Ensemble: Efficient Uncertainty Modelling for Self-attention Networks [52.46420522934253]
We introduce LoRA-Ensemble, a parameter-efficient deep ensemble method for self-attention networks.
By employing a single pre-trained self-attention network with weights shared across all members, we train member-specific low-rank matrices for the attention projections.
Our method exhibits superior calibration compared to explicit ensembles and achieves similar or better accuracy across various prediction tasks and datasets.
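As a hypothetical illustration of the LoRA-Ensemble idea (the shapes, names, and forward pass here are assumptions, not the paper's code), each ensemble member adds only a rank-r update to a single shared attention projection:

```python
import numpy as np

def lora_ensemble_projection(x, W, As, Bs):
    """Shared projection W plus member-specific low-rank updates.

    x  : input of shape (tokens, d_in)
    W  : shared pre-trained projection, shape (d_in, d_out)
    As : per-member factors, each (d_in, r)
    Bs : per-member factors, each (r, d_out)
    Returns one output per member, computed as x @ (W + A_m @ B_m):
    the base weights are shared, only the rank-r factors differ.
    """
    return [x @ W + (x @ A) @ B for A, B in zip(As, Bs)]
```

With rank r much smaller than d_in and d_out, the per-member cost is a small fraction of the shared projection, which is the parameter-efficiency argument in the summary above.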
arXiv Detail & Related papers (2024-05-23T11:10:32Z)
- Learning non-stationary and discontinuous functions using clustering, classification and Gaussian process modelling [0.0]
We propose a three-stage approach for the approximation of non-smooth functions.
The idea is to split the space following the localized behaviors or regimes of the system and build local surrogates.
The approach is tested and validated on two analytical functions and a finite element model of a tensile membrane structure.
arXiv Detail & Related papers (2022-11-30T11:11:56Z)
- Investigating Ensemble Methods for Model Robustness Improvement of Text Classifiers [66.36045164286854]
We analyze a set of existing bias features and demonstrate that no single model works best in all cases.
By choosing an appropriate bias model, we can obtain a better robustness result than baselines with a more sophisticated model design.
arXiv Detail & Related papers (2022-10-28T17:52:10Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
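HyperImpute configures a learner per column automatically; the MICE-style sketch below is only an assumed stand-in (plain least squares for every column, my own function names) to illustrate the iterative column-wise scheme the summary describes:

```python
import numpy as np

def iterative_impute(X, n_rounds=5):
    """Iterative column-wise imputation (MICE-style sketch).

    Missing entries are NaN. Each round regresses every incomplete column
    on the remaining columns (here with ordinary least squares; a framework
    like HyperImpute would instead select a model per column) and refreshes
    the imputed values.
    """
    X = X.copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):                   # initialize with column means
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_rounds):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            Z = np.column_stack([np.ones(len(X)), others])  # add intercept
            obs = ~miss[:, j]
            coef, *_ = np.linalg.lstsq(Z[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = Z[miss[:, j]] @ coef
    return X
```

Iterating the loop lets columns imputed early benefit from improved imputations of the columns they were regressed on.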
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- FiLM-Ensemble: Probabilistic Deep Learning via Feature-wise Linear Modulation [69.34011200590817]
We introduce FiLM-Ensemble, a deep, implicit ensemble method based on the concept of Feature-wise Linear Modulation.
By modulating the network activations of a single deep network with FiLM, one obtains a model ensemble with high diversity.
We show that FiLM-Ensemble outperforms other implicit ensemble methods, and it comes very close to the upper bound of an explicit ensemble of networks.
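A minimal sketch of the FiLM idea (the layer shape and ReLU choice are assumptions, not the paper's architecture): each member owns only a per-feature scale and shift applied to activations of a single shared network.

```python
import numpy as np

def film_ensemble_layer(x, W, gammas, betas):
    """One layer of an implicit FiLM ensemble.

    A single weight matrix W is shared by all members; member m owns only
    a feature-wise scale gamma_m and shift beta_m that modulate the shared
    activations, so the ensemble adds almost no extra parameters.
    """
    h = x @ W                                     # shared linear activations
    return [np.maximum(g * h + b, 0.0) for g, b in zip(gammas, betas)]
```

Because the scales and shifts differ per member, the outputs diverge as they propagate through the network, which is the source of the ensemble diversity claimed above.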
arXiv Detail & Related papers (2022-05-31T18:33:15Z)
- Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
- Fast Network Community Detection with Profile-Pseudo Likelihood Methods [19.639557431997037]
Most algorithms for fitting the block model likelihood function cannot scale to large-scale networks.
We propose a novel likelihood approach that decouples row and column labels in the likelihood function.
We show that our method provides strongly consistent estimates of the communities in a block model.
arXiv Detail & Related papers (2020-11-01T23:40:26Z)
- End-to-End Training of CNN Ensembles for Person Re-Identification [0.0]
We propose an end-to-end ensemble method for person re-identification (ReID) to address the problem of overfitting in discriminative models.
Our proposed ensemble learning framework produces several diverse and accurate base learners in a single DenseNet.
Experiments on several benchmark datasets demonstrate that our method achieves state-of-the-art results.
arXiv Detail & Related papers (2020-10-03T12:40:13Z)
- Detangling robustness in high dimensions: composite versus model-averaged estimation [11.658462692891355]
Robust methods, though ubiquitous in practice, are yet to be fully understood in the context of regularized estimation and high dimensions.
This paper provides a toolbox to further study robustness in these settings and focuses on prediction.
arXiv Detail & Related papers (2020-06-12T20:40:15Z)
- Amortized Bayesian model comparison with evidential deep learning [0.12314765641075436]
We propose a novel method for performing Bayesian model comparison using specialized deep learning architectures.
Our method is purely simulation-based and circumvents the step of explicitly fitting all alternative models under consideration to each observed dataset.
We show that our method achieves excellent results in terms of accuracy, calibration, and efficiency across the examples considered in this work.
arXiv Detail & Related papers (2020-04-22T15:15:46Z)
- Deep Unfolding Network for Image Super-Resolution [159.50726840791697]
This paper proposes an end-to-end trainable unfolding network which leverages both learning-based methods and model-based methods.
The proposed network inherits the flexibility of model-based methods to super-resolve blurry, noisy images for different scale factors via a single model.
arXiv Detail & Related papers (2020-03-23T17:55:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.