Invertible DenseNets
- URL: http://arxiv.org/abs/2010.02125v3
- Date: Fri, 8 Jan 2021 18:33:17 GMT
- Title: Invertible DenseNets
- Authors: Yura Perugachi-Diaz, Jakub M. Tomczak, Sandjai Bhulai
- Abstract summary: We introduce Invertible Dense Networks (i-DenseNets) as a more efficient alternative to Residual Flows.
We demonstrate the performance of i-DenseNets and Residual Flows on toy, MNIST, and CIFAR10 data.
- Score: 7.895232155155041
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce Invertible Dense Networks (i-DenseNets), a more
parameter-efficient alternative to Residual Flows. The method relies on an analysis of
the Lipschitz continuity of the concatenation in DenseNets, where we enforce
the invertibility of the network by satisfying the Lipschitz constraint.
Additionally, we extend this method by proposing a learnable concatenation,
which not only improves the model performance but also indicates the importance
of the concatenated representation. We demonstrate the performance of
i-DenseNets and Residual Flows on toy, MNIST, and CIFAR10 data. Both variants
of i-DenseNets outperform Residual Flows in negative log-likelihood on all
considered datasets under an equal parameter budget.
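The inversion mechanism the abstract alludes to can be illustrated with a minimal sketch (plain NumPy, hypothetical layer sizes, not the authors' implementation): a residual-style block y = x + g(x) whose branch g is kept contractive by normalizing each weight matrix's spectral norm below 1, so the input can be recovered by Banach fixed-point iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectral_normalize(W, coeff=0.9):
    # Scale W so its largest singular value is at most `coeff` < 1,
    # bounding the Lipschitz constant of the linear map.
    sigma = np.linalg.norm(W, ord=2)
    return W * (coeff / max(sigma, coeff))

# Hypothetical two-layer branch g with Lipschitz constant <= 0.9 * 0.9 < 1
# (tanh is 1-Lipschitz, so it does not increase the bound).
W1 = spectral_normalize(rng.standard_normal((4, 4)))
W2 = spectral_normalize(rng.standard_normal((4, 4)))

def g(x):
    return W2 @ np.tanh(W1 @ x)

def forward(x):
    return x + g(x)

def invert(y, n_iter=200):
    # Fixed-point iteration x_{k+1} = y - g(x_k) converges
    # because Lip(g) < 1 makes the update a contraction.
    x = y.copy()
    for _ in range(n_iter):
        x = y - g(x)
    return x

x = rng.standard_normal(4)
y = forward(x)
x_rec = invert(y)
print(np.max(np.abs(x - x_rec)))  # should be tiny (near machine precision)
```

The same contraction argument underlies Residual Flows; i-DenseNets apply it to the concatenation-based branches of a DenseNet rather than a plain residual branch.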
Related papers
- Investigating the Effect of Network Pruning on Performance and Interpretability [0.0]
We investigate the impact of different pruning techniques on the classification performance and interpretability of GoogLeNet.
We compare different retraining strategies, such as iterative pruning and one-shot pruning.
We find that with sufficient retraining epochs, the performance of the networks can approximate the performance of the default GoogLeNet.
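The one-shot versus iterative pruning strategies mentioned above can be sketched with simple magnitude pruning in NumPy (a hypothetical weight matrix; the retraining between iterative steps, which the paper finds essential, is omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))

def magnitude_prune(W, sparsity):
    # One-shot pruning: zero out the smallest-magnitude weights until
    # `sparsity` fraction of the entries are zero.
    k = int(W.size * sparsity)
    thresh = np.sort(np.abs(W), axis=None)[k]
    return np.where(np.abs(W) >= thresh, W, 0.0)

def iterative_prune(W, sparsity, steps=4):
    # Iterative pruning: reach the target sparsity gradually; in practice
    # the network is retrained between steps (omitted in this sketch).
    for s in np.linspace(sparsity / steps, sparsity, steps):
        W = magnitude_prune(W, s)
    return W

one_shot = magnitude_prune(W, 0.75)
gradual = iterative_prune(W, 0.75)
print(np.mean(one_shot == 0), np.mean(gradual == 0))  # both ~0.75
```

Without retraining the two schedules end at the same mask here; the paper's point is that the interleaved retraining lets iterative pruning recover accuracy the one-shot mask loses.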
arXiv Detail & Related papers (2024-09-29T14:57:45Z)
- Baking Symmetry into GFlowNets [58.932776403471635]
GFlowNets have exhibited promising performance in generating diverse candidates with high rewards.
This study aims to integrate symmetries into GFlowNets by identifying equivalent actions during the generation process.
arXiv Detail & Related papers (2024-06-08T10:11:10Z)
- Bifurcated Generative Flow Networks [32.40020432840822]
Bifurcated GFlowNets (BN) are a novel approach to factorize the flows into separate representations for state flows and edge-based flow allocation.
We show that BN significantly improves learning efficiency and effectiveness compared to strong baselines.
arXiv Detail & Related papers (2024-06-04T02:12:27Z)
- Model Merging by Uncertainty-Based Gradient Matching [70.54580972266096]
We propose a new uncertainty-based scheme to improve the performance by reducing the mismatch.
Our new method gives consistent improvements for large language models and vision transformers.
arXiv Detail & Related papers (2023-10-19T15:02:45Z)
- DDG-Net: Discriminability-Driven Graph Network for Weakly-supervised Temporal Action Localization [40.521076622370806]
We propose Discriminability-Driven Graph Network (DDG-Net), which explicitly models ambiguous snippets and discriminative snippets with well-designed connections.
Experiments on THUMOS14 and ActivityNet1.2 benchmarks demonstrate the effectiveness of DDG-Net.
arXiv Detail & Related papers (2023-07-31T05:48:39Z)
- Regularization of polynomial networks for image recognition [78.4786845859205]
Polynomial Networks (PNs) have emerged as an alternative method with a promising performance and improved interpretability.
We introduce a class of PNs, which are able to reach the performance of ResNet across a range of six benchmarks.
arXiv Detail & Related papers (2023-03-24T10:05:22Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
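The interval bound propagation baseline this summary compares against can be sketched in a few lines (NumPy, hypothetical layer sizes): an axis-aligned input box is pushed through a linear layer using midpoint/radius interval arithmetic, and through a monotone activation by mapping the endpoints.

```python
import numpy as np

def interval_linear(W, b, lo, hi):
    # Propagate the box [lo, hi] through x -> W @ x + b using the
    # midpoint/radius form of interval arithmetic.
    mid, rad = (lo + hi) / 2, (hi - lo) / 2
    new_mid = W @ mid + b
    new_rad = np.abs(W) @ rad
    return new_mid - new_rad, new_mid + new_rad

def interval_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints directly.
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 2)), rng.standard_normal(3)
x = np.array([0.5, -0.2])
eps = 0.1  # perturbation radius around the input

lo, hi = interval_linear(W, b, x - eps, x + eps)
lo, hi = interval_relu(lo, hi)

# Soundness check: the unperturbed output must lie inside [lo, hi].
y = np.maximum(W @ x + b, 0.0)
print(np.all((lo <= y) & (y <= hi)))  # True
```

These bounds are sound but can be loose when chained through many layers; the paper's claim is that reachability analysis tailored to implicit layers tightens them for INNs.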
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Manifold Regularized Dynamic Network Pruning [102.24146031250034]
This paper proposes a new paradigm that dynamically removes redundant filters by embedding the manifold information of all instances into the space of pruned networks.
The effectiveness of the proposed method is verified on several benchmarks, which shows better performance in terms of both accuracy and computational cost.
arXiv Detail & Related papers (2021-03-10T03:59:03Z)
- Invertible DenseNets with Concatenated LipSwish [8.315801422499861]
We introduce Invertible Dense Networks (i-DenseNets) as a more efficient alternative to Residual Flows.
We show that the proposed model outperforms Residual Flows when trained as a hybrid model that is both generative and discriminative.
arXiv Detail & Related papers (2021-02-04T15:45:33Z)
- Cascade Network with Guided Loss and Hybrid Attention for Finding Good Correspondences [33.65360396430535]
Given a putative correspondence set of an image pair, we propose a neural network that finds correct correspondences via a binary classifier.
We propose a new Guided Loss that can directly use evaluation criterion (Fn-measure) as guidance to dynamically adjust the objective function.
We then propose a hybrid attention block for feature extraction, which integrates Bayesian context normalization (BACN) and channel-wise attention (CA).
arXiv Detail & Related papers (2021-01-31T08:33:20Z)
- Toward fast and accurate human pose estimation via soft-gated skip connections [97.06882200076096]
This paper is on highly accurate and highly efficient human pose estimation.
We re-analyze this design choice in the context of improving both the accuracy and the efficiency over the state-of-the-art.
Our model achieves state-of-the-art results on the MPII and LSP datasets.
arXiv Detail & Related papers (2020-02-25T18:51:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.