AIR-Net: Adaptive and Implicit Regularization Neural Network for Matrix
Completion
- URL: http://arxiv.org/abs/2110.07557v1
- Date: Tue, 12 Oct 2021 04:13:33 GMT
- Title: AIR-Net: Adaptive and Implicit Regularization Neural Network for Matrix
Completion
- Authors: Zhemin Li, Hongxia Wang
- Abstract summary: This work combines adaptive and implicit low-rank regularization that captures the prior dynamically according to the current recovered matrix.
Theoretical analyses show that the adaptive part of the AIR-Net enhances implicit regularization.
With complete flexibility to select neural networks for matrix representation, AIR-Net can be extended to solve more general inverse problems.
- Score: 6.846820212701818
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventionally, the matrix completion (MC) model aims to recover a matrix
from partially observed elements. Accurate recovery necessarily requires a
regularization encoding priors of the unknown matrix/signal properly. However,
encoding the priors accurately for complex natural signals is difficult, and
even then, the model might not generalize well outside the particular matrix
type. This work combines adaptive and implicit low-rank regularization that
captures the prior dynamically according to the current recovered matrix.
Furthermore, we aim to answer the question: how does adaptive regularization
affect implicit regularization? We utilize neural networks to represent
Adaptive and Implicit Regularization and name the proposed model
\textit{AIR-Net}. Theoretical analyses show that the adaptive part of
AIR-Net enhances implicit regularization. In addition, the adaptive regularizer
vanishes at the end of training and thus avoids saturation issues. Numerical experiments
for various data demonstrate the effectiveness of AIR-Net, especially when the
locations of missing elements are not randomly chosen. With complete
flexibility to select neural networks for matrix representation, AIR-Net can be
extended to solve more general inverse problems.
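The implicit half of this idea can be illustrated with a generic sketch: complete a low-rank matrix with a plain over-parameterized factorization trained by gradient descent, so the low-rank bias comes from the parameterization itself rather than an explicit penalty. This is not AIR-Net (the paper's adaptive, learned regularizer is omitted), and all sizes, learning rates, and iteration counts below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth rank-2 matrix, scaled to O(1) spectral norm,
# with roughly half of its entries observed.
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n)) / n
mask = (rng.random((n, n)) < 0.5).astype(float)  # 1 = observed

# Over-parameterized factorization X = U @ V with full-width factors.
# Gradient descent from a small initialization is implicitly biased
# toward low-rank solutions -- no explicit rank penalty is used.
U = 0.01 * rng.standard_normal((n, n))
V = 0.01 * rng.standard_normal((n, n))

lr = 0.2
for _ in range(5000):
    G = mask * (U @ V - M)   # gradient of 0.5*||mask*(UV - M)||_F^2 w.r.t. UV
    gU, gV = G @ V.T, U.T @ G
    U -= lr * gU
    V -= lr * gV

X = U @ V
err = np.linalg.norm((1 - mask) * (X - M)) / np.linalg.norm((1 - mask) * M)
print(f"relative error on unobserved entries: {err:.3f}")
```

The unobserved entries are recovered reasonably well even though the loss only touches observed ones, which is the implicit-regularization effect the abstract builds on; AIR-Net adds an adaptive regularizer on top of a bias of this kind.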
Related papers
- Unsupervised Adaptive Normalization [0.07499722271664146]
Unsupervised Adaptive Normalization (UAN) is an innovative algorithm that seamlessly integrates clustering for normalization with deep neural network learning.
UAN outperforms classical methods by adapting to the target task and is effective in classification and domain adaptation.
arXiv Detail & Related papers (2024-09-07T08:14:11Z) - Matrix Completion via Nonsmooth Regularization of Fully Connected Neural Networks [7.349727826230864]
It has been shown that enhanced performance could be attained by using nonlinear estimators such as deep neural networks.
In this paper, we control over-fitting by regularizing the FCNN model in terms of the norm of its intermediate representations.
Our simulations indicate the superiority of the proposed algorithm in comparison with existing linear and nonlinear algorithms.
arXiv Detail & Related papers (2024-03-15T12:00:37Z) - Sparse-Input Neural Network using Group Concave Regularization [10.103025766129006]
Simultaneous feature selection and non-linear function estimation are challenging in neural networks.
We propose a framework of sparse-input neural networks using group concave regularization for feature selection in both low-dimensional and high-dimensional settings.
arXiv Detail & Related papers (2023-07-01T13:47:09Z) - Graph Polynomial Convolution Models for Node Classification of
Non-Homophilous Graphs [52.52570805621925]
We investigate efficient learning from higher-order graph convolution and learning directly from the adjacency matrix for node classification.
We show that the resulting model leads to new graphs and a residual scaling parameter.
We demonstrate that the proposed methods obtain improved accuracy for node classification on non-homophilous graphs.
arXiv Detail & Related papers (2022-09-12T04:46:55Z) - Adaptive and Implicit Regularization for Matrix Completion [17.96984956202579]
This paper proposes a new adaptive and implicit low-rank regularization that captures the low-rank prior dynamically from the training data.
We show that the adaptive regularization of AIR enhances the implicit regularization and vanishes at the end of training.
We validate AIR's effectiveness on various benchmark tasks, indicating that AIR is particularly favorable for scenarios where the missing entries are non-uniform.
arXiv Detail & Related papers (2022-08-11T05:00:58Z) - Efficient Semantic Image Synthesis via Class-Adaptive Normalization [116.63715955932174]
Class-adaptive normalization (CLADE) is a lightweight but equally-effective variant that is only adaptive to semantic class.
We introduce intra-class positional map encoding calculated from semantic layouts to modulate the normalization parameters of CLADE.
The proposed CLADE can be generalized to different SPADE-based methods while achieving generation quality comparable to SPADE.
arXiv Detail & Related papers (2020-12-08T18:59:32Z) - A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix
Completion [60.52730146391456]
We propose a new scalable low-rank regularizer, the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z) - Understanding Implicit Regularization in Over-Parameterized Single Index
Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z) - Controllable Orthogonalization in Training DNNs [96.1365404059924]
Orthogonality is widely used for training deep neural networks (DNNs) due to its ability to maintain all singular values of the Jacobian close to 1.
This paper proposes a computationally efficient and numerically stable orthogonalization method using Newton's iteration (ONI).
We show that our method improves the performance of image classification networks by effectively controlling the orthogonality to provide an optimal tradeoff between optimization benefits and representational capacity reduction.
We also show that ONI stabilizes the training of generative adversarial networks (GANs) by maintaining the Lipschitz continuity of a network, similar to spectral normalization.
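The Newton-type orthogonalization this blurb refers to can be sketched with the classical Newton-Schulz iteration; this is a generic textbook variant, not necessarily the exact ONI update, and the scaling and iteration count are illustrative assumptions.

```python
import numpy as np

def newton_schulz_orthogonalize(W, iters=30):
    """Approximately orthogonalize W via the Newton-Schulz iteration.

    Iterates Y <- Y (3I - Y^T Y) / 2, which drives all singular values
    of Y toward 1 (i.e., toward an orthogonal matrix) when the scaled
    input has spectral norm at most 1.
    """
    # Dividing by the Frobenius norm upper-bounds the spectral norm by 1,
    # which guarantees the iteration converges.
    Y = W / np.linalg.norm(W)
    I = np.eye(W.shape[1])
    for _ in range(iters):
        Y = Y @ (3 * I - Y.T @ Y) / 2
    return Y

rng = np.random.default_rng(0)
W = rng.standard_normal((5, 5))
Q = newton_schulz_orthogonalize(W)
# Q^T Q should now be close to the identity.
print(np.abs(Q.T @ Q - np.eye(5)).max())
```

The appeal over an SVD-based orthogonalization is that the loop uses only matrix multiplications, which are cheap and differentiable, matching the "computationally efficient and numerically stable" claim above.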
arXiv Detail & Related papers (2020-04-02T10:14:27Z) - Multi-Objective Matrix Normalization for Fine-grained Visual Recognition [153.49014114484424]
Bilinear pooling achieves great success in fine-grained visual recognition (FGVC).
Recent methods have shown that the matrix power normalization can stabilize the second-order information in bilinear features.
We propose an efficient Multi-Objective Matrix Normalization (MOMN) method that can simultaneously normalize a bilinear representation.
arXiv Detail & Related papers (2020-03-30T08:40:35Z)
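The matrix power normalization mentioned above can be illustrated with the common square-root case: given local features X, the second-order (bilinear) feature X^T X is replaced by its matrix square root. The eigendecomposition route below is a minimal sketch of that operation, not the MOMN algorithm itself.

```python
import numpy as np

def matrix_sqrt_normalize(X):
    """Matrix square-root normalization of a bilinear (second-order) feature.

    Given local features X (n x d), the bilinear feature is A = X^T X.
    Matrix power normalization with exponent 1/2 replaces A with A^{1/2},
    computed here by eigendecomposition (A is symmetric PSD).
    """
    A = X.T @ X
    w, V = np.linalg.eigh(A)
    w = np.clip(w, 0.0, None)  # guard against tiny negative eigenvalues
    return V @ np.diag(np.sqrt(w)) @ V.T

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 4))
S = matrix_sqrt_normalize(X)
# S @ S recovers X^T X up to numerical error.
print(np.abs(S @ S - X.T @ X).max())
```

Squaring the result recovers the original bilinear feature, which is what makes the square root a "normalization": it compresses the spread of eigenvalues and thereby stabilizes the second-order statistics.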
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.