An Adaptive Alternating-direction-method-based Nonnegative Latent Factor Model
- URL: http://arxiv.org/abs/2204.04843v1
- Date: Mon, 11 Apr 2022 03:04:26 GMT
- Title: An Adaptive Alternating-direction-method-based Nonnegative Latent Factor Model
- Authors: Yurong Zhong and Xin Luo
- Abstract summary: An alternating-direction-method-based nonnegative latent factor model can perform efficient representation learning on a high-dimensional and incomplete (HDI) matrix.
This paper proposes an Adaptive Alternating-direction-method-based Nonnegative Latent Factor (A2NLF) model, whose hyper-parameter adaptation is implemented following the principle of particle swarm optimization.
Empirical studies on nonnegative HDI matrices generated by industrial applications indicate that A2NLF outperforms several state-of-the-art models in computational and storage efficiency, while maintaining highly competitive estimation accuracy for an HDI matrix's missing data.
- Score: 2.857044909410376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An alternating-direction-method-based nonnegative latent factor model can
perform efficient representation learning on a high-dimensional and incomplete
(HDI) matrix. However, it introduces multiple hyper-parameters into the
learning process, which should be chosen with care to enable its superior
performance; hyper-parameter adaptation is therefore desired to further
enhance its scalability. To address this issue, this paper proposes an
Adaptive Alternating-direction-method-based Nonnegative Latent Factor (A2NLF)
model, whose hyper-parameter adaptation is implemented following the principle
of particle swarm optimization. Empirical studies on nonnegative HDI matrices
generated by industrial applications indicate that A2NLF outperforms several
state-of-the-art models in computational and storage efficiency, while
maintaining highly competitive estimation accuracy for an HDI matrix's
missing data.
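As a rough sketch of the adaptation principle only (not A2NLF's actual update rules, which the abstract does not detail), the following minimal particle swarm tunes two hypothetical latent-factor hyper-parameters by minimizing a stand-in validation objective; the function and variable names and the toy objective are all assumptions for illustration:

```python
import numpy as np

# Minimal particle swarm optimization (PSO) sketch for hyper-parameter
# adaptation. In a real setting, `validation_rmse` would train a latent
# factor model with the given hyper-parameters and score it on held-out
# entries of the HDI matrix; here it is a toy surrogate.
def validation_rmse(hparams):
    lam, eta = hparams  # hypothetical regularization and step-size terms
    return (lam - 0.05) ** 2 + (eta - 0.01) ** 2

rng = np.random.default_rng(0)
n_particles, n_iters, dim = 10, 50, 2
lo, hi = np.array([1e-4, 1e-4]), np.array([1.0, 1.0])

x = rng.uniform(lo, hi, size=(n_particles, dim))   # particle positions
v = np.zeros_like(x)                               # particle velocities
pbest = x.copy()                                   # per-particle best
pbest_f = np.array([validation_rmse(p) for p in x])
gbest = pbest[np.argmin(pbest_f)]                  # swarm-wide best

w, c1, c2 = 0.7, 1.5, 1.5  # inertia and acceleration coefficients
for _ in range(n_iters):
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = np.clip(x + v, lo, hi)  # keep hyper-parameters in a valid range
    f = np.array([validation_rmse(p) for p in x])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], f[improved]
    gbest = pbest[np.argmin(pbest_f)]

print("adapted hyper-parameters:", gbest)
```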
Related papers
- Spectrum-Aware Parameter Efficient Fine-Tuning for Diffusion Models [73.88009808326387]
We propose a novel spectrum-aware adaptation framework for generative models.
Our method adjusts both the singular values and the corresponding basis vectors of pretrained weights.
We introduce Spectral Ortho Decomposition Adaptation (SODA), which balances computational efficiency and representation capacity.
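As a generic illustration of the spectrum-aware idea (not the SODA algorithm itself, whose details the summary does not give), one can decompose a pretrained weight matrix and treat a small correction to its singular values as the only trainable parameters:

```python
import numpy as np

# Generic spectrum-aware adaptation sketch: decompose a "pretrained"
# weight matrix, then adapt only a shift on its singular values.
# Adapting the basis vectors as well, as the paper describes, would add
# orthogonally-constrained updates to U and Vh.
W = np.random.default_rng(3).standard_normal((8, 4))  # pretrained weights
U, s, Vh = np.linalg.svd(W, full_matrices=False)

delta_s = np.zeros_like(s)  # trainable spectral shift, initialized at zero
delta_s[0] = 0.25           # e.g., where a gradient step might move it

W_adapted = (U * (s + delta_s)) @ Vh   # re-compose with adapted spectrum
print(np.linalg.norm(W_adapted - W))   # change is confined to the spectrum
```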
arXiv Detail & Related papers (2024-05-31T17:43:35Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
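For reference, the OWA aggregation itself is a few lines: a fixed weight vector is applied to the sorted values, and the sort is precisely what makes the objective nondifferentiable. A minimal sketch (names are illustrative, not from the paper):

```python
import numpy as np

def owa(x, w):
    """Ordered Weighted Averaging: w is applied to the values of x sorted
    in decreasing order, so w encodes how much each *rank* matters.
    Weights that grow toward later ranks put more mass on the smallest
    (worst-off) values, giving a fairness-oriented aggregation.
    The sort makes this function nondifferentiable in x."""
    x_sorted = np.sort(x)[::-1]  # decreasing order
    return float(np.dot(w, x_sorted))

# Three objective values aggregated with fairness-leaning weights.
print(owa(np.array([0.9, 0.2, 0.5]), np.array([0.2, 0.3, 0.5])))
```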
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Active-Learning-Driven Surrogate Modeling for Efficient Simulation of Parametric Nonlinear Systems [0.0]
In the absence of governing equations, we need to construct the parametric reduced-order surrogate model in a non-intrusive fashion.
Our work provides a non-intrusive optimality criterion to efficiently populate the parameter snapshots.
We propose an active-learning-driven surrogate model using kernel-based shallow neural networks.
arXiv Detail & Related papers (2023-06-09T18:01:14Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions are placed on the parameters through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- A Practical Second-order Latent Factor Model via Distributed Particle Swarm Optimization [5.199454801210509]
Hessian-free (HF) optimization is an efficient method for utilizing second-order information of an LF model's objective function.
A practical SLF (PSLF) model is proposed in this work.
Experiments on real HiDS data sets indicate that the PSLF model has a competitive advantage over state-of-the-art models in data representation ability.
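As background on the Hessian-free idea (a generic sketch, not the PSLF model itself): HF methods avoid forming the Hessian and rely only on Hessian-vector products, which can be approximated from two gradient evaluations:

```python
import numpy as np

def hessian_vector_product(grad_fn, x, v, eps=1e-5):
    """Finite-difference Hessian-vector product H(x) @ v using only two
    gradient calls -- the core trick behind Hessian-free optimization."""
    return (grad_fn(x + eps * v) - grad_fn(x - eps * v)) / (2.0 * eps)

# Toy quadratic f(x) = 0.5 * x^T A x, whose gradient is A @ x and whose
# Hessian is A, so the approximation can be checked exactly.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
grad_fn = lambda x: A @ x
x0, v = np.array([1.0, -1.0]), np.array([0.5, 2.0])
print(hessian_vector_product(grad_fn, x0, v))  # approximately A @ v
print(A @ v)
```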
arXiv Detail & Related papers (2022-08-12T05:49:08Z)
- Extension of Dynamic Mode Decomposition for dynamic systems with incomplete information based on t-model of optimal prediction [69.81996031777717]
The Dynamic Mode Decomposition has proved to be a very efficient technique for studying dynamic data.
The application of this approach becomes problematic if the available data is incomplete because some smaller-scale dimensions are either missing or unmeasured.
We consider a first-order approximation of the Mori-Zwanzig decomposition, state the corresponding optimization problem and solve it with the gradient-based optimization method.
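For reference, a minimal exact DMD on complete snapshot data looks as follows; this is the baseline the paper extends, not its t-model correction:

```python
import numpy as np

def dmd(X, Xp, r):
    """Exact Dynamic Mode Decomposition of snapshot pairs (X, Xp), where
    Xp holds the same snapshots advanced one step in time. Returns the
    eigenvalues and modes of the best-fit linear operator Xp ~ A @ X,
    computed in a rank-r subspace."""
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]          # rank-r truncation
    A_tilde = U.conj().T @ Xp @ Vh.conj().T / s    # projected operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = Xp @ Vh.conj().T / s @ W               # exact DMD modes
    return eigvals, modes

# Toy linear system x_{k+1} = A x_k observed through 50 snapshots.
rng = np.random.default_rng(1)
A = np.array([[0.9, -0.2], [0.1, 0.95]])
X = np.empty((2, 50))
X[:, 0] = rng.standard_normal(2)
for k in range(49):
    X[:, k + 1] = A @ X[:, k]

eigvals, _ = dmd(X[:, :-1], X[:, 1:], r=2)
print(np.sort(eigvals), np.sort(np.linalg.eigvals(A)))  # should match
```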
arXiv Detail & Related papers (2022-02-23T11:23:59Z)
- Variational Inference with NoFAS: Normalizing Flow with Adaptive Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flow are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternately updates the normalizing flow parameters and the weights of a neural network surrogate model.
arXiv Detail & Related papers (2021-08-28T14:31:45Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Model-based Clustering using Automatic Differentiation: Confronting Misspecification and High-Dimensional Data [6.053629733936546]
We study two practically important cases of model-based clustering using Gaussian Mixture Models.
We show that EM has better clustering performance, measured by Adjusted Rand Index, compared to Gradient Descent in cases of misspecification.
We propose a new penalty term for the likelihood based on the Kullback-Leibler divergence between pairs of fitted components.
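The building block of such a pairwise penalty, the closed-form KL divergence between two Gaussian components, is straightforward to compute (a generic sketch; the paper's exact penalty weighting is not reproduced here):

```python
import numpy as np

def kl_gaussian(mu1, S1, mu2, S2):
    """Closed-form KL( N(mu1, S1) || N(mu2, S2) ) between two multivariate
    Gaussians -- the quantity a pairwise component penalty aggregates over
    all pairs of fitted mixture components."""
    k = mu1.shape[0]
    S2_inv = np.linalg.inv(S2)
    diff = mu2 - mu1
    return 0.5 * (np.trace(S2_inv @ S1)
                  + diff @ S2_inv @ diff
                  - k
                  + np.log(np.linalg.det(S2) / np.linalg.det(S1)))

# Example: KL between two fitted components; a pairwise penalty would sum
# such terms over all ordered pairs of components.
print(kl_gaussian(np.zeros(2), np.eye(2),
                  np.array([3.0, 0.0]), 2.0 * np.eye(2)))
```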
arXiv Detail & Related papers (2020-07-08T10:56:05Z)
- Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
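The underlying identity is that, restricted to the Lasso's active set, the KKT conditions define the solution implicitly, so its derivative with respect to the regularization parameter has a closed form on the support. A minimal numpy sketch under scikit-learn's Lasso scaling (illustrative, not the paper's algorithm):

```python
import numpy as np
from sklearn.linear_model import Lasso

# For sklearn's objective (1/(2n))||y - Xb||^2 + alpha*||b||_1, the KKT
# conditions on the active set S give
#   X_S^T X_S b_S = X_S^T y - n * alpha * sign(b_S),
# so  d b_S / d alpha = -n * (X_S^T X_S)^{-1} sign(b_S).
# The sparsity of b keeps this linear system small.
rng = np.random.default_rng(2)
n, p = 100, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + 0.1 * rng.standard_normal(n)

alpha = 0.1
beta = Lasso(alpha=alpha, fit_intercept=False).fit(X, y).coef_
S = np.flatnonzero(beta)                       # active set
XS = X[:, S]
dbeta_S = -n * np.linalg.solve(XS.T @ XS, np.sign(beta[S]))

# Hypergradient of a validation loss via the chain rule.
Xv = rng.standard_normal((50, p))
yv = Xv @ beta_true
resid = Xv[:, S] @ beta[S] - yv
dval_dalpha = (Xv[:, S].T @ resid / len(yv)) @ dbeta_S
print(dval_dalpha)
```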
arXiv Detail & Related papers (2020-02-20T18:43:42Z)