Fast and Accurate Power Load Data Completion via Regularization-optimized Low-Rank Factorization
- URL: http://arxiv.org/abs/2505.19133v2
- Date: Sun, 22 Jun 2025 20:41:34 GMT
- Title: Fast and Accurate Power Load Data Completion via Regularization-optimized Low-Rank Factorization
- Authors: Yan Xia, Hao Feng, Hongwei Sun, Junjie Wang, Qicong Hu
- Abstract summary: Low-rank representation learning has emerged as a powerful tool for recovering missing values in power load data. Low-rank factorization models are favoured for their efficiency and interpretability, but are highly sensitive to the choice of regularization parameters. We propose a Regularization-optimized Low-Rank Factorization, which introduces a Proportional-Integral-Derivative controller to adaptively adjust the regularization coefficient.
- Score: 10.713082490316111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Low-rank representation learning has emerged as a powerful tool for recovering missing values in power load data due to its ability to exploit the inherent low-dimensional structures of spatiotemporal measurements. Among various techniques, low-rank factorization models are favoured for their efficiency and interpretability. However, their performance is highly sensitive to the choice of regularization parameters, which are typically fixed or manually tuned, resulting in limited generalization capability or slow convergence in practical scenarios. In this paper, we propose a Regularization-optimized Low-Rank Factorization, which introduces a Proportional-Integral-Derivative controller to adaptively adjust the regularization coefficient. Furthermore, we provide a detailed algorithmic complexity analysis, showing that our method preserves the computational efficiency of stochastic gradient descent while improving adaptivity. Experimental results on real-world power load datasets validate the superiority of our method in both imputation accuracy and training efficiency compared to existing baselines.
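The abstract describes the method only at a high level; a minimal sketch of the idea — gradient descent on a low-rank factorization with a PID loop adapting the regularization coefficient — might look as follows. The function name, PID gains, target ratio, and clamp are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pid_regularized_factorization(Y, mask, rank=5, target_ratio=0.1,
                                  lr=0.01, kp=0.5, ki=0.05, kd=0.1,
                                  epochs=50, seed=0):
    """Complete a partially observed matrix Y (mask == 1 where observed)
    with low-rank factors W, H trained by gradient descent; a PID loop
    adapts the regularization coefficient lam so the regularization term
    tracks a target fraction of the data-fitting loss.
    Hypothetical sketch: the paper's actual controller may differ."""
    rng = np.random.default_rng(seed)
    m, n = Y.shape
    W = 0.1 * rng.standard_normal((m, rank))
    H = 0.1 * rng.standard_normal((n, rank))
    lam, err_prev, err_int = 0.1, 0.0, 0.0
    for _ in range(epochs):
        R = mask * (W @ H.T - Y)             # residual on observed entries
        W -= lr * (R @ H + lam * W)          # regularized gradient steps
        H -= lr * (R.T @ W + lam * H)
        fit = np.sum(R ** 2)
        reg = np.sum(W ** 2) + np.sum(H ** 2)
        err = target_ratio - lam * reg / (fit + 1e-12)
        err_int += err
        # PID update of the regularization coefficient, clamped for stability
        lam = min(10.0, max(1e-6,
                  lam + kp * err + ki * err_int + kd * (err - err_prev)))
        err_prev = err
    return W, H, lam
```

The per-epoch cost is dominated by the same matrix products as plain SGD on the factors; the PID step adds only a few scalar operations, consistent with the claimed efficiency.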
Related papers
- A kinetic-based regularization method for data science applications [39.58317527488534]
We propose a physics-based regularization technique for function learning, inspired by statistical mechanics. By drawing an analogy between optimizing the parameters of an interpolator and minimizing the energy of a system, we introduce constraints on the lower-order moments of the data distribution. This minimizes the discrepancy between the discrete and continuum representations of the data, in turn allowing access to more favorable energy landscapes.
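A toy sketch of the moment-constraint idea (the penalty form and moment orders below are assumptions for illustration, not the paper's functional):

```python
import numpy as np

def moment_penalty(y_pred, y_data, orders=(1, 2)):
    """Hypothetical regularizer: penalize mismatch between the
    lower-order moments of the model's outputs and those of the data,
    loosely analogous to constraining moments of a distribution in the
    kinetic picture. Zero when the chosen moments already agree."""
    return sum((np.mean(y_pred ** k) - np.mean(y_data ** k)) ** 2
               for k in orders)
```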
arXiv Detail & Related papers (2025-03-06T08:12:01Z) - Bilevel Learning with Inexact Stochastic Gradients [2.247833425312671]
Bilevel learning has gained prominence in machine learning, inverse problems, and imaging applications. The large-scale nature of these problems has led to the development of inexact and computationally efficient methods.
arXiv Detail & Related papers (2024-12-16T18:18:47Z) - Gradient Descent Efficiency Index [0.0]
This study introduces a new efficiency metric, Ek, designed to quantify the effectiveness of each iteration.
The proposed metric accounts for both the relative change in error and the stability of the loss function across iterations.
Ek has the potential to guide more informed decisions in the selection and tuning of optimization algorithms in machine learning applications.
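The summary describes Ek only qualitatively; one way such a metric could combine relative error reduction with loss stability is sketched below. The formula is an assumption for illustration, not the paper's definition:

```python
import numpy as np

def efficiency_index(losses, eps=1e-12):
    """Hypothetical per-iteration efficiency index E_k: the relative
    error reduction (L_{k-1} - L_k) / L_{k-1}, damped by a stability
    factor that shrinks when consecutive losses jump around."""
    losses = np.asarray(losses, dtype=float)
    rel_gain = (losses[:-1] - losses[1:]) / (losses[:-1] + eps)
    jitter = np.abs(np.diff(losses))                 # local variability
    stability = 1.0 / (1.0 + jitter / (losses[:-1] + eps))
    return rel_gain * stability
```

Under this sketch, steadily decreasing losses yield positive values and an iteration that increases the loss yields a negative one.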
arXiv Detail & Related papers (2024-10-25T10:22:22Z) - Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis [63.66763657191476]
We show that efficient numerical training and inference algorithms, such as low-rank computation, achieve impressive performance for learning Transformer-based adaptation.
We analyze how magnitude-based pruning affects generalization while improving adaptation.
We conclude that proper magnitude-based pruning has only a slight effect on testing performance.
arXiv Detail & Related papers (2024-06-24T23:00:58Z) - Adaptive debiased SGD in high-dimensional GLMs with streaming data [4.704144189806667]
This paper introduces a novel approach to online inference in high-dimensional generalized linear models. Our method operates in a single-pass mode, making it different from existing methods that require full dataset access or large-dimensional summary statistics storage. The core of our methodological innovation lies in an adaptive descent algorithm tailored for dynamic objective functions, coupled with a novel online debiasing procedure.
arXiv Detail & Related papers (2024-05-28T15:36:48Z) - Adaptive pruning-based Newton's method for distributed learning [14.885388389215587]
This paper presents a novel and efficient algorithm named Distributed Adaptive Newton Learning (DANL). DANL attains a linear convergence rate while efficiently adapting to available resources. Experiments demonstrate that DANL achieves linear convergence with efficient communication and strong performance across different datasets.
arXiv Detail & Related papers (2023-08-20T04:01:30Z) - HyperImpute: Generalized Iterative Imputation with Automatic Model Selection [77.86861638371926]
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
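As a rough illustration of iterative column-wise imputation (a generic sketch, not HyperImpute's automatic model-selection machinery — the least-squares learner and mean initialization are assumptions):

```python
import numpy as np

def iterative_impute(X, n_iters=5):
    """Fill NaNs in X by repeatedly re-predicting each missing column
    from the other columns with a least-squares model, starting from
    column-mean initialization."""
    X = X.astype(float).copy()
    miss = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    for j in range(X.shape[1]):                # mean-initialize gaps
        X[miss[:, j], j] = col_means[j]
    for _ in range(n_iters):
        for j in range(X.shape[1]):
            if not miss[:, j].any():
                continue
            others = np.delete(X, j, axis=1)
            A = np.hstack([others, np.ones((X.shape[0], 1))])  # bias term
            obs = ~miss[:, j]
            coef, *_ = np.linalg.lstsq(A[obs], X[obs, j], rcond=None)
            X[miss[:, j], j] = A[miss[:, j]] @ coef            # re-impute
    return X
```

In a framework like the one above, the per-column least-squares model would be replaced by a learner chosen automatically for each column.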
arXiv Detail & Related papers (2022-06-15T19:10:35Z) - LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
arXiv Detail & Related papers (2020-12-21T06:40:20Z) - Self Normalizing Flows [65.73510214694987]
We propose a flexible framework for training normalizing flows by replacing expensive terms in the gradient by learned approximate inverses at each layer.
This reduces the computational complexity of each layer's exact update from $\mathcal{O}(D^3)$ to $\mathcal{O}(D^2)$.
We show experimentally that such models are remarkably stable and optimize to similar data likelihood values as their exact gradient counterparts.
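A minimal numerical sketch of the trick above, assuming the expensive term is the gradient $\mathrm{inv}(W)^\top$ of $\log|\det W|$: a learned matrix R approximates the inverse and R.T stands in for it, while R itself is refined by a reconstruction penalty requiring only matrix products, no inversion. Names and the update rule are illustrative, not the authors' full training loop:

```python
import numpy as np

def exact_logdet_grad(W):
    """Exact gradient of log|det W| w.r.t. W: inv(W).T (needs a matrix
    inversion)."""
    return np.linalg.inv(W).T

def self_normalizing_step(W, R, lr=0.1):
    """One sketch step: use R.T as a cheap stand-in for inv(W).T, then
    pull R toward the true inverse by a gradient step on ||W R - I||^2."""
    approx_grad = R.T                        # replaces inv(W).T
    E = W @ R - np.eye(W.shape[0])           # inverse-consistency error
    R = R - lr * (W.T @ E)                   # gradient of ||W R - I||^2 in R
    return approx_grad, R
```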
arXiv Detail & Related papers (2020-11-14T09:51:51Z) - A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer, the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization by general optimization algorithms.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.