Generalized Factor Neural Network Model for High-dimensional Regression
- URL: http://arxiv.org/abs/2502.11310v2
- Date: Thu, 13 Mar 2025 19:10:17 GMT
- Title: Generalized Factor Neural Network Model for High-dimensional Regression
- Authors: Zichuan Guo, Mihai Cucuringu, Alexander Y. Shestopaloff,
- Abstract summary: We tackle the challenges of modeling high-dimensional data sets with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships. Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression.
- Score: 50.554377879576066
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We tackle the challenges of modeling high-dimensional data sets, particularly those with latent low-dimensional structures hidden within complex, non-linear, and noisy relationships. Our approach enables a seamless integration of concepts from non-parametric regression, factor models, and neural networks for high-dimensional regression. It introduces PCA and Soft PCA layers, which can be embedded at any stage of a neural network architecture, allowing the model to alternate between factor modeling and non-linear transformations. This flexibility makes our method especially effective for processing hierarchical compositional data. We explore our own and other techniques for imposing low-rank structures on neural networks, and examine how architectural design impacts model performance. The effectiveness of our method is demonstrated through simulation studies, as well as applications to forecasting future price movements of equity ETF indices and nowcasting with macroeconomic data.
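As a rough sketch of this idea (our illustration, not the authors' implementation), a PCA-style layer can be dropped between the non-linear blocks of an ordinary PyTorch network; the layer name, the batch-wise SVD estimate of the principal directions, and all widths below are assumptions.

```python
# Hedged sketch: a PCA layer that projects activations onto their
# top-k principal directions, embedded between non-linear blocks.
import torch
import torch.nn as nn

class PCALayer(nn.Module):
    """Batch-wise PCA projection; a 'soft' variant could instead learn
    the projection matrix end-to-end (hypothetical interface)."""

    def __init__(self, k):
        super().__init__()
        self.k = k

    def forward(self, x):
        x_centered = x - x.mean(dim=0, keepdim=True)   # center the batch
        _, _, vh = torch.linalg.svd(x_centered, full_matrices=False)
        return x_centered @ vh[: self.k].T             # (n, k) factors

# Alternate factor extraction and non-linear maps, as the abstract
# describes; the widths and k below are illustrative only.
model = nn.Sequential(
    nn.Linear(100, 64), nn.ReLU(),
    PCALayer(k=8),
    nn.Linear(8, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
```

A learned ("soft") variant could replace the batch-wise SVD with a trainable low-rank projection matrix, optionally regularized toward orthogonality.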
Related papers
- Instruction-Guided Autoregressive Neural Network Parameter Generation [49.800239140036496]
We propose IGPG, an autoregressive framework that unifies parameter synthesis across diverse tasks and architectures.
By autoregressively generating tokens of neural network weights, IGPG ensures inter-layer coherence and enables efficient adaptation across models and datasets.
Experiments on multiple datasets demonstrate that IGPG consolidates diverse pretrained models into a single, flexible generative framework.
arXiv Detail & Related papers (2025-04-02T05:50:19Z)
- Multiscale Analysis of Woven Composites Using Hierarchical Physically Recurrent Neural Networks [0.0]
Multiscale homogenization of woven composites requires detailed micromechanical evaluations.
This study introduces a Hierarchical Physically Recurrent Neural Network (HPRNN) employing two levels of surrogate modeling.
arXiv Detail & Related papers (2025-03-06T19:02:32Z)
- A data augmentation strategy for deep neural networks with application to epidemic modelling [2.4537195774258556]
We present a proof of concept demonstrating the application of data-driven methods and deep neural networks to a recently introduced SIR-type model.
Our results show that a robust data augmentation strategy through suitable data-driven models can improve the reliability of Feed-Forward Neural Networks (FNNs) and Autoregressive Networks (NARs).
This approach enhances the ability to handle nonlinear dynamics and offers scalable, data-driven solutions for epidemic forecasting.
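One plausible reading of that strategy (our assumption, not the paper's exact pipeline) is to enlarge scarce epidemic data with trajectories simulated from the classic SIR equations under perturbed parameters; all parameter ranges below are illustrative.

```python
# Hedged sketch: build synthetic SIR trajectories for data augmentation.
# SIR dynamics: dS/dt = -b*S*I/N, dI/dt = b*S*I/N - g*I, dR/dt = g*I.
import numpy as np
from scipy.integrate import solve_ivp

def sir_rhs(t, y, beta, gamma, n):
    s, i, r = y
    return [-beta * s * i / n, beta * s * i / n - gamma * i, gamma * i]

def simulate_sir(beta, gamma, n=1e6, i0=10.0, days=100):
    sol = solve_ivp(sir_rhs, (0, days), [n - i0, i0, 0.0],
                    args=(beta, gamma, n), t_eval=np.arange(days))
    return sol.y.T  # (days, 3): S, I, R per day

rng = np.random.default_rng(0)
augmented = np.stack([simulate_sir(beta=rng.uniform(0.2, 0.5),
                                   gamma=rng.uniform(0.05, 0.2))
                      for _ in range(50)])  # (50, 100, 3) synthetic set
```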
arXiv Detail & Related papers (2025-02-28T13:24:49Z)
- A Survey of Model Architectures in Information Retrieval [64.75808744228067]
We focus on two key aspects: backbone models for feature extraction and end-to-end system architectures for relevance estimation.
We trace the development from traditional term-based methods to modern neural approaches, particularly highlighting the impact of transformer-based models and subsequent large language models (LLMs).
We conclude by discussing emerging challenges and future directions, including architectural optimizations for performance and scalability, handling of multimodal, multilingual data, and adaptation to novel application domains beyond traditional search paradigms.
arXiv Detail & Related papers (2025-02-20T18:42:58Z)
- A Riemannian Framework for Learning Reduced-order Lagrangian Dynamics [18.151022395233152]
We propose a novel geometric network architecture to learn physically-consistent reduced-order dynamic parameters.
Our approach enables accurate long-term predictions of the high-dimensional dynamics of rigid and deformable systems.
arXiv Detail & Related papers (2024-10-24T15:53:21Z)
- Physics-Informed Machine Learning for Seismic Response Prediction of Nonlinear Steel Moment Resisting Frame Structures [6.483318568088176]
The PiML method integrates scientific principles and physical laws into deep neural networks to model the seismic responses of nonlinear structures.
Manipulating the equation of motion helps the model learn system nonlinearities and confines solutions to physically interpretable results.
The resulting model handles complex data better than existing physics-guided LSTM models and outperforms other non-physics-based data-driven networks.
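The sketch below shows what such an equation-of-motion residual might look like in PyTorch; the single-degree-of-freedom form m*a + c*v + f_r(x) = F(t), the network outputs, and the finite-difference derivatives are our assumptions, not the paper's implementation.

```python
# Hedged sketch: penalize violations of the equation of motion
# m*a + c*v + f_r = F(t), with displacement x and restoring force f_r
# predicted by a network over T time steps (shapes: batch x T).
import torch

def physics_loss(x_pred, f_r_pred, force, dt, m=1.0, c=0.05):
    # Central finite differences for velocity and acceleration.
    v = (x_pred[:, 2:] - x_pred[:, :-2]) / (2 * dt)
    a = (x_pred[:, 2:] - 2 * x_pred[:, 1:-1] + x_pred[:, :-2]) / dt ** 2
    # Residual of the equation of motion on interior time steps only.
    residual = m * a + c * v + f_r_pred[:, 1:-1] - force[:, 1:-1]
    return (residual ** 2).mean()

# Typically combined as: total = data_loss + weight * physics_loss(...).
```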
arXiv Detail & Related papers (2024-02-28T02:16:03Z)
- Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized visual prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z)
- Application of Clustering Algorithms for Dimensionality Reduction in Infrastructure Resilience Prediction Models [4.350783459690612]
We present a clustering-based method that simultaneously mitigates the problem of high dimensionality and improves the prediction accuracy of machine learning models.
The proposed method can be used to develop decision-support tools for post-disaster recovery of infrastructure networks.
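A generic sketch of one such approach (feature clustering; the paper's exact procedure may differ) is to group correlated features with k-means and keep one aggregate per group:

```python
# Hedged sketch: cluster the columns (features) of X and replace each
# group by its mean, reducing dimension before fitting a predictor.
import numpy as np
from sklearn.cluster import KMeans

def cluster_reduce(X, n_clusters=10, seed=0):
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(X.T)  # one label per feature
    return np.column_stack([X[:, labels == k].mean(axis=1)
                            for k in range(n_clusters)])

X = np.random.default_rng(0).normal(size=(200, 300))  # toy high-dim data
X_reduced = cluster_reduce(X)  # (200, 10)
```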
arXiv Detail & Related papers (2022-05-06T15:51:05Z)
- Sparse Flows: Pruning Continuous-depth Models [107.98191032466544]
We show that pruning improves generalization for neural ODEs in generative modeling.
We also show that pruning finds minimal and efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.
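As a point of reference (standard magnitude pruning, not necessarily the paper's scheme), the sketch below zeroes the smallest weights of a vector-field network globally:

```python
# Hedged sketch: global magnitude pruning of a small network, such as
# the vector field of a neural ODE; the smallest |w| are zeroed.
import torch
import torch.nn as nn

def magnitude_prune(model, sparsity=0.98):  # mirrors the 98% figure above
    all_w = torch.cat([p.detach().abs().flatten() for p in model.parameters()])
    threshold = torch.quantile(all_w, sparsity)
    masks = []
    with torch.no_grad():
        for p in model.parameters():
            mask = (p.abs() > threshold).float()
            p.mul_(mask)        # zero pruned weights in place
            masks.append(mask)  # reapply after each update to keep them zero
    return masks

field = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))
masks = magnitude_prune(field)
```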
arXiv Detail & Related papers (2021-06-24T01:40:17Z)
- An Ode to an ODE [78.97367880223254]
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, where time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stability and effectiveness of training and provably solves the gradient vanishing-explosion problem.
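A toy numerical sketch of the nested-flow idea (our construction, not the paper's code): if the weight matrix evolves as dW/dt = W*S with S skew-symmetric, then integrating with matrix exponentials keeps W(t) on O(d) exactly.

```python
# Hedged sketch: hidden state x follows dx/dt = tanh(W(t) x), while
# W(t) flows on the orthogonal group via W <- W @ expm(dt * S).
import torch

d, steps, dt = 4, 100, 0.01
S = torch.randn(d, d)
S = S - S.T                              # skew-symmetric generator
W, x = torch.eye(d), torch.randn(d)

step = torch.linalg.matrix_exp(dt * S)   # exact orthogonal update
for _ in range(steps):
    x = x + dt * torch.tanh(W @ x)       # Euler step of the main flow
    W = W @ step                         # W remains orthogonal

print(torch.dist(W.T @ W, torch.eye(d)))  # ~0: stays on O(d)
```

Keeping W orthogonal preserves norms under the linear map, which is the usual intuition behind the vanishing/exploding-gradient claim.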
arXiv Detail & Related papers (2020-06-19T22:05:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.