GRVFL-MV: Graph Random Vector Functional Link Based on Multi-View Learning
- URL: http://arxiv.org/abs/2409.04743v2
- Date: Fri, 4 Oct 2024 09:05:45 GMT
- Title: GRVFL-MV: Graph Random Vector Functional Link Based on Multi-View Learning
- Authors: M. Tanveer, R. K. Sharma, M. Sajid, A. Quadir
- Abstract summary: A novel graph random vector functional link based on multi-view learning (GRVFL-MV) model is proposed.
The proposed model is trained on multiple views, incorporating the concept of multiview learning (MVL).
It also incorporates the geometrical properties of all the views using the graph embedding (GE) framework.
- Score: 0.2999888908665658
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The classification performance of the random vector functional link (RVFL), a randomized neural network, has been widely acknowledged. However, due to its shallow learning nature, RVFL often fails to consider all the relevant information available in a dataset. Additionally, it overlooks the geometrical properties of the dataset. To address these limitations, a novel graph random vector functional link based on multi-view learning (GRVFL-MV) model is proposed. The proposed model is trained on multiple views, incorporating the concept of multiview learning (MVL), and it also incorporates the geometrical properties of all the views using the graph embedding (GE) framework. The fusion of RVFL networks, MVL, and the GE framework enables our proposed model to achieve the following: i) efficient learning: by leveraging the topology of RVFL, our proposed model can efficiently capture nonlinear relationships within the multi-view data, facilitating accurate predictions; ii) comprehensive representation: fusing information from diverse perspectives enhances the proposed model's ability to capture complex patterns and relationships within the data, thereby improving the model's overall generalization performance; and iii) structural awareness: by employing the GE framework, our proposed model leverages the original data distribution of the dataset by naturally exploiting both intrinsic and penalty subspace learning criteria. The evaluation of the proposed GRVFL-MV model on various datasets, including 27 UCI and KEEL datasets, 50 datasets from Corel5k, and 45 datasets from AwA, demonstrates its superior performance compared to baseline models. These results highlight the enhanced generalization capabilities of the proposed GRVFL-MV model across a diverse range of datasets.
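To make the ideas in the abstract concrete, below is a minimal sketch of an RVFL-style classifier with a graph-Laplacian regularizer (in the spirit of the GE framework), trained per view and fused by averaging the view scores. The k-NN intrinsic graph, the hyperparameter values, the averaging fusion, and the closed-form objective are all illustrative assumptions; the actual GRVFL-MV formulation, including its penalty graph and the coupling between views, may differ.

```python
# Illustrative sketch only: RVFL features + graph-Laplacian regularization per view,
# with late fusion by score averaging. Not the exact GRVFL-MV objective.
import numpy as np

def rvfl_features(X, W, b):
    """Random nonlinear expansion concatenated with the original features
    (the RVFL direct link)."""
    H = np.tanh(X @ W + b)
    return np.hstack([X, H])

def knn_laplacian(X, k=5):
    """Unnormalized Laplacian of a simple k-NN similarity graph, standing in
    for the intrinsic graph of the GE framework (an assumption here)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.zeros_like(d2)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]   # skip self (distance 0)
    for i, nbrs in enumerate(idx):
        A[i, nbrs] = 1.0
    A = np.maximum(A, A.T)                     # symmetrize
    return np.diag(A.sum(1)) - A               # L = D - A

def fit_view(X, Y, L, C=1.0, theta=0.1, n_hidden=64, seed=0):
    """Closed-form output weights for one view:
    min_beta ||D beta - Y||^2 + C ||beta||^2 + theta * tr(beta^T D^T L D beta)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))  # fixed random weights
    b = rng.standard_normal(n_hidden)
    D = rvfl_features(X, W, b)
    A = D.T @ D + C * np.eye(D.shape[1]) + theta * D.T @ L @ D
    beta = np.linalg.solve(A, D.T @ Y)
    return (W, b, beta)

def predict_view(X, params):
    W, b, beta = params
    return rvfl_features(X, W, b) @ beta

# Two synthetic "views" of the same 40 samples with binary one-hot labels.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 40)
Y = np.eye(2)[y]
views = [rng.standard_normal((40, 6)) + y[:, None],
         rng.standard_normal((40, 4)) - y[:, None]]

models = [fit_view(Xv, Y, knn_laplacian(Xv), seed=v) for v, Xv in enumerate(views)]
scores = sum(predict_view(Xv, m) for Xv, m in zip(views, models)) / len(views)
print("train accuracy:", (scores.argmax(1) == y).mean())
```

The standard RVFL ingredients are visible here: fixed random hidden weights, a direct link that concatenates the original features with the hidden features, and output weights obtained in closed form; the graph term simply biases the solution toward preserving the neighborhood structure of each view.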
Related papers
- Scalable Weibull Graph Attention Autoencoder for Modeling Document Networks [50.42343781348247]
We develop a graph Poisson factor analysis (GPFA) which provides analytic conditional posteriors to improve the inference accuracy.
We also extend GPFA to a multi-stochastic-layer version named graph Poisson gamma belief network (GPGBN) to capture the hierarchical document relationships at multiple semantic levels.
Our models can extract high-quality hierarchical latent document representations and achieve promising performance on various graph analytic tasks.
arXiv Detail & Related papers (2024-10-13T02:22:14Z)
- Multiview Random Vector Functional Link Network for Predicting DNA-Binding Proteins [0.0]
We propose a novel framework termed a multiview random vector functional link (MvRVFL) network, which fuses neural network architecture with multiview learning.
The proposed MvRVFL model combines the benefits of late and early fusion, allowing for distinct regularization parameters across different views.
The performance of the proposed MvRVFL model on the DBP dataset surpasses that of baseline models, demonstrating its superior effectiveness.
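As a quick, generic illustration of the early versus late fusion distinction mentioned in this summary, the sketch below contrasts concatenating the views (early fusion) with averaging per-view predictions trained under view-specific regularization strengths. The plain ridge models and the chosen regularizer values are arbitrary stand-ins, not the MvRVFL objective.

```python
# Generic early vs. late fusion illustration with per-view regularization; not MvRVFL.
import numpy as np

def ridge(X, Y, C):
    """Closed-form ridge-regression output weights."""
    return np.linalg.solve(X.T @ X + C * np.eye(X.shape[1]), X.T @ Y)

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 30)
Y = np.eye(2)[y]
X1 = rng.standard_normal((30, 5)) + y[:, None]   # view 1
X2 = rng.standard_normal((30, 3)) - y[:, None]   # view 2

# Early fusion: concatenate the views and fit one model with a shared regularizer.
Xcat = np.hstack([X1, X2])
pred_early = Xcat @ ridge(Xcat, Y, C=1.0)

# Late fusion: separate models with view-specific regularizers, averaged scores.
pred_late = (X1 @ ridge(X1, Y, C=0.1) + X2 @ ridge(X2, Y, C=10.0)) / 2

for name, p in [("early", pred_early), ("late", pred_late)]:
    print(name, "fusion accuracy:", (p.argmax(1) == y).mean())
```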
arXiv Detail & Related papers (2024-09-04T10:14:17Z)
- A Framework for Fine-Tuning LLMs using Heterogeneous Feedback [69.51729152929413]
We present a framework for fine-tuning large language models (LLMs) using heterogeneous feedback.
First, we combine the heterogeneous feedback data into a single supervision format, compatible with methods like SFT and RLHF.
Next, given this unified feedback dataset, we extract a high-quality and diverse subset to obtain performance increases.
arXiv Detail & Related papers (2024-08-05T23:20:32Z)
- StableLLaVA: Enhanced Visual Instruction Tuning with Synthesized Image-Dialogue Data [129.92449761766025]
We propose a novel data collection methodology that synchronously synthesizes images and dialogues for visual instruction tuning.
This approach harnesses the power of generative models, marrying the abilities of ChatGPT and text-to-image generative models.
Our research includes comprehensive experiments conducted on various datasets.
arXiv Detail & Related papers (2023-08-20T12:43:52Z)
- FedPerfix: Towards Partial Model Personalization of Vision Transformers in Federated Learning [9.950367271170592]
We investigate where and how to partially personalize a Vision Transformer (ViT) model.
Based on the insights that the self-attention layer and the classification head are the most sensitive parts of a ViT, we propose a novel approach called FedPerfix.
We evaluate the proposed approach on CIFAR-100, OrganAMNIST, and Office-Home datasets and demonstrate its effectiveness compared to several advanced PFL methods.
arXiv Detail & Related papers (2023-08-17T19:22:30Z)
- Challenging the Myth of Graph Collaborative Filtering: a Reasoned and Reproducibility-driven Analysis [50.972595036856035]
We present a code that successfully replicates results from six popular and recent graph recommendation models.
We compare these graph models with traditional collaborative filtering models that historically performed well in offline evaluations.
By investigating the information flow from users' neighborhoods, we aim to identify which models are influenced by intrinsic features in the dataset structure.
arXiv Detail & Related papers (2023-08-01T09:31:44Z)
- T1: Scaling Diffusion Probabilistic Fields to High-Resolution on Unified Visual Modalities [69.16656086708291]
Diffusion Probabilistic Field (DPF) models the distribution of continuous functions defined over metric spaces.
We propose a new model comprising a view-wise sampling algorithm that focuses on local structure learning.
The model can be scaled to generate high-resolution data while unifying multiple modalities.
arXiv Detail & Related papers (2023-05-24T03:32:03Z)
- Variational Flow Graphical Model [22.610974083362606]
The Variational Flow Graphical (VFG) model learns representations of high-dimensional data via a message-passing scheme.
VFGs produce a lower-dimensional representation of the data, thus overcoming the drawbacks of many flow-based models.
In experiments, VFGs achieve improved evidence lower bound (ELBO) and likelihood values on multiple datasets.
arXiv Detail & Related papers (2022-07-06T14:51:03Z)
- Relational VAE: A Continuous Latent Variable Model for Graph Structured Data [0.0]
We show applications on the problem of structured probability density modeling for simulated and real wind farm monitoring data.
We release the source code, along with the simulated datasets.
arXiv Detail & Related papers (2021-06-30T13:24:27Z)
- Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to improve the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z)