Adaptive Weighted LSSVM for Multi-View Classification
- URL: http://arxiv.org/abs/2512.02653v1
- Date: Tue, 02 Dec 2025 11:14:47 GMT
- Title: Adaptive Weighted LSSVM for Multi-View Classification
- Authors: Farnaz Faramarzi Lighvan, Mehrdad Asadi, Lynn Houthuys
- Abstract summary: AW-LSSVM promotes complementary learning through an iterative global coupling that makes each view focus on the hard samples of other views from previous iterations. Experiments demonstrate that AW-LSSVM outperforms existing kernel-based multi-view methods on most datasets.
- Score: 0.5161531917413708
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-view learning integrates diverse representations of the same instances to improve performance. Most existing kernel-based multi-view learning methods use fusion techniques without enforcing an explicit collaboration type across views or co-regularization, which limits global collaboration. We propose AW-LSSVM, an adaptive weighted LS-SVM that promotes complementary learning through an iterative global coupling that makes each view focus on the hard samples of other views from previous iterations. Experiments demonstrate that AW-LSSVM outperforms existing kernel-based multi-view methods on most datasets, while keeping raw features isolated, which also makes it suitable for privacy-preserving scenarios.
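The coupling described in the abstract can be sketched roughly as follows: each view trains a weighted LS-SVM by solving the standard KKT linear system, and the per-sample weights for the next round are raised on samples that the other views classified with a small margin. This is a minimal illustration under assumptions, not the paper's exact formulation; the function names, the margin-based weight-update rule, and the toy data are all hypothetical.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    # Gaussian (RBF) kernel between the rows of X and Z
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def train_wlssvm(K, y, w, C=1.0):
    """Weighted LS-SVM: solve the KKT linear system
    [[0, y^T], [y, K*yy^T + diag(1/(C w))]] [b; alpha] = [0; 1]."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = K * np.outer(y, y) + np.diag(1.0 / (C * w))
    rhs = np.concatenate([[0.0], np.ones(n)])
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def predict(K_test, y, alpha, b):
    return np.sign(K_test @ (alpha * y) + b)

# Toy problem: two noisy 2-D "views" of the same labelled instances
rng = np.random.default_rng(0)
n = 40
y = np.where(rng.random(n) > 0.5, 1.0, -1.0)
views = [y[:, None] + 0.5 * rng.normal(size=(n, 2)) for _ in range(2)]

weights = [np.ones(n) for _ in views]  # uniform sample weights initially
for it in range(3):                    # iterative coupling across views
    margins = []
    for v, X in enumerate(views):
        K = rbf_kernel(X, X)
        b, alpha = train_wlssvm(K, y, weights[v])
        margins.append(y * (K @ (alpha * y) + b))  # small margin => hard sample
    # Each view up-weights the samples the *other* views found hard
    for v in range(len(views)):
        others = np.mean([m for u, m in enumerate(margins) if u != v], axis=0)
        weights[v] = 1.0 + np.clip(1.0 - others, 0.0, 2.0)

# Final classifier for view 0, evaluated on its own training data
K0 = rbf_kernel(views[0], views[0])
b0, a0 = train_wlssvm(K0, y, weights[0])
acc = (predict(K0, y, a0, b0) == y).mean()
```

Note that each view only ever sees its own kernel matrix and the other views' margins, never their raw features, which is consistent with the privacy-preserving property the abstract mentions.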
Related papers
- Self-Supervised Learning with a Multi-Task Latent Space Objective [71.49269645849675]
Self-supervised learning (SSL) methods learn visual representations by aligning different views of the same image. We show that assigning a separate predictor to each view type stabilizes multi-crop training, resulting in significant performance gains. This yields a simple multi-task formulation of asymmetric Siamese SSL that combines global, local, and masked views into a single framework.
arXiv Detail & Related papers (2026-02-05T16:33:30Z)
- Multi-view mid fusion: a universal approach for learning in an HDLSS setting [0.0]
This paper introduces a universal approach for learning in an HDLSS setting using multi-view mid fusion techniques. It shows how existing mid fusion multi-view methods perform well in an HDLSS setting even if no inherent views are provided.
arXiv Detail & Related papers (2025-07-08T14:31:53Z) - Robust Multi-View Learning via Representation Fusion of Sample-Level Attention and Alignment of Simulated Perturbation [61.64052577026623]
Real-world multi-view datasets are often heterogeneous and imperfect.<n>We propose a novel robust MVL method (namely RML) with simultaneous representation fusion and alignment.<n>Our RML is self-supervised and can also be applied for downstream tasks as a regularization.
arXiv Detail & Related papers (2025-03-06T07:01:08Z) - Balanced Multi-view Clustering [56.17836963920012]
Multi-view clustering (MvC) aims to integrate information from different views to enhance the capability of the model in capturing the underlying data structures.<n>The widely used joint training paradigm in MvC is potentially not fully leverage the multi-view information.<n>We propose a novel balanced multi-view clustering (BMvC) method, which introduces a view-specific contrastive regularization (VCR) to modulate the optimization of each view.
arXiv Detail & Related papers (2025-01-05T14:42:47Z)
- A Novel Approach for Effective Multi-View Clustering with Information-Theoretic Perspective [24.630259061774836]
This study presents a new approach called Sufficient Multi-View Clustering (SUMVC) that examines the multi-view clustering framework from an information-theoretic standpoint.
Firstly, we develop a simple and reliable multi-view clustering method SCMVC that employs variational analysis to generate consistent information.
Secondly, we propose a sufficient representation lower bound to enhance consistent information and minimise unnecessary information among views.
arXiv Detail & Related papers (2023-09-25T09:41:11Z)
- Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z)
- Self-Learning Symmetric Multi-view Probabilistic Clustering [35.96327818838784]
Multi-view Clustering (MVC) has achieved significant progress, with many efforts dedicated to learning knowledge from multiple views.
Most existing methods are either not applicable or require additional steps for incomplete MVC.
We propose a novel unified framework for incomplete and complete MVC named self-learning symmetric multi-view probabilistic clustering.
arXiv Detail & Related papers (2023-05-12T08:27:03Z)
- Learning Visual Representation from Modality-Shared Contrastive Language-Image Pre-training [88.80694147730883]
We investigate a variety of Modality-Shared Contrastive Language-Image Pre-training (MS-CLIP) frameworks.
Under the studied conditions, we observe that a mostly unified encoder for vision and language signals outperforms all other variations that separate more parameters.
Our approach outperforms vanilla CLIP by 1.6 points in linear probing on a collection of 24 downstream vision tasks.
arXiv Detail & Related papers (2022-07-26T05:19:16Z)
- Multi-view Subspace Adaptive Learning via Autoencoder and Attention [3.8574404853067215]
We propose a new Multi-view Subspace Adaptive Learning method based on Attention and Autoencoder (MSALAA).
This method combines a deep autoencoder and a method for aligning the self-representations of various views.
We empirically observe significant improvement over existing baseline methods on six real-life datasets.
arXiv Detail & Related papers (2022-01-01T11:31:52Z)
- Embedded Deep Bilinear Interactive Information and Selective Fusion for Multi-view Learning [70.67092105994598]
We propose a novel multi-view learning framework designed to improve multi-view classification with respect to the two aspects mentioned above.
In particular, we train different deep neural networks to learn various intra-view representations.
Experiments on six publicly available datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-13T01:13:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.