Enhancing Multiview Synergy: Robust Learning by Exploiting the Wave Loss Function with Consensus and Complementarity Principles
- URL: http://arxiv.org/abs/2408.06819v1
- Date: Tue, 13 Aug 2024 11:25:22 GMT
- Title: Enhancing Multiview Synergy: Robust Learning by Exploiting the Wave Loss Function with Consensus and Complementarity Principles
- Authors: A. Quadir, Mushir Akhtar, M. Tanveer
- Abstract summary: This paper introduces Wave-MvSVM, a novel multiview support vector machine framework leveraging the wave loss (W-loss) function.
Wave-MvSVM ensures a more comprehensive and resilient learning process by integrating both consensus and complementarity principles.
Extensive empirical evaluations across diverse datasets demonstrate the superior performance of Wave-MvSVM.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multiview learning (MvL) is an advancing domain in machine learning, leveraging multiple data perspectives to enhance model performance through view-consistency and view-discrepancy. Despite numerous successful multiview-based SVM models, existing frameworks predominantly focus on the consensus principle, often overlooking the complementarity principle. Furthermore, they exhibit limited robustness against noisy, error-prone, and view-inconsistent samples, which are prevalent in multiview datasets. To tackle the aforementioned limitations, this paper introduces Wave-MvSVM, a novel multiview support vector machine framework leveraging the wave loss (W-loss) function, specifically designed to harness both consensus and complementarity principles. Unlike traditional approaches that often overlook the complementary information among different views, the proposed Wave-MvSVM ensures a more comprehensive and resilient learning process by integrating both principles effectively. The W-loss function, characterized by its smoothness, asymmetry, and bounded nature, is particularly effective in mitigating the adverse effects of noisy and outlier data, thereby enhancing model stability. Theoretically, the W-loss function also exhibits a crucial classification-calibrated property, further boosting its effectiveness. Wave-MvSVM employs a between-view co-regularization term to enforce view consistency and utilizes an adaptive combination weight strategy to maximize the discriminative power of each view. The optimization problem is efficiently solved using a combination of gradient descent (GD) and the alternating direction method of multipliers (ADMM), ensuring reliable convergence to optimal solutions. Theoretical analyses, grounded in Rademacher complexity, validate the generalization capabilities of the Wave-MvSVM model. Extensive empirical evaluations across diverse datasets demonstrate the superior performance of Wave-MvSVM in comparison to existing benchmark models.
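To make the described structure concrete, the sketch below illustrates the ingredients named in the abstract: a smooth, bounded, asymmetric wave-like loss and a toy two-view objective combining per-view losses with adaptive view weights, a between-view co-regularization (consensus) penalty, and ridge terms. All names, the exact functional form of the loss, and the way the terms are combined are assumptions for illustration only, not the paper's actual formulation or solver (the paper uses GD together with ADMM).

```python
import numpy as np

def wave_like_loss(u, lam=1.0, a=1.0):
    """Illustrative smooth, bounded, asymmetric loss on the margin
    violation u = 1 - y * f(x): it vanishes for strongly correct margins
    (u -> -inf), rises smoothly near the decision boundary, and saturates
    at 1/lam so that large outliers have bounded influence."""
    u = np.asarray(u, dtype=float)
    z = lam * np.square(u) * np.exp(np.minimum(a * u, 50.0))  # clip to avoid overflow
    return (1.0 / lam) * (1.0 - 1.0 / (1.0 + z))

def two_view_objective(w1, w2, X1, X2, y, lam=1.0, a=1.0,
                       rho=0.1, gamma=(0.5, 0.5), C=1.0):
    """Toy two-view objective: view-weighted wave-like losses, a
    between-view consensus penalty, and ridge regularization."""
    f1, f2 = X1 @ w1, X2 @ w2                      # per-view decision scores
    loss1 = wave_like_loss(1.0 - y * f1, lam, a).mean()
    loss2 = wave_like_loss(1.0 - y * f2, lam, a).mean()
    consensus = np.mean((f1 - f2) ** 2)            # view-consistency term
    ridge = 0.5 * (w1 @ w1 + w2 @ w2)
    return C * (gamma[0] * loss1 + gamma[1] * loss2) + rho * consensus + ridge

# Tiny usage example on synthetic two-view data.
rng = np.random.default_rng(0)
X1 = rng.normal(size=(20, 5))                      # view 1 features
X2 = rng.normal(size=(20, 3))                      # view 2 features
y = rng.choice([-1.0, 1.0], size=20)               # labels in {-1, +1}
print(two_view_objective(np.zeros(5), np.zeros(3), X1, X2, y))
```

The bounded loss caps the contribution of any single noisy or view-inconsistent sample, while the consensus term penalizes disagreement between the two views' decision scores; the view weights `gamma` play the role of the adaptive combination weights mentioned in the abstract.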
Related papers
- On-the-fly Modulation for Balanced Multimodal Learning [53.616094855778954]
Multimodal learning is expected to boost model performance by integrating information from different modalities.
The widely-used joint training strategy leads to imbalanced and under-optimized uni-modal representations.
We propose On-the-fly Prediction Modulation (OPM) and On-the-fly Gradient Modulation (OGM) strategies to modulate the optimization of each modality.
arXiv Detail & Related papers (2024-10-15T13:15:50Z)
- On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning [85.75164588939185]
We study the discriminative probabilistic modeling problem on a continuous domain for (multimodal) self-supervised representation learning.
We conduct generalization error analysis to reveal the limitation of current InfoNCE-based contrastive loss for self-supervised representation learning.
arXiv Detail & Related papers (2024-10-11T18:02:46Z)
- Preserving Multi-Modal Capabilities of Pre-trained VLMs for Improving Vision-Linguistic Compositionality [69.76121008898677]
Fine-grained Selective Calibrated CLIP integrates local hard negative loss and selective calibrated regularization.
Our evaluations show that FSC-CLIP not only achieves compositionality on par with state-of-the-art models but also retains strong multi-modal capabilities.
arXiv Detail & Related papers (2024-10-07T17:16:20Z)
- Multiview learning with twin parametric margin SVM [0.0]
Multiview learning (MVL) seeks to leverage the benefits of diverse perspectives to complement each other.
We propose the multiview twin parametric margin support vector machine (MvTPMSVM).
MvTPMSVM constructs parametric margin hyperplanes corresponding to both classes, aiming to regulate and manage the impact of the heteroscedastic noise structure.
arXiv Detail & Related papers (2024-08-04T10:16:11Z)
- Advancing Supervised Learning with the Wave Loss Function: A Robust and Smooth Approach [0.0]
We present a novel contribution to the realm of supervised machine learning: an asymmetric loss function named wave loss.
We incorporate the proposed wave loss function into the least squares setting of support vector machines (SVM) and twin support vector machines (TSVM).
To empirically showcase the effectiveness of the proposed Wave-SVM and Wave-TSVM, we evaluate them on benchmark UCI and KEEL datasets.
arXiv Detail & Related papers (2024-04-28T07:32:00Z)
- Enhancing Fairness and Performance in Machine Learning Models: A Multi-Task Learning Approach with Monte-Carlo Dropout and Pareto Optimality [1.5498930424110338]
This study introduces an approach to mitigate bias in machine learning by leveraging model uncertainty.
Our approach utilizes a multi-task learning (MTL) framework combined with Monte Carlo (MC) Dropout to assess and mitigate uncertainty in predictions related to protected labels.
arXiv Detail & Related papers (2024-04-12T04:17:50Z)
- Mutual Information Regularization for Weakly-supervised RGB-D Salient Object Detection [33.210575826086654]
We present a weakly-supervised RGB-D salient object detection model.
We focus on effective multimodal representation learning via inter-modal mutual information regularization.
arXiv Detail & Related papers (2023-06-06T12:36:57Z)
- Trusted Multi-View Classification with Dynamic Evidential Fusion [73.35990456162745]
We propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC).
TMC provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
Both theoretical and experimental results validate the effectiveness of the proposed model in accuracy, robustness and trustworthiness.
arXiv Detail & Related papers (2022-04-25T03:48:49Z)
- Trustworthy Multimodal Regression with Mixture of Normal-inverse Gamma Distributions [91.63716984911278]
We introduce a novel Mixture of Normal-Inverse Gamma distributions (MoNIG) algorithm, which efficiently estimates uncertainty in principle for adaptive integration of different modalities and produces a trustworthy regression result.
Experimental results on both synthetic and different real-world data demonstrate the effectiveness and trustworthiness of our method on various multimodal regression tasks.
arXiv Detail & Related papers (2021-11-11T14:28:12Z)
- A Variational Information Bottleneck Approach to Multi-Omics Data Integration [98.6475134630792]
We propose a deep variational information bottleneck (IB) approach for incomplete multi-view observations.
Our method applies the IB framework on marginal and joint representations of the observed views to focus on intra-view and inter-view interactions that are relevant for the target.
Experiments on real-world datasets show that our method consistently achieves gain from data integration and outperforms state-of-the-art benchmarks.
arXiv Detail & Related papers (2021-02-05T06:05:39Z)