Bayesian learning of feature spaces for multitask problems
- URL: http://arxiv.org/abs/2209.03028v2
- Date: Fri, 3 Nov 2023 10:13:50 GMT
- Title: Bayesian learning of feature spaces for multitask problems
- Authors: Carlos Sevilla-Salcedo, Ascensión Gallardo-Antolín, Vanessa
Gómez-Verdejo, Emilio Parrado-Hernández
- Abstract summary: This paper introduces a novel approach for multi-task regression that connects Kernel Machines (KMs) and Extreme Learning Machines (ELMs).
The proposed models, termed RFF-BLR, stand on a Bayesian framework that simultaneously addresses two main design goals.
The experimental results show that this framework can lead to significant performance improvements compared to the state-of-the-art methods in nonlinear regression.
- Score: 0.11538034264098687
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel approach for multi-task regression that
connects Kernel Machines (KMs) and Extreme Learning Machines (ELMs) through the
exploitation of the Random Fourier Features (RFFs) approximation of the RBF
kernel. In this sense, one of the contributions of this paper shows that for
the proposed models, the KM and the ELM formulations can be regarded as two
sides of the same coin. These proposed models, termed RFF-BLR, stand on a
Bayesian framework that simultaneously addresses two main design goals. On the
one hand, it fits multitask regressors based on KMs endowed with RBF kernels.
On the other hand, it enables the introduction of a common-across-tasks prior
that promotes multioutput sparsity in the ELM view. This Bayesian approach
facilitates the simultaneous consideration of both the KM and ELM perspectives
enabling (i) the optimisation of the RBF kernel parameter $\gamma$ within a
probabilistic framework, (ii) the optimisation of the model complexity, and
(iii) an efficient transfer of knowledge across tasks. The experimental results
show that this framework can lead to significant performance improvements
compared to the state-of-the-art methods in multitask nonlinear regression.
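As a minimal illustration of the KM/ELM connection the abstract describes (a sketch, not the authors' RFF-BLR code), the following NumPy snippet builds the Random Fourier Features approximation of the RBF kernel $k(x,y) = \exp(-\gamma \|x-y\|^2)$: the explicit random feature map $\phi$ turns the kernel machine into a finite-width, ELM-style linear model whose inner products approximate the kernel.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, D, d = 0.5, 2000, 3     # RBF width, number of random features, input dim

# For k(x, y) = exp(-gamma * ||x - y||^2), Bochner's theorem gives
# w ~ N(0, 2*gamma*I), b ~ Uniform(0, 2*pi), phi(x) = sqrt(2/D) * cos(W x + b).
W = rng.normal(scale=np.sqrt(2 * gamma), size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)

def phi(X):
    """Explicit random feature map: phi(x) . phi(y) ~ k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

x, y = rng.normal(size=d), rng.normal(size=d)
exact = np.exp(-gamma * np.sum((x - y) ** 2))   # kernel-machine view
approx = phi(x) @ phi(y)                         # ELM / linear-model view
print(exact, approx)                             # close for large D
```

A Bayesian linear regressor fitted on top of `phi(X)` is the ELM side of the coin; its predictions converge to those of the RBF kernel machine as `D` grows, which is what lets a sparsity prior on the feature weights act across all tasks at once.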
Related papers
- Multi-View Oriented GPLVM: Expressiveness and Efficiency [8.459922325396155]
We introduce a new duality between the spectral density and the kernel function.
We derive a generic and expressive kernel termed Next-Gen Spectral Mixture (NG-SM) for MV-GPLVMs.
Our proposed method consistently outperforms state-of-the-art models in learning meaningful latent representations.
arXiv Detail & Related papers (2025-02-12T09:49:25Z)
- One Class Restricted Kernel Machines [0.0]
Restricted kernel machines (RKMs) have demonstrated a significant impact in enhancing generalization ability in the field of machine learning.
RKMs' efficacy can be compromised by the presence of outliers and other forms of contamination within the dataset.
To address this critical issue and improve the robustness of the model, we propose a novel one-class RKM (OCRKM).
In the framework of OCRKM, we employ an energy function akin to that of the RBM, which integrates both visible and hidden variables in a nonprobabilistic setting.
arXiv Detail & Related papers (2025-02-11T07:11:20Z)
- Intuition-aware Mixture-of-Rank-1-Experts for Parameter Efficient Finetuning [50.73666458313015]
Large Language Models (LLMs) have demonstrated significant potential in performing multiple tasks in multimedia applications.
MoE has emerged as a promising solution, with its sparse architecture enabling effective task decoupling.
Intuition-MoR1E achieves superior efficiency and 2.15% overall accuracy improvement across 14 public datasets.
arXiv Detail & Related papers (2024-04-13T12:14:58Z)
- Distribution-Dependent Rates for Multi-Distribution Learning [26.38831409926518]
The recent multi-distribution learning (MDL) framework tackles this objective through dynamic interaction with the environment.
We provide distribution-dependent guarantees in the MDL regime that scale with suboptimality gaps and yield superior dependence on the sample size.
We devise an adaptive optimistic algorithm, LCB-DR, that showcases enhanced dependence on the gaps, mirroring the contrast between uniform and optimistic allocation in the multi-armed bandit literature.
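The contrast between uniform and optimistic allocation that LCB-DR mirrors can be illustrated with a toy two-armed bandit (a hedged sketch of the standard UCB idea, not the paper's algorithm): uniform allocation splits pulls evenly, while optimistic allocation concentrates pulls on the arm with the highest upper confidence bound.

```python
import numpy as np

rng = np.random.default_rng(1)
means, T = np.array([0.5, 0.6]), 5000   # two Bernoulli arms, horizon T

def run(optimistic):
    counts = np.ones(2)                          # one forced pull per arm
    sums = rng.binomial(1, means).astype(float)  # rewards from forced pulls
    for t in range(2, T):
        if optimistic:
            # UCB-style optimistic allocation: empirical mean + exploration bonus
            ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
            a = int(np.argmax(ucb))
        else:
            a = t % 2                            # uniform allocation: alternate arms
        sums[a] += rng.binomial(1, means[a])
        counts[a] += 1
    return sums.sum() / T                        # average reward over the horizon

reward_uniform = run(optimistic=False)
reward_ucb = run(optimistic=True)
print(reward_uniform, reward_ucb)
```

Uniform allocation averages near the midpoint of the arm means, while the optimistic policy concentrates on the better arm; distribution-dependent rates sharpen precisely when such suboptimality gaps are large.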
arXiv Detail & Related papers (2023-12-20T15:50:16Z)
- Low-Rank Multitask Learning based on Tensorized SVMs and LSSVMs [65.42104819071444]
Multitask learning (MTL) leverages task-relatedness to enhance performance.
We employ high-order tensors, with each mode corresponding to a task index, to naturally represent tasks referenced by multiple indices.
We propose a general framework of low-rank MTL methods with tensorized support vector machines (SVMs) and least squares support vector machines (LSSVMs).
arXiv Detail & Related papers (2023-08-30T14:28:26Z)
- Heterogeneous Multi-Task Gaussian Cox Processes [61.67344039414193]
We present a novel extension of multi-task Gaussian Cox processes for modeling heterogeneous correlated tasks jointly.
A MOGP prior over the parameters of the dedicated likelihoods for classification, regression and point process tasks can facilitate sharing of information between heterogeneous tasks.
We derive a mean-field approximation to realize closed-form iterative updates for estimating model parameters.
arXiv Detail & Related papers (2023-08-29T15:01:01Z)
- Efficient Alternating Minimization Solvers for Wyner Multi-View Unsupervised Learning [0.0]
We propose two novel formulations that enable the development of computationally efficient solvers based on the alternating minimization principle.
The proposed solvers offer computational efficiency, theoretical convergence guarantees, local-minima complexity that scales with the number of views, and exceptional accuracy compared with state-of-the-art techniques.
arXiv Detail & Related papers (2023-03-28T10:17:51Z)
- Multi-Modal Mutual Information Maximization: A Novel Approach for Unsupervised Deep Cross-Modal Hashing [73.29587731448345]
We propose a novel method, dubbed Cross-Modal Info-Max Hashing (CMIMH).
We learn informative representations that can preserve both intra- and inter-modal similarities.
The proposed method consistently outperforms other state-of-the-art cross-modal retrieval methods.
arXiv Detail & Related papers (2021-12-13T08:58:03Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Efficient semidefinite-programming-based inference for binary and multi-class MRFs [83.09715052229782]
We propose an efficient method for computing the partition function or MAP estimate in a pairwise MRF.
We extend semidefinite relaxations from the typical binary MRF to the full multi-class setting, and develop a compact semidefinite relaxation that can again be solved efficiently with the same solver.
arXiv Detail & Related papers (2020-12-04T15:36:29Z)
- dMFEA-II: An Adaptive Multifactorial Evolutionary Algorithm for Permutation-based Discrete Optimization Problems [6.943742860591444]
We propose the first adaptation of the recently introduced Multifactorial Evolutionary Algorithm II (MFEA-II) to permutation-based discrete environments.
The performance of the proposed solver has been assessed over 5 different multitasking setups.
arXiv Detail & Related papers (2020-04-14T14:42:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.