Deep Intrinsic Coregionalization Multi-Output Gaussian Process Surrogate with Active Learning
- URL: http://arxiv.org/abs/2508.16434v1
- Date: Fri, 22 Aug 2025 14:46:31 GMT
- Title: Deep Intrinsic Coregionalization Multi-Output Gaussian Process Surrogate with Active Learning
- Authors: Chun-Yi Chang, Chih-Li Sung
- Abstract summary: We propose the Deep Intrinsic Coregionalization Multi-Output Gaussian Process (deepICMGP) surrogate for computer simulation experiments involving multiple outputs. We benchmark deepICMGP against state-of-the-art models, demonstrating its competitive performance. We incorporate active learning strategies into deepICMGP to optimize sequential design tasks, enhancing its ability to efficiently select informative input locations for multi-output systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep Gaussian Processes (DGPs) are powerful surrogate models known for their flexibility and ability to capture complex functions. However, extending them to multi-output settings remains challenging due to the need for efficient dependency modeling. We propose the Deep Intrinsic Coregionalization Multi-Output Gaussian Process (deepICMGP) surrogate for computer simulation experiments involving multiple outputs, which extends the Intrinsic Coregionalization Model (ICM) by introducing hierarchical coregionalization structures across layers. This enables deepICMGP to effectively model nonlinear and structured dependencies between multiple outputs, addressing key limitations of traditional multi-output GPs. We benchmark deepICMGP against state-of-the-art models, demonstrating its competitive performance. Furthermore, we incorporate active learning strategies into deepICMGP to optimize sequential design tasks, enhancing its ability to efficiently select informative input locations for multi-output systems.
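For orientation, the Intrinsic Coregionalization Model (ICM) that deepICMGP extends couples outputs through a shared input kernel and a coregionalization matrix B, so that Cov(f_i(x), f_j(x')) = B[i, j] * k(x, x'). A minimal NumPy sketch of the ICM covariance (illustrative only, not the authors' implementation; the RBF kernel choice and the values of B are assumptions):

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel on the inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

def icm_covariance(X1, X2, B, lengthscale=1.0):
    """ICM covariance: Kronecker product of the coregionalization
    matrix B (outputs x outputs) with the input kernel."""
    return np.kron(B, rbf_kernel(X1, X2, lengthscale))

# Illustrative values: 2 correlated outputs at 3 input locations.
B = np.array([[1.0, 0.8],
              [0.8, 1.0]])  # output covariance; must be positive semi-definite
X = np.random.rand(3, 1)
K = icm_covariance(X, X, B)
print(K.shape)  # (6, 6): (2 outputs * 3 points) squared
```

deepICMGP, per the abstract, stacks such coregionalization structures hierarchically across layers rather than using a single B as above.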
Related papers
- Unlocking Generalization in Polyp Segmentation with DINO Self-Attention "keys" [1.1309064441249301]
We present a framework that leverages the intrinsic robustness of DINO self-attention "key" features for robust segmentation. Unlike traditional methods that extract tokens from the deepest layers of the Vision Transformer (ViT), our approach uses a simple convolutional decoder to predict polyp masks. Our results, supported by a comprehensive statistical analysis, demonstrate that this pipeline achieves state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2025-12-15T14:29:47Z) - Dynamic Adaptive Shared Experts with Grouped Multi-Head Attention Mixture of Experts [10.204413386807564]
We propose a Dynamic Adaptive Shared Expert and Grouped Multi-Head Attention Hybrid Model (DASG-MoE) to enhance long-sequence modeling capabilities. First, we employ the Grouped Multi-Head Attention (GMHA) mechanism to effectively reduce the computational complexity of long sequences. Second, we design a Dual-Scale Shared Expert Structure (DSSE), where shallow experts use lightweight computations to quickly respond to low-dimensional features. Third, we propose a hierarchical Adaptive Dynamic Routing (ADR) mechanism that dynamically selects expert levels based on feature complexity and task requirements.
arXiv Detail & Related papers (2025-09-05T02:49:15Z) - FindRec: Stein-Guided Entropic Flow for Multi-Modal Sequential Recommendation [57.577843653775]
We propose FindRec (Flexible unified information disentanglement for multi-modal sequential Recommendation). A Stein kernel-based Integrated Information Coordination Module (IICM) theoretically guarantees distribution consistency between multimodal features and ID streams. A cross-modal expert routing mechanism adaptively filters and combines multimodal features based on their contextual relevance.
arXiv Detail & Related papers (2025-07-07T04:09:45Z) - Graft: Integrating the Domain Knowledge via Efficient Parameter Synergy for MLLMs [56.76586846269894]
Multimodal Large Language Models (MLLMs) have achieved success across various domains. Despite its importance, the study of knowledge sharing among domain-specific MLLMs remains largely underexplored. We propose a unified parameter integration framework that enables modular composition of expert capabilities.
arXiv Detail & Related papers (2025-06-30T15:07:41Z) - Efficient Transformed Gaussian Process State-Space Models for Non-Stationary High-Dimensional Dynamical Systems [49.819436680336786]
We propose an efficient transformed Gaussian process state-space model (ETGPSSM) for scalable and flexible modeling of high-dimensional, non-stationary dynamical systems. Specifically, our ETGPSSM integrates a single shared GP with input-dependent normalizing flows, yielding an expressive implicit process prior that captures complex, non-stationary transition dynamics. Our ETGPSSM outperforms existing GPSSMs and neural network-based SSMs in terms of computational efficiency and accuracy.
arXiv Detail & Related papers (2025-03-24T03:19:45Z) - EPS-MoE: Expert Pipeline Scheduler for Cost-Efficient MoE Inference [49.94169109038806]
This paper introduces EPS-MoE, a novel expert pipeline scheduler for MoE that surpasses the existing parallelism schemes. Our results demonstrate at most 52.4% improvement in prefill throughput compared to existing parallel inference methods.
arXiv Detail & Related papers (2024-10-16T05:17:49Z) - Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
arXiv Detail & Related papers (2023-12-10T15:22:30Z) - Distributionally Robust Model-based Reinforcement Learning with Large State Spaces [55.14361269378122]
Three major challenges in reinforcement learning are complex dynamical systems with large state spaces, costly data acquisition processes, and the deviation of real-world dynamics from the training environment.
We study distributionally robust Markov decision processes with continuous state spaces under the widely used Kullback-Leibler, chi-square, and total variation uncertainty sets.
We propose a model-based approach that utilizes Gaussian Processes and the maximum variance reduction algorithm to efficiently learn multi-output nominal transition dynamics.
arXiv Detail & Related papers (2023-09-05T13:42:11Z) - Towards Efficient Modeling and Inference in Multi-Dimensional Gaussian Process State-Space Models [11.13664702335756]
We propose to integrate the efficient transformed Gaussian process (ETGP) into the GPSSM to efficiently model the transition function in high-dimensional latent state space.
We also develop a corresponding variational inference algorithm that surpasses existing methods in terms of parameter count and computational complexity.
arXiv Detail & Related papers (2023-09-03T04:34:33Z) - A Sparse Expansion For Deep Gaussian Processes [33.29293167413832]
We propose a scheme for accurate inference and efficient training based on a range of Gaussian Processes (TMGP).
Our numerical experiments on synthetic models and real datasets show the superior computational efficiency of DTMGP over existing DGP models.
arXiv Detail & Related papers (2021-12-11T00:59:33Z) - Scalable Multi-Task Gaussian Processes with Neural Embedding of Coregionalization [9.873139480223367]
Multi-task regression attempts to exploit the task similarity in order to achieve knowledge transfer across related tasks for performance improvement.
The linear model of coregionalization (LMC) is a well-known MTGP paradigm which exploits the dependency of tasks through linear combination of several independent and diverse GPs.
We develop the neural embedding of coregionalization that transforms the latent GPs into a high-dimensional latent space to induce rich yet diverse behaviors.
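The LMC construction summarized above expresses each task as a linear combination of independent latent GPs: f_t(x) = sum_q A[t, q] * u_q(x). A small sampling sketch of this idea (illustrative only, not the paper's neural-embedding variant; the lengthscales and mixing weights A are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(X1, X2, lengthscale):
    """Squared-exponential kernel on the inputs."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Q independent latent GPs with different lengthscales, mixed
# linearly into T correlated tasks: f_t(x) = sum_q A[t, q] * u_q(x).
X = np.linspace(0, 1, 50)[:, None]
lengthscales = [0.1, 0.5]              # one per latent GP (illustrative)
A = np.array([[1.0, 0.3],              # mixing weights, tasks x latents
              [0.5, 1.0]])
U = np.stack([
    rng.multivariate_normal(np.zeros(50), rbf(X, X, ls) + 1e-8 * np.eye(50))
    for ls in lengthscales
])                                     # latent GP samples, shape (Q, 50)
F = A @ U                              # correlated task outputs, shape (T, 50)
print(F.shape)  # (2, 50)
```

The ICM is the special case of this construction with a single shared input kernel, which is the structure deepICMGP deepens across layers.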
arXiv Detail & Related papers (2021-09-20T01:28:14Z) - Deep Variational Models for Collaborative Filtering-based Recommender Systems [63.995130144110156]
Deep learning provides accurate collaborative filtering models to improve recommender system results.
Our proposed models apply the variational concept to inject stochasticity into the latent space of the deep architecture.
Results show the superiority of the proposed approach in scenarios where the variational enrichment exceeds the injected noise effect.
arXiv Detail & Related papers (2021-07-27T08:59:39Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.