Pragmatic Curiosity: A Hybrid Learning-Optimization Paradigm via Active Inference
- URL: http://arxiv.org/abs/2602.06104v1
- Date: Thu, 05 Feb 2026 18:42:29 GMT
- Title: Pragmatic Curiosity: A Hybrid Learning-Optimization Paradigm via Active Inference
- Authors: Yingke Li, Anjali Parashar, Enlu Zhou, Chuchu Fan
- Abstract summary: "Pragmatic curiosity" is a hybrid learning-optimization paradigm derived from active inference. We demonstrate the practical effectiveness and flexibility of pragmatic curiosity on various real-world hybrid tasks.
- Score: 20.135421015458817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many engineering and scientific workflows depend on expensive black-box evaluations, requiring decision-making that simultaneously improves performance and reduces uncertainty. Bayesian optimization (BO) and Bayesian experimental design (BED) offer powerful yet largely separate treatments of goal-seeking and information-seeking, providing limited guidance for hybrid settings where learning and optimization are intrinsically coupled. We propose "pragmatic curiosity," a hybrid learning-optimization paradigm derived from active inference, in which actions are selected by minimizing the expected free energy--a single objective that couples pragmatic utility with epistemic information gain. We demonstrate the practical effectiveness and flexibility of pragmatic curiosity on various real-world hybrid tasks, including constrained system identification, targeted active search, and composite optimization with unknown preferences. Across these benchmarks, pragmatic curiosity consistently outperforms strong BO-type and BED-type baselines, delivering higher estimation accuracy, better critical-region coverage, and improved final solution quality.
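The expected-free-energy objective described in the abstract can be sketched as a toy acquisition rule over a surrogate posterior. The Gaussian surrogate (mu, sigma), the candidate grid, and the noise value below are illustrative assumptions for exposition, not the paper's actual model or implementation:

```python
import numpy as np

# Toy sketch of an expected-free-energy (EFE) acquisition rule.
# The surrogate posterior (mu, sigma) and noise variance are made up
# for illustration; they stand in for, e.g., a Gaussian process posterior.
xs = np.linspace(0.0, 1.0, 201)        # candidate designs
mu = np.sin(3 * np.pi * xs)            # posterior mean of the black-box objective
sigma = 0.1 + 0.4 * xs * (1.0 - xs)    # posterior std: most uncertain mid-range
noise_var = 0.05                       # assumed observation-noise variance

# EFE couples a pragmatic term (expected utility of evaluating at x) with an
# epistemic term (expected information gain; for a Gaussian observation model
# this is 0.5 * log(1 + sigma^2 / noise_var)).
pragmatic = mu
epistemic = 0.5 * np.log1p(sigma**2 / noise_var)
efe = -(pragmatic + epistemic)         # lower EFE = more desirable action

x_next = xs[np.argmin(efe)]            # act by minimizing expected free energy
print(x_next)
```

Under these assumptions the selected design is pulled slightly away from the pure-exploitation optimum (the maximizer of `mu`) toward regions of higher posterior uncertainty, which is the learning-optimization coupling the single objective is meant to capture.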
Related papers
- A Quantum-inspired Hybrid Swarm Intelligence and Decision-Making for Multi-Criteria ADAS Calibration [0.0]
This work introduces a novel optimization framework based on Quantum-Inspired Hybrid Swarm Intelligence (QiHSI). QiHSI uses quantum-inspired mechanisms to strengthen global search capability and preserve population diversity in complex, high-dimensional decision spaces. Results show that QiHSI offers a reliable and scalable approach for intelligent ADAS calibration.
arXiv Detail & Related papers (2026-02-04T17:26:49Z)
- Surrogate Ensemble in Expensive Multi-Objective Optimization via Deep Q-Learning [17.84264663466905]
Surrogate-assisted Evolutionary Algorithms (SAEAs) have shown promising robustness in solving expensive optimization problems. A key aspect that impacts SAEAs' effectiveness is surrogate model selection, which in existing works is predominantly decided by human developers. We propose a reinforcement learning-assisted ensemble framework, termed SEEMOO, which is capable of scheduling different surrogate models within a single optimization process.
arXiv Detail & Related papers (2026-01-31T06:14:27Z)
- Preference-Guided Diffusion for Multi-Objective Offline Optimization [64.08326521234228]
We propose a preference-guided diffusion model for offline multi-objective optimization. Our guidance is a preference model trained to predict the probability that one design dominates another. Our results highlight the effectiveness of classifier-guided diffusion models in generating diverse and high-quality solutions.
arXiv Detail & Related papers (2025-03-21T16:49:38Z)
- Constrained Hybrid Metaheuristic Algorithm for Probabilistic Neural Networks Learning [0.3686808512438362]
This study investigates the potential of hybrid metaheuristic algorithms to enhance the training of Probabilistic Neural Networks (PNNs). Traditional learning methods, such as gradient-based approaches, often struggle in high-dimensional and uncertain environments. We propose the constrained Hybrid Metaheuristic (cHM) algorithm, which combines multiple population-based optimisation techniques into a unified framework.
arXiv Detail & Related papers (2025-01-26T19:49:16Z)
- Constrained Multi-objective Bayesian Optimization through Optimistic Constraints Estimation [10.77641869521259]
We propose a novel constrained multi-objective Bayesian optimization algorithm, COMBOO, that balances active learning of the level-set defined on multiple unknowns with multi-objective optimization within the feasible region. We provide both theoretical analysis and empirical evidence, demonstrating the efficacy of our approach on various synthetic benchmarks and real-world applications.
arXiv Detail & Related papers (2024-11-06T03:38:00Z)
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce a utility function, predefined by each user, that describes the trade-off between the cost and performance of BO.
We validate our algorithm on various LC datasets and find that it outperforms all the previous multi-fidelity BO and transfer-BO baselines we consider.
arXiv Detail & Related papers (2024-05-28T07:38:39Z)
- Enhanced Bayesian Optimization via Preferential Modeling of Abstract Properties [49.351577714596544]
We propose a human-AI collaborative Bayesian framework to incorporate expert preferences about unmeasured abstract properties into surrogate modeling.
We provide an efficient strategy that can also handle any incorrect/misleading expert bias in preferential judgments.
arXiv Detail & Related papers (2024-02-27T09:23:13Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Large Language Models to Enhance Bayesian Optimization [57.474613739645605]
We present LLAMBO, a novel approach that integrates the capabilities of Large Language Models (LLMs) within Bayesian optimization.
At a high level, we frame the BO problem in natural language, enabling LLMs to iteratively propose and evaluate promising solutions conditioned on historical evaluations.
Our findings illustrate that LLAMBO is effective at zero-shot warmstarting, and enhances surrogate modeling and candidate sampling, especially in the early stages of search when observations are sparse.
arXiv Detail & Related papers (2024-02-06T11:44:06Z)
- BOtied: Multi-objective Bayesian optimization with tied multivariate ranks [33.414682601242006]
In this paper, we show a natural connection between non-dominated solutions and the extreme quantile of the joint cumulative distribution function.
Motivated by this link, we propose the Pareto-compliant CDF indicator and the associated acquisition function, BOtied.
Our experiments on a variety of synthetic and real-world problems demonstrate that BOtied outperforms state-of-the-art MOBO acquisition functions.
arXiv Detail & Related papers (2023-06-01T04:50:06Z)
- Active Learning and Bayesian Optimization: a Unified Perspective to Learn with a Goal [0.0]
We propose a general classification of adaptive sampling techniques to highlight similarities and differences between the vast families of adaptive sampling, active learning, and Bayesian optimization.
We provide guidelines for applying these learning criteria, investigating the performance of different Bayesian schemes on a variety of benchmark problems.
arXiv Detail & Related papers (2023-03-02T20:22:40Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer that can have stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
To reduce the variance of the amalgamation process, we also explore methods to stabilize it by perturbing the amalgamation target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
- Constrained multi-objective optimization of process design parameters in settings with scarce data: an application to adhesive bonding [48.7576911714538]
Finding the optimal process parameters for an adhesive bonding process is challenging.
Because evaluation data are scarce, traditional evolutionary approaches (such as genetic algorithms) are ill-suited to solve the problem.
In this research, we successfully applied specific machine learning techniques to emulate the objective and constraint functions.
arXiv Detail & Related papers (2021-12-16T10:14:39Z)
- An Empirical Study of Assumptions in Bayesian Optimisation [61.19427472792523]
In this work we rigorously analyse conventional and non-conventional assumptions inherent to Bayesian optimisation.
We conclude that the majority of hyperparameter tuning tasks exhibit heteroscedasticity and non-stationarity.
We hope these findings may serve as guiding principles, both for practitioners and for further research in the field.
arXiv Detail & Related papers (2020-12-07T16:21:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the listed information and is not responsible for any consequences of its use.