Variational Exploration Module VEM: A Cloud-Native Optimization and
Validation Tool for Geospatial Modeling and AI Workflows
- URL: http://arxiv.org/abs/2311.16196v1
- Date: Sun, 26 Nov 2023 23:07:00 GMT
- Title: Variational Exploration Module VEM: A Cloud-Native Optimization and
Validation Tool for Geospatial Modeling and AI Workflows
- Authors: Julian Kuehnert (1), Hiwot Tadesse (1), Chris Dearden (2), Rosie
Lickorish (3), Paolo Fraccaro (3), Anne Jones (3), Blair Edwards (3), Sekou
L. Remy (1), Peter Melling (4), Tim Culmer (4) ((1) IBM Research, Nairobi,
Kenya, (2) STFC Hartree Centre, Warrington, UK, (3) IBM Research, Daresbury,
UK, (4) Riskaware Ltd., Bristol, UK)
- Abstract summary: Cloud-based deployments help to scale up these modeling and AI workflows.
We have developed the Variational Exploration Module, which facilitates the optimization and validation of modeling workflows deployed in the cloud.
The flexibility and robustness of the model-agnostic module are demonstrated using real-world applications.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Geospatial observations combined with computational models have become key to
understanding the physical systems of our environment and enable the design of
best practices to reduce societal harm. Cloud-based deployments help to scale
up these modeling and AI workflows. Yet, for practitioners to draw robust
conclusions, model tuning and testing are crucial, a resource-intensive process
that involves varying model input variables. We have developed the
Variational Exploration Module which facilitates the optimization and
validation of modeling workflows deployed in the cloud by orchestrating
workflow executions and using Bayesian and machine learning-based methods to
analyze model behavior. User configurations allow the combination of diverse
sampling strategies in multi-agent environments. The flexibility and robustness
of the model-agnostic module are demonstrated using real-world applications.
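As a hedged illustration of the idea described above, combining diverse sampling strategies across agents and selecting the best-scoring parameter set, here is a minimal sketch. It is not the actual VEM implementation; the objective function, parameter names, and agent designs are all invented for the example.

```python
import random

# Toy stand-in for a cloud-deployed geospatial workflow; the real module
# orchestrates remote executions, but here we score parameters locally.
def run_workflow(params):
    x, y = params["rate"], params["depth"]
    return -((x - 0.3) ** 2 + (y - 0.7) ** 2)  # higher is better

def make_random_agent(seed):
    # sampling agent 1: uniform random proposals
    rng = random.Random(seed)
    def agent(n):
        return [{"rate": rng.random(), "depth": rng.random()} for _ in range(n)]
    return agent

def grid_agent(n):
    # sampling agent 2: evenly spaced grid proposals
    side = int(n ** 0.5)
    return [{"rate": i / (side - 1), "depth": j / (side - 1)}
            for i in range(side) for j in range(side)]

def explore(agents, budget_per_agent=16):
    # gather proposals from every agent, run the workflow on each,
    # and return the best-scoring parameter set found
    trials = [p for agent in agents for p in agent(budget_per_agent)]
    scored = [(run_workflow(p), p) for p in trials]
    return max(scored, key=lambda t: t[0])

best_score, best_params = explore([make_random_agent(0), grid_agent])
```

A Bayesian strategy would replace one of the agents with a surrogate-model-driven proposer that conditions on past scores; the orchestration loop itself would stay the same.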
Related papers
- Structuring a Training Strategy to Robustify Perception Models with Realistic Image Augmentations [1.5723316845301678]
This report introduces a novel methodology for training with augmentations to enhance model robustness and performance in such conditions.
We present a comprehensive framework that includes identifying weak spots in Machine Learning models, selecting suitable augmentations, and devising effective training strategies.
Experimental results demonstrate improvements in model performance, as measured by commonly used metrics such as mean Average Precision (mAP) and mean Intersection over Union (mIoU) on open-source object detection and semantic segmentation models and datasets.
arXiv Detail & Related papers (2024-08-30T14:15:48Z)
- Meta-Learning for Airflow Simulations with Graph Neural Networks [3.52359746858894]
We present a meta-learning approach to enhance the performance of learned models on out-of-distribution (OoD) samples.
Specifically, we set the airflow simulation in CFD over various airfoils as a meta-learning problem, where each set of examples defined on a single airfoil shape is treated as a separate task.
We experimentally demonstrate the efficiency of the proposed approach for improving the OoD generalization performance of learned models.
arXiv Detail & Related papers (2023-06-18T19:25:13Z)
- Latent Variable Representation for Reinforcement Learning [131.03944557979725]
It remains unclear theoretically and empirically how latent variable models may facilitate learning, planning, and exploration to improve the sample efficiency of model-based reinforcement learning.
We provide a representation view of the latent variable models for state-action value functions, which allows both tractable variational learning algorithm and effective implementation of the optimism/pessimism principle.
In particular, we propose a computationally efficient planning algorithm with UCB exploration by incorporating kernel embeddings of latent variable models.
arXiv Detail & Related papers (2022-12-17T00:26:31Z)
- When to Update Your Model: Constrained Model-based Reinforcement Learning [50.74369835934703]
We propose a novel and general theoretical scheme for a non-decreasing performance guarantee of model-based RL (MBRL).
Our follow-up derived bounds reveal the relationship between model shifts and performance improvement.
A further example demonstrates that learning models from a dynamically-varying number of explorations benefits the eventual returns.
arXiv Detail & Related papers (2022-10-15T17:57:43Z)
- Slimmable Domain Adaptation [112.19652651687402]
We introduce a simple framework, Slimmable Domain Adaptation, to improve cross-domain generalization with a weight-sharing model bank.
Our framework surpasses other competing approaches by a very large margin on multiple benchmarks.
arXiv Detail & Related papers (2022-06-14T06:28:04Z)
- Extending Process Discovery with Model Complexity Optimization and Cyclic States Identification: Application to Healthcare Processes [62.997667081978825]
The paper presents an approach to process mining providing semi-automatic support to model optimization.
A model simplification approach is proposed, which essentially abstracts the raw model at the desired granularity.
We aim to demonstrate the capabilities of the technological solution using three datasets from different applications in the healthcare domain.
arXiv Detail & Related papers (2022-06-10T16:20:59Z)
- Neural-based Modeling for Performance Tuning of Spark Data Analytics [1.2251128138369254]
Performance modeling of cloud data analytics is crucial for performance tuning and other critical operations in the cloud.
Recent deep learning techniques can be brought to bear on automated performance modeling of cloud data analytics.
Our work provides an in-depth study of different modeling choices that suit our requirements.
arXiv Detail & Related papers (2021-01-20T14:58:55Z)
- Learning Discrete Energy-based Models via Auxiliary-variable Local Exploration [130.89746032163106]
We propose ALOE, a new algorithm for learning conditional and unconditional EBMs for discrete structured data.
We show that the energy function and sampler can be trained efficiently via a new variational form of power iteration.
We present an energy model guided fuzzer for software testing that achieves comparable performance to well engineered fuzzing engines like libfuzzer.
arXiv Detail & Related papers (2020-11-10T19:31:29Z)
- Conditional Generative Modeling via Learning the Latent Space [54.620761775441046]
We propose a novel framework for conditional generation in multimodal spaces.
It uses latent variables to model generalizable learning patterns.
At inference, the latent variables are optimized to find optimal solutions corresponding to multiple output modes.
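A minimal, hedged sketch of inference-time latent optimization: this is a generic illustration with an invented scalar "decoder", not the paper's model. Descending from different latent initializations recovers multiple output modes.

```python
def decode(z):
    # invented toy "decoder": two distinct latents map to any positive target
    return z * z

def optimize_latent(target, z0, lr=0.01, steps=500):
    # gradient descent on the squared reconstruction error (decode(z) - target)^2
    z = z0
    for _ in range(steps):
        grad = 2.0 * (decode(z) - target) * 2.0 * z  # chain rule; d decode/dz = 2z
        z -= lr * grad
    return z

# different initializations land in different modes: z near +2 and z near -2
modes = [optimize_latent(4.0, z0) for z0 in (1.0, -1.0)]
```

In a real multimodal generator the latent is a vector and the decoder a neural network, but the per-mode optimization loop has this same shape.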
arXiv Detail & Related papers (2020-10-07T03:11:34Z)
- PipeSim: Trace-driven Simulation of Large-Scale AI Operations Platforms [4.060731229044571]
We present a trace-driven simulation-based experimentation and analytics environment for large-scale AI systems.
Analytics data from a production-grade AI platform developed at IBM are used to build a comprehensive simulation model.
We implement the model in a standalone, discrete event simulator, and provide a toolkit for running experiments.
arXiv Detail & Related papers (2020-06-22T19:55:37Z)
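The standalone discrete event simulator mentioned above can be sketched generically (an assumed minimal design, not PipeSim's actual API): pending events sit in a priority queue keyed by timestamp, and the simulator pops and executes them in time order.

```python
import heapq

class Simulator:
    # minimal discrete event simulator: events are (time, seq, action)
    # tuples popped in timestamp order
    def __init__(self):
        self.queue = []
        self.now = 0.0
        self._seq = 0  # tie-breaker so equal-time events pop in FIFO order

    def schedule(self, delay, action):
        heapq.heappush(self.queue, (self.now + delay, self._seq, action))
        self._seq += 1

    def run(self):
        while self.queue:
            self.now, _, action = heapq.heappop(self.queue)
            action()

# usage: two hypothetical pipeline jobs with different durations
log = []
sim = Simulator()
sim.schedule(2.0, lambda: log.append(("job_a_done", sim.now)))
sim.schedule(1.0, lambda: log.append(("job_b_done", sim.now)))
sim.run()
```

Because simulated time advances only when an event fires, experiments over long traces run in a fraction of wall-clock time, which is what makes this style of platform analytics practical.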
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.