mlr3spatiotempcv: Spatiotemporal resampling methods for machine learning in R
- URL: http://arxiv.org/abs/2110.12674v1
- Date: Mon, 25 Oct 2021 06:48:29 GMT
- Title: mlr3spatiotempcv: Spatiotemporal resampling methods for machine learning in R
- Authors: Patrick Schratz, Marc Becker, Michel Lang and Alexander Brenning
- Abstract summary: The package integrates spatiotemporal resampling methods directly into the mlr3 machine-learning framework for R.
One advantage is the use of a consistent nomenclature in an overarching machine-learning toolkit.
- Score: 63.26453219947887
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spatial and spatiotemporal machine-learning models require a suitable
framework for their model assessment, model selection, and hyperparameter
tuning, in order to avoid error estimation bias and over-fitting. This
contribution reviews the state of the art in spatial and spatiotemporal CV, and
introduces the R package mlr3spatiotempcv as an extension package of the
machine-learning framework mlr3. Currently, various R packages implementing
different spatiotemporal partitioning strategies exist: blockCV, CAST, kmeans
and sperrorest. The goal of mlr3spatiotempcv is to gather the available
spatiotemporal resampling methods in R and make them available to users through
a simple and common interface. This is made possible by integrating the package
directly into the mlr3 machine-learning framework, which already has support
for generic non-spatiotemporal resampling methods such as random partitioning.
One advantage is the use of a consistent nomenclature in an overarching
machine-learning toolkit instead of a varying package-specific syntax, making
it easier for users to choose from a variety of spatiotemporal resampling
methods. The package avoids giving recommendations on which method to use in
practice, as this decision depends on the predictive task at hand, the
autocorrelation within the data, and the spatial structure of the sampling
design or geographic objects being studied.
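The sketch below illustrates what this common interface looks like in practice, based on the documented mlr3/mlr3spatiotempcv usage; the "ecuador" demo task and the "spcv_coords" resampling (coordinate-based k-means partitioning adapted from sperrorest) ship with mlr3spatiotempcv, while the rpart learner and the AUC measure come from mlr3 itself. Argument names may vary across package versions.

```r
# Minimal sketch: spatial cross-validation through the mlr3 interface.
# "ecuador" (landslide presence/absence) and "spcv_coords" are provided by
# mlr3spatiotempcv; "classif.rpart" and "classif.auc" are provided by mlr3.
library(mlr3)
library(mlr3spatiotempcv)

task       <- tsk("ecuador")                            # spatial classification task with coordinates
learner    <- lrn("classif.rpart", predict_type = "prob")
resampling <- rsmp("spcv_coords", folds = 5)            # coordinate-based k-means partitioning

rr <- resample(task, learner, resampling)               # run spatial cross-validation
rr$aggregate(msr("classif.auc"))                        # spatially cross-validated performance estimate
```

Switching to another partitioning strategy, for example rsmp("spcv_block") (blockCV) or rsmp("sptcv_cstf") (CAST), only changes the key passed to rsmp(); this is the common-interface advantage described in the abstract.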
Related papers
- ModelMix: A New Model-Mixup Strategy to Minimize Vicinal Risk across Tasks for Few-scribble based Cardiac Segmentation [32.19827368497988]
We introduce a new approach to few-scribble supervised segmentation based on mixing model parameters, termed ModelMix.
ModelMix constructs virtual models using convex combinations of convolutional parameters from separate encoders.
We then regularize the model set to minimize vicinal risk across tasks in both unsupervised and scribble-supervised ways.
arXiv Detail & Related papers (2024-06-19T05:58:11Z)
- Rethinking Few-shot 3D Point Cloud Semantic Segmentation [62.80639841429669]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS).
We focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution.
To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built.
arXiv Detail & Related papers (2024-03-01T15:14:47Z)
- Beyond Prototypes: Semantic Anchor Regularization for Better Representation Learning [82.29761875805369]
One of the ultimate goals of representation learning is to achieve compactness within a class and good separability between classes.
We propose a novel perspective: using pre-defined class anchors as feature centroids to unidirectionally guide feature learning.
The proposed Semantic Anchor Regularization (SAR) can be used in a plug-and-play manner in the existing models.
arXiv Detail & Related papers (2023-12-19T05:52:38Z)
- Mixed moving average field guided learning for spatio-temporal data [0.0]
We define a novel spatio-temporal embedding and a theory-guided machine learning approach to make ensemble forecasts.
We use Lipschitz predictors to determine fixed-time and any-time PAC bounds in the batch learning setting.
We then test the performance of our learning methodology by using linear predictors and data sets simulated from a spatio-temporal Ornstein-Uhlenbeck process.
arXiv Detail & Related papers (2023-01-02T16:11:05Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with distributionally robust optimization (DRO) using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results over several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- Decoupled Multi-task Learning with Cyclical Self-Regulation for Face Parsing [71.19528222206088]
We propose a novel Decoupled Multi-task Learning with Cyclical Self-Regulation (DML-CSR) approach for face parsing.
Specifically, DML-CSR designs a multi-task model which comprises face parsing, binary edge, and category edge detection.
Our method achieves the new state-of-the-art performance on the Helen, CelebA-HQ, and LapaMask datasets.
arXiv Detail & Related papers (2022-03-28T02:12:30Z)
- PyChEst: a Python package for the consistent retrospective estimation of distributional changes in piece-wise stationary time series [2.398608007786179]
We introduce PyChEst, a Python package which provides tools for the simultaneous estimation of multiple changepoints in the distribution of piece-wise stationary time series.
The nonparametric algorithms implemented are provably consistent in a general framework.
We illustrate this distinguishing feature by comparing the performance of the package against state-of-the-art models designed for a setting where the samples are independently and identically distributed.
arXiv Detail & Related papers (2021-12-20T14:39:39Z)
- Near-Optimal Reward-Free Exploration for Linear Mixture MDPs with Plug-in Solver [32.212146650873194]
We provide approaches to learn an RL model efficiently without the guidance of a reward signal.
In particular, we take a plug-in solver approach, where we focus on learning a model in the exploration phase.
We show that, by establishing a novel exploration algorithm, the plug-in approach learns a model by taking $\tilde{O}(d^2H^3/\epsilon^2)$ interactions with the environment.
arXiv Detail & Related papers (2021-10-07T07:59:50Z)
- The mbsts package: Multivariate Bayesian Structural Time Series Models in R [2.8935588665357086]
This paper demonstrates how to use the R package mbsts for MBSTS modeling.
The MBSTS model has wide applications and is ideal for feature selection, time series forecasting, nowcasting, inferring causal impact, and others.
arXiv Detail & Related papers (2021-06-26T15:28:38Z)