A local approach to parameter space reduction for regression and classification tasks
- URL: http://arxiv.org/abs/2107.10867v3
- Date: Tue, 12 Mar 2024 15:07:08 GMT
- Title: A local approach to parameter space reduction for regression and classification tasks
- Authors: Francesco Romor and Marco Tezzele and Gianluigi Rozza
- Abstract summary: We propose a new method called local active subspaces (LAS), which explores the synergies of active subspaces with supervised clustering techniques.
LAS is particularly useful for the community working on surrogate modelling.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameter space reduction has proven to be a crucial tool to speed up the execution of many numerical tasks such as optimization, inverse problems, sensitivity analysis, and surrogate models' design, especially in the presence of high-dimensional parametrized systems. In this work we propose a new method
called local active subspaces (LAS), which explores the synergies of active
subspaces with supervised clustering techniques in order to carry out a more
efficient dimension reduction in the parameter space. The clustering is
performed without losing the input-output relations by introducing a distance
metric induced by the global active subspace. We present two possible
clustering algorithms: K-medoids and a hierarchical top-down approach, which is
able to impose a variety of subdivision criteria specifically tailored for
parameter space reduction tasks. This method is particularly useful for the
community working on surrogate modelling. Frequently, the parameter space
presents subdomains where the objective function of interest varies less on
average along different directions. The objective function could therefore be approximated more accurately if restricted to those subdomains and studied separately. We tested the new method on several numerical experiments of increasing complexity; we show how to deal with vectorial outputs and how to classify the different
regions with respect to the local active subspace dimension. Employing this
classification technique as a preprocessing step in the parameter space, or
output space in case of vectorial outputs, brings remarkable results for the
purpose of surrogate modelling.
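To make the pipeline in the abstract concrete, here is a minimal sketch of the two ingredients it names: a global active subspace estimated from gradient samples, and K-medoids clustering under the distance metric that subspace induces. All names, the toy objective function, and the plain PAM-style update are illustrative assumptions, not the authors' implementation; a real application would use sampled gradients of the actual model.

```python
import numpy as np

# Hypothetical toy setup (not the paper's code): f(x) = sin(a . x) in 5-D,
# whose gradient always points along `a`, so the global active subspace
# is the one-dimensional span of a.
rng = np.random.default_rng(0)
dim, n = 5, 200
a = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
X = rng.uniform(-1.0, 1.0, size=(n, dim))
grads = np.cos(X @ a)[:, None] * a          # rows are grad f(x_i)

# Monte Carlo estimate of the gradient covariance C = E[grad f grad f^T].
C = grads.T @ grads / n

# Global active subspace: eigenvectors of C with the largest eigenvalues.
eigvals, eigvecs = np.linalg.eigh(C)        # eigenvalues in ascending order
W1 = eigvecs[:, -1:]                        # keep the single active direction

# Distance induced by the global active subspace: d(x, y) = ||W1^T (x - y)||.
Z = X @ W1                                  # samples projected onto the subspace
D = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)

def k_medoids(D, k, n_iter=50, seed=0):
    """Plain K-medoids on a precomputed distance matrix (naive PAM-style updates)."""
    rng = np.random.default_rng(seed)
    medoids = rng.choice(D.shape[0], size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:  # new medoid = member minimising intra-cluster cost
                new_medoids[j] = members[np.argmin(D[np.ix_(members, members)].sum(axis=1))]
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    labels = np.argmin(D[:, medoids], axis=1)
    return labels, medoids

labels, medoids = k_medoids(D, k=3)
```

Because the pairwise distances are computed on the projected samples, the clusters partition the parameter space along the directions where the output actually varies, which is what lets each subdomain then be fitted with its own local surrogate.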
Related papers
- Distributional Reduction: Unifying Dimensionality Reduction and Clustering with Gromov-Wasserstein [56.62376364594194]
Unsupervised learning aims to capture the underlying structure of potentially large and high-dimensional datasets.
In this work, we revisit these approaches under the lens of optimal transport and exhibit relationships with the Gromov-Wasserstein problem.
This unveils a new general framework, called distributional reduction, that recovers DR and clustering as special cases and allows addressing them jointly within a single optimization problem.
arXiv Detail & Related papers (2024-02-03T19:00:19Z)
- Automatic Parameter Selection for Non-Redundant Clustering [11.68971888446462]
High-dimensional datasets often contain multiple meaningful clusterings in different subspaces.
We propose a framework that automatically detects the number of subspaces and clusters per subspace.
We also introduce an encoding strategy that allows us to detect outliers in each subspace.
arXiv Detail & Related papers (2023-12-19T08:53:00Z)
- Parameter Efficient Fine-tuning via Cross Block Orchestration for Segment Anything Model [81.55141188169621]
We equip PEFT with a cross-block orchestration mechanism to enable the adaptation of the Segment Anything Model (SAM) to various downstream scenarios.
We propose an intra-block enhancement module, which introduces a linear projection head whose weights are generated from a hyper-complex layer.
Our proposed approach consistently improves the segmentation performance significantly on novel scenarios with only around 1K additional parameters.
arXiv Detail & Related papers (2023-11-28T11:23:34Z)
- A Metaheuristic for Amortized Search in High-Dimensional Parameter Spaces [0.0]
We propose a new metaheuristic that drives dimensionality reductions from feature-informed transformations.
DR-FFIT implements an efficient sampling strategy that facilitates a gradient-free parameter search in high-dimensional spaces.
Our test data show that DR-FFIT boosts the performances of random-search and simulated-annealing against well-established metaheuristics.
arXiv Detail & Related papers (2023-09-28T14:25:14Z)
- Efficient Large-scale Nonstationary Spatial Covariance Function Estimation Using Convolutional Neural Networks [3.5455896230714194]
We use ConvNets to derive subregions from the nonstationary data.
We employ a selection mechanism to identify subregions that exhibit similar behavior to stationary fields.
We assess the performance of the proposed method with synthetic and real datasets at a large scale.
arXiv Detail & Related papers (2023-06-20T12:17:46Z)
- Extending regionalization algorithms to explore spatial process heterogeneity [5.158953116443068]
We propose two new algorithms for spatial regime delineation, two-stage K-Models and Regional-K-Models.
Results indicate that all three algorithms achieve superior or comparable performance to existing approaches.
arXiv Detail & Related papers (2022-06-19T15:09:23Z)
- Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is carried out to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z)
- ParK: Sound and Efficient Kernel Ridge Regression by Feature Space Partitions [34.576469570537995]
We introduce ParK, a new large-scale solver for kernel ridge regression.
Our approach combines partitioning with random projections and iterative optimization to reduce space and time complexity.
arXiv Detail & Related papers (2021-06-23T08:24:36Z)
- Fine-Grained Dynamic Head for Object Detection [68.70628757217939]
We propose a fine-grained dynamic head to conditionally select a pixel-level combination of FPN features from different scales for each instance.
Experiments demonstrate the effectiveness and efficiency of the proposed method on several state-of-the-art detection benchmarks.
arXiv Detail & Related papers (2020-12-07T08:16:32Z)
- Learnable Subspace Clustering [76.2352740039615]
We develop a learnable subspace clustering paradigm to efficiently solve the large-scale subspace clustering problem.
The key idea is to learn a parametric function to partition the high-dimensional subspaces into their underlying low-dimensional subspaces.
To the best of our knowledge, this paper is the first work to efficiently cluster millions of data points among the subspace clustering methods.
arXiv Detail & Related papers (2020-04-09T12:53:28Z)
- Supervised Hyperalignment for multi-subject fMRI data alignment [81.8694682249097]
This paper proposes a Supervised Hyperalignment (SHA) method to ensure better functional alignment for MVP analysis.
Experiments on multi-subject datasets demonstrate that SHA method achieves up to 19% better performance for multi-class problems.
arXiv Detail & Related papers (2020-01-09T09:17:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.