Jacobian-Scaled K-means Clustering for Physics-Informed Segmentation of Reacting Flows
- URL: http://arxiv.org/abs/2305.01539v2
- Date: Sun, 23 Jun 2024 22:57:12 GMT
- Title: Jacobian-Scaled K-means Clustering for Physics-Informed Segmentation of Reacting Flows
- Authors: Shivam Barwey, Venkat Raman
- Abstract summary: JSK-means clustering is a physics-informed clustering strategy centered on the K-means framework.
The algorithm is demonstrated on a complex reacting flow simulation dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work introduces Jacobian-scaled K-means (JSK-means) clustering, which is a physics-informed clustering strategy centered on the K-means framework. The method allows for the injection of underlying physical knowledge into the clustering procedure through a distance function modification: instead of leveraging conventional Euclidean distance vectors, the JSK-means procedure operates on distance vectors scaled by matrices obtained from dynamical system Jacobians evaluated at the cluster centroids. The goal of this work is to show how the JSK-means algorithm -- without modifying the input dataset -- produces clusters that capture regions of dynamical similarity, in that the clusters are redistributed towards high-sensitivity regions in phase space and are described by similarity in the source terms of samples instead of the samples themselves. The algorithm is demonstrated on a complex reacting flow simulation dataset (a channel detonation configuration), where the dynamics in the thermochemical composition space are known through the highly nonlinear and stiff Arrhenius-based chemical source terms. Interpretations of cluster partitions in both physical space and composition space reveal how JSK-means shifts clusters produced by standard K-means towards regions of high chemical sensitivity (e.g., towards regions of peak heat release rate near the detonation reaction zone). The findings presented here illustrate the benefits of utilizing Jacobian-scaled distances in clustering techniques, and the JSK-means method in particular displays promising potential for improving former partition-based modeling strategies in reacting flow (and other multi-physics) applications.
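The abstract describes the core modification concretely: replace the Euclidean distance in K-means with the norm of the difference vector scaled by the dynamical-system Jacobian evaluated at each centroid. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the `jacobian` callable, the initialization scheme, and the toy linear system used in the example are illustrative assumptions.

```python
import numpy as np

def jsk_means(X, jacobian, k, n_iter=50, seed=0):
    """Sketch of Jacobian-scaled K-means (JSK-means).

    Distances use Jacobian-scaled difference vectors ||J(c_j) (x - c_j)||
    instead of plain Euclidean ||x - c_j||. `jacobian(c)` must return the
    dynamical-system Jacobian (a d x d matrix) evaluated at point c.
    """
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Scale each difference vector by the Jacobian at the centroid.
        dists = np.empty((len(X), k))
        for j, c in enumerate(centroids):
            J = jacobian(c)
            dists[:, j] = np.linalg.norm((X - c) @ J.T, axis=1)
        labels = dists.argmin(axis=1)
        # The centroid update is the ordinary sample mean, as in K-means.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Toy example: a linear system dx/dt = A x, whose Jacobian is the constant A.
A = np.array([[0.0, 1.0], [-1.0, -0.1]])
X = np.random.default_rng(1).normal(size=(200, 2))
labels, centroids = jsk_means(X, lambda c: A, k=3)
```

For a reacting-flow application, `jacobian` would instead evaluate the (stiff, nonlinear) chemical source-term Jacobian at the centroid's thermochemical state, which is what redistributes clusters toward high-sensitivity regions.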
Related papers
- Constrained Centroid Clustering: A Novel Approach for Compact and Structured Partitioning [0.0]
Constrained Centroid Clustering (CCC) is a method that extends classical centroid-based clustering.
CCC achieves more compact clusters by reducing radial spread while preserving angular structure.
The proposed approach is suitable for applications requiring structured clustering with spread control, including sensor networks, collaborative robotics, and interpretable pattern analysis.
arXiv Detail & Related papers (2025-08-18T09:30:54Z)
- Scalable Context-Preserving Model-Aware Deep Clustering for Hyperspectral Images [51.95768218975529]
Subspace clustering has become widely adopted for the unsupervised analysis of hyperspectral images (HSIs).
Recent model-aware deep subspace clustering methods often use a two-stage framework, involving the calculation of a self-representation matrix with complexity of O(n^2), followed by spectral clustering.
We propose a scalable, context-preserving deep clustering method based on basis representation, which jointly captures local and non-local structures for efficient HSI clustering.
arXiv Detail & Related papers (2025-06-12T16:43:09Z)
- Hyperbolic Fuzzy $C$-Means with Adaptive Weight-based Filtering for Clustering in Non-Euclidean Spaces [14.904264782690639]
The Fuzzy $C$-Means (FCM) algorithm exhibits notable limitations in non-Euclidean spaces.
HypeFCM integrates the principles of fuzzy clustering with hyperbolic geometry.
HypeFCM significantly outperforms conventional fuzzy clustering methods in non-Euclidean settings.
arXiv Detail & Related papers (2025-05-07T11:32:53Z)
- Comparison between neural network clustering, hierarchical clustering and k-means clustering: Applications using fluidic lenses [0.0]
A comparison between neural network clustering (NNC), hierarchical clustering (HC) and K-means clustering (KMC) is performed.
For NNC, a self-organizing map (SOM) training was applied to a collection of wavefront sensor reconstructions.
For HC, the data was partitioned using a combined dissimilarity-linkage matrix.
arXiv Detail & Related papers (2025-03-28T14:01:12Z)
- Self-Supervised Graph Embedding Clustering [70.36328717683297]
The K-means one-step dimensionality-reduction clustering method has made some progress in addressing the curse of dimensionality in clustering tasks.
We propose a unified framework that integrates manifold learning with K-means, resulting in the self-supervised graph embedding framework.
arXiv Detail & Related papers (2024-09-24T08:59:51Z)
- Rethinking k-means from manifold learning perspective [122.38667613245151]
We present a new clustering algorithm which directly detects clusters of data without mean estimation.
Specifically, we construct a distance matrix between data points using a Butterworth filter.
To well exploit the complementary information embedded in different views, we leverage the tensor Schatten p-norm regularization.
arXiv Detail & Related papers (2023-05-12T03:01:41Z)
- Enhancing cluster analysis via topological manifold learning [0.3823356975862006]
We show that inferring the topological structure of a dataset before clustering can considerably enhance cluster detection.
We combine the manifold learning method UMAP, for inferring the topological structure, with the density-based clustering method DBSCAN.
arXiv Detail & Related papers (2022-07-01T15:53:39Z)
- Convergence and Recovery Guarantees of the K-Subspaces Method for Subspace Clustering [33.1464583756714]
The K-subspaces (KSS) method is a generalization of the K-means method for subspace clustering.
We show that if the initial assignment of the KSS method lies within a neighborhood of a true clustering, it converges at a superlinear rate.
arXiv Detail & Related papers (2022-06-11T15:47:21Z)
- Gradient Based Clustering [72.15857783681658]
We propose a general approach for distance-based clustering, using the gradient of the cost function that measures clustering quality.
The approach is an iterative two step procedure (alternating between cluster assignment and cluster center updates) and is applicable to a wide range of functions.
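The alternating two-step procedure described here can be sketched as follows. This is a generic illustration, not that paper's method: the squared-Euclidean cost, the learning rate, and the per-cluster gradient normalization are all assumptions made for the example.

```python
import numpy as np

def gradient_clustering(X, k, lr=0.1, n_iter=100, seed=0):
    """Sketch of gradient-based clustering: alternate hard assignment
    with a gradient step on the centers for a squared-distance cost."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        # Step 1: assign each point to its nearest center.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        # Step 2: gradient step on the cost sum_i ||x_i - c_{label(i)}||^2,
        # normalized by cluster size so the step scale is comparable.
        for j in range(k):
            mask = labels == j
            if mask.any():
                grad = -2.0 * (X[mask] - centers[j]).sum(axis=0)
                centers[j] -= lr * grad / mask.sum()
    return labels, centers

rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(-3, 0.5, (50, 2)), rng.normal(3, 0.5, (50, 2))])
labels, centers = gradient_clustering(X, k=2)
```

With this cost, a full gradient step recovers the K-means mean update; a partial step (lr < 0.5 here) gives the damped iteration that makes the scheme applicable to other differentiable cost functions.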
arXiv Detail & Related papers (2022-02-01T19:31:15Z)
- Density-Based Clustering with Kernel Diffusion [59.4179549482505]
A naive density corresponding to the indicator function of a unit $d$-dimensional Euclidean ball is commonly used in density-based clustering algorithms.
We propose a new kernel diffusion density function, which is adaptive to data of varying local distributional characteristics and smoothness.
arXiv Detail & Related papers (2021-10-11T09:00:33Z)
- Spatially Coherent Clustering Based on Orthogonal Nonnegative Matrix Factorization [0.0]
In this work, we introduce clustering models based on a total variation (TV) regularization procedure on the cluster membership matrix.
We provide a numerical evaluation of all proposed methods on a hyperspectral dataset obtained from a matrix-assisted laser desorption/ionisation imaging measurement.
arXiv Detail & Related papers (2021-04-25T23:40:41Z)
- Multiscale Clustering of Hyperspectral Images Through Spectral-Spatial Diffusion Geometry [9.619814126465206]
Clustering algorithms partition a dataset into groups of similar points.
The primary contribution of this article is the Multiscale Spatially-Regularized Diffusion Learning (M-SRDL) clustering algorithm.
We show that incorporating spatial regularization into a multiscale clustering framework yields smoother and more coherent clusters when applied to HSI data.
arXiv Detail & Related papers (2021-03-29T17:24:28Z)
- Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z)
- Clustering Binary Data by Application of Combinatorial Optimization Heuristics [52.77024349608834]
We study clustering methods for binary data, first defining aggregation criteria that measure the compactness of clusters.
Five new and original methods are introduced, using neighborhoods and population behavior optimization metaheuristics.
Using a set of 16 data tables generated by a quasi-Monte Carlo experiment, one of the aggregation criteria, under L1 dissimilarity, is compared against hierarchical clustering and a variant of k-means, partitioning around medoids (PAM).
arXiv Detail & Related papers (2020-01-06T23:33:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.