Personalized Federated Learning with Heat-Kernel Enhanced Tensorized Multi-View Clustering
- URL: http://arxiv.org/abs/2509.16101v1
- Date: Fri, 19 Sep 2025 15:45:02 GMT
- Title: Personalized Federated Learning with Heat-Kernel Enhanced Tensorized Multi-View Clustering
- Authors: Kristina P. Sinaga
- Abstract summary: We present a robust personalized learning framework for multi-view fuzzy c-means clustering. Our approach integrates heat-kernel coefficients adapted from quantum field theory with Tucker decomposition and canonical polyadic decomposition.
- Score: 2.538209532048867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a robust personalized federated learning framework that leverages heat-kernel enhanced tensorized multi-view fuzzy c-means clustering with advanced tensor decomposition techniques. Our approach integrates heat-kernel coefficients adapted from quantum field theory with Tucker decomposition and canonical polyadic decomposition (CANDECOMP/PARAFAC) to transform conventional distance metrics and efficiently represent high-dimensional multi-view structures. The framework employs matricization and vectorization techniques to facilitate the discovery of hidden structures and multilinear relationships via N-way generalized tensors. The proposed method introduces a dual-level optimization scheme: local heat-kernel enhanced fuzzy clustering with tensor decomposition operating on order-N input tensors, and federated aggregation of tensor factors with privacy-preserving personalization mechanisms. The local stage employs tensorized kernel Euclidean distance transformations and Tucker decomposition to discover client-specific patterns in multi-view tensor data, while the global aggregation process coordinates tensor factors (core tensors and factor matrices) across clients through differential privacy-preserving protocols. This tensorized approach enables efficient handling of high-dimensional multi-view data with significant communication savings through low-rank tensor approximations.
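The local stage described in the abstract combines a heat-kernel transformation of the Euclidean distance with fuzzy c-means memberships. The sketch below is illustrative only: the kernel form `2 * (1 - exp(-||x - v||^2 / (4t)))` and the function names are assumptions, not the paper's actual formulation, and it shows a single view without the tensor machinery.

```python
import numpy as np

def heat_kernel_distance(X, V, t=1.0):
    """Hypothetical heat-kernel transformation of squared Euclidean distance.

    d_hk(x, v) = 2 * (1 - exp(-||x - v||^2 / (4t))) bounds distances in [0, 2)
    and down-weights far-away points.  X: (n, d) data, V: (c, d) centers.
    """
    sq = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)  # (n, c) squared distances
    return 2.0 * (1.0 - np.exp(-sq / (4.0 * t)))

def fuzzy_memberships(D, m=2.0):
    """Standard fuzzy c-means membership update from a distance matrix D (n, c)."""
    eps = 1e-12  # guard against zero distances
    ratio = (D[:, :, None] + eps) / (D[:, None, :] + eps)  # (n, c, c)
    return 1.0 / (ratio ** (1.0 / (m - 1.0))).sum(-1)     # rows sum to 1

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
V = rng.normal(size=(3, 5))
U = fuzzy_memberships(heat_kernel_distance(X, V))
print(U.shape)  # (100, 3)
```

Because the heat-kernel distance is bounded, outlying points cannot dominate the membership update, which is one plausible source of the robustness the abstract claims.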
Related papers
- StoTAM: Stochastic Alternating Minimization for Tucker-Structured Tensor Sensing [7.549565266107219]
Low-rank tensor sensing is a fundamental problem with broad applications in signal processing and machine learning. Existing recovery methods either operate on the full tensor variable with expensive tensor projections, or adopt factorized formulations that still rely on full-gradient computations. In this work, we propose an alternating minimization algorithm that operates directly on the core tensor and factor matrices under a Tucker factorization.
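The Tucker structure these methods operate on (a small core tensor plus one factor matrix per mode) can be illustrated with a truncated higher-order SVD. This is a minimal numpy sketch of the factorization itself, not the stochastic alternating scheme the paper proposes; all function names are assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization: move `mode` to the front and flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_prod(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated higher-order SVD: a simple non-iterative way to obtain
    Tucker factor matrices and a core tensor."""
    factors = [np.linalg.svd(unfold(T, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = T
    for m, U in enumerate(factors):
        core = mode_prod(core, U.T, m)  # project each mode onto its factor
    return core, factors

# A tensor with exact multilinear rank (2, 2, 2) is recovered exactly.
rng = np.random.default_rng(1)
G = rng.normal(size=(2, 2, 2))
A, B, C = (rng.normal(size=(n, 2)) for n in (5, 6, 7))
T = mode_prod(mode_prod(mode_prod(G, A, 0), B, 1), C, 2)
core, facs = hosvd(T, (2, 2, 2))
T_hat = core
for m, U in enumerate(facs):
    T_hat = mode_prod(T_hat, U, m)
print(np.allclose(T_hat, T))  # True
```

Storing only the core and factors (here 2x2x2 + 5x2 + 6x2 + 7x2 numbers instead of 5x6x7) is also the source of the communication savings claimed in the main abstract above.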
arXiv Detail & Related papers (2026-01-20T02:18:20Z) - Score-Based Model for Low-Rank Tensor Recovery [49.158601255093416]
Low-rank tensor decompositions (TDs) provide an effective framework for multiway data analysis. Traditional TD methods rely on predefined structural assumptions, such as CP or Tucker decompositions. We propose a score-based model that eliminates the need for predefined structural or distributional assumptions.
arXiv Detail & Related papers (2025-06-27T15:05:37Z) - Low-Rank Implicit Neural Representation via Schatten-p Quasi-Norm and Jacobian Regularization [49.158601255093416]
We propose a CP-based low-rank tensor function parameterized by neural networks for implicit neural representation. For smoothness, we propose a regularization term based on the spectral norm of the Jacobian and Hutchinson's trace estimator. Our proposed smoothness regularization is SVD-free and avoids explicit chain-rule derivations.
arXiv Detail & Related papers (2025-06-27T11:23:10Z) - Tensor Convolutional Network for Higher-Order Interaction Prediction in Sparse Tensors [74.31355755781343]
We propose TCN, an accurate and compatible tensor convolutional network that integrates seamlessly with TF methods for predicting top-k interactions. We show that TCN integrated with a TF method outperforms competitors, including TF methods and a hyperedge prediction method.
arXiv Detail & Related papers (2025-03-14T18:22:20Z) - TensorGRaD: Tensor Gradient Robust Decomposition for Memory-Efficient Neural Operator Training [91.8932638236073]
We introduce TensorGRaD, a novel method that directly addresses the memory challenges associated with large structured weights. We show that sparseGRaD reduces total memory usage by over 50% while maintaining and sometimes even improving accuracy.
arXiv Detail & Related papers (2025-01-04T20:51:51Z) - Adaptively Topological Tensor Network for Multi-view Subspace Clustering [36.790637575875635]
Multi-view subspace clustering uses learned self-representation tensors to exploit low-rank information.
A pre-defined tensor decomposition may not fully exploit the low-rank information of a certain dataset.
We propose the adaptively topological tensor network (ATTN) by determining the edge ranks from the structural information of the self-representation tensor.
arXiv Detail & Related papers (2023-05-01T08:28:33Z) - Multi-View Clustering via Semi-non-negative Tensor Factorization [120.87318230985653]
We develop a novel multi-view clustering method based on semi-non-negative tensor factorization (Semi-NTF).
Our model directly considers the between-view relationship and exploits the between-view complementary information.
In addition, we provide an optimization algorithm for the proposed method and prove mathematically that the algorithm always converges to the stationary KKT point.
arXiv Detail & Related papers (2023-03-29T14:54:19Z) - Fast Learnings of Coupled Nonnegative Tensor Decomposition Using Optimal Gradient and Low-rank Approximation [7.265645216663691]
We introduce a novel coupled nonnegative CANDECOMP/PARAFAC decomposition algorithm optimized by the alternating proximal gradient method (CoNCPD-APG).
By integrating low-rank approximation with the proposed CoNCPD-APG method, the proposed algorithm can significantly decrease the computational burden without compromising decomposition quality.
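For contrast with the constrained, accelerated algorithm above, a plain unconstrained CP decomposition can be computed by alternating least squares. This sketch is not CoNCPD-APG (it has no nonnegativity, coupling, or acceleration); it only illustrates the CANDECOMP/PARAFAC model that both share.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n matricization of a tensor."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product: (I x R), (J x R) -> (I*J x R)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, iters=500, seed=0):
    """Plain CP-ALS for a 3-way tensor: solve for each factor in turn
    while holding the other two fixed."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(n, rank)) for n in T.shape)
    for _ in range(iters):
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C)).T
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C)).T
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Recover an exactly rank-2 tensor from random initialization.
rng = np.random.default_rng(2)
A0, B0, C0 = (rng.normal(size=(n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=2)
T_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
rel_err = np.linalg.norm(T_hat - T) / np.linalg.norm(T)
```

The pseudoinverse of the Khatri-Rao product is the per-factor least-squares solve; CoNCPD-APG replaces these exact solves with accelerated proximal gradient steps so that the nonnegativity and coupling constraints can be enforced.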
arXiv Detail & Related papers (2023-02-10T08:49:36Z) - Uniform tensor clustering by jointly exploring sample affinities of various orders [37.11798745294855]
We propose a unified tensor clustering method (UTC) that characterizes sample proximity using multiple samples' affinity.
Experiments affirm that UTC enhances clustering by exploiting affinities of different orders when processing high-dimensional data.
arXiv Detail & Related papers (2023-02-03T06:43:08Z) - Many-body Approximation for Non-negative Tensors [17.336552862741133]
We present an alternative approach to decompose non-negative tensors, called many-body approximation.
Traditional decomposition methods assume low-rankness in the representation, resulting in difficulties in global optimization and target rank selection.
arXiv Detail & Related papers (2022-09-30T09:45:43Z) - Multi-View Spectral Clustering Tailored Tensor Low-Rank Representation [105.33409035876691]
This paper explores the problem of multi-view spectral clustering (MVSC) based on tensor low-rank modeling.
We design a novel structured tensor low-rank norm tailored to MVSC.
We show that the proposed method outperforms state-of-the-art methods to a significant extent.
arXiv Detail & Related papers (2020-04-30T11:52:12Z) - Efficient Structure-preserving Support Tensor Train Machine [0.0]
We develop the Tensor Train Multi-way Multi-level Kernel (TT-MMK), which combines the simplicity of the Canonical Polyadic decomposition, the classification power of the Dual Structure-preserving Support Vector Machine, and the reliability of the Tensor Train approximation.
We show by experiments that the TT-MMK method is usually more reliable, less sensitive to tuning parameters, and gives higher prediction accuracy in the SVM classification when benchmarked against other state-of-the-art techniques.
arXiv Detail & Related papers (2020-02-12T16:35:10Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks, but enumerating interactions explicitly scales poorly with their order.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.