Shape-Informed Clustering of Multi-Dimensional Functional Data via Deep Functional Autoencoders
- URL: http://arxiv.org/abs/2509.22969v1
- Date: Fri, 26 Sep 2025 22:10:23 GMT
- Title: Shape-Informed Clustering of Multi-Dimensional Functional Data via Deep Functional Autoencoders
- Authors: Samuel V. Singh, Shirley Coyle, Mimi Zhang
- Abstract summary: FAEclust is a novel functional autoencoder framework for cluster analysis of multi-dimensional functional data. We introduce a universal-approximator encoder that captures complex nonlinear interdependencies among component functions, and a universal-approximator decoder capable of accurately reconstructing both Euclidean and manifold-valued functional data.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce FAEclust, a novel functional autoencoder framework for cluster analysis of multi-dimensional functional data, data that are random realizations of vector-valued random functions. Our framework features a universal-approximator encoder that captures complex nonlinear interdependencies among component functions, and a universal-approximator decoder capable of accurately reconstructing both Euclidean and manifold-valued functional data. Stability and robustness are enhanced through innovative regularization strategies applied to functional weights and biases. Additionally, we incorporate a clustering loss into the network's training objective, promoting the learning of latent representations that are conducive to effective clustering. A key innovation is our shape-informed clustering objective, ensuring that the clustering results are resistant to phase variations in the functions. We establish the universal approximation property of our non-linear decoder and validate the effectiveness of our model through extensive experiments.
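The training objective described in the abstract, reconstruction loss plus a clustering loss on the latent representations, can be illustrated with a deliberately simplified sketch: a linear autoencoder on discretized curves with a k-means-style penalty. The linear encoder/decoder and all names below are illustrative assumptions, not the paper's universal-approximator architecture, and the shape-informed (phase-invariant) component is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for discretized functional data: n curves sampled at T points.
X = rng.standard_normal((20, 50))

# Hypothetical linear encoder/decoder weights (the paper's encoder and decoder
# are universal approximators acting on functions; this is a simplification).
W_enc = rng.standard_normal((50, 4)) * 0.1
W_dec = rng.standard_normal((4, 50)) * 0.1
centroids = rng.standard_normal((3, 4))  # cluster centers in the latent space

def joint_loss(X, W_enc, W_dec, centroids, lam=0.5):
    Z = X @ W_enc                      # latent representations
    recon = Z @ W_dec                  # reconstructed curves
    recon_loss = np.mean((X - recon) ** 2)
    # k-means-style clustering penalty: squared distance to nearest centroid
    d2 = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    cluster_loss = d2.min(axis=1).mean()
    return recon_loss + lam * cluster_loss

print(joint_loss(X, W_enc, W_dec, centroids))
```

In the full model, both terms would be minimized jointly over the network weights, so the latent codes are shaped to be reconstructable and cluster-friendly at once.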
Related papers
- Functional Random Forest with Adaptive Cost-Sensitive Splitting for Imbalanced Functional Data Classification [0.0]
This paper introduces Functional Random Forest with Adaptive Cost-Sensitive Splitting (FRF-ACS), a novel ensemble framework for imbalanced functional data classification. To address imbalance, we incorporate a dynamic cost-sensitive splitting criterion that adjusts class weights locally at each node. Experiments on synthetic and real-world datasets demonstrate that FRF-ACS significantly improves minority-class recall and overall predictive performance.
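The idea of adjusting class weights locally at each node can be sketched with a weighted Gini impurity; the inverse-frequency weighting below is a hypothetical stand-in for the paper's adaptive criterion, not its actual formula.

```python
def weighted_gini(labels, class_weights):
    # Cost-sensitive Gini impurity: each class's share is scaled by its cost.
    total = sum(class_weights[y] for y in labels)
    if total == 0:
        return 0.0
    gini = 1.0
    for c in set(labels):
        p = sum(class_weights[y] for y in labels if y == c) / total
        gini -= p * p
    return gini

def node_class_weights(labels):
    # Hypothetical local reweighting: inverse class frequency at this node,
    # mirroring the idea of adapting weights at each split.
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    n = len(labels)
    return {c: n / (len(counts) * k) for c, k in counts.items()}

labels = [0] * 9 + [1] * 1          # a heavily imbalanced node
w = node_class_weights(labels)
print(weighted_gini(labels, w))      # reweighting restores impurity to 0.5
```

With uniform weights the same node scores 0.18, so the weighted criterion makes splits that isolate the minority class far more attractive.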
arXiv Detail & Related papers (2025-12-02T04:57:51Z) - Improving Deepfake Detection with Reinforcement Learning-Based Adaptive Data Augmentation [60.04281435591454]
CRDA (Curriculum Reinforcement-Learning Data Augmentation) is a novel framework guiding detectors to progressively master multi-domain forgery features. Central to our approach is the integration of reinforcement learning and causal inference. Our method significantly improves detector generalizability, outperforming SOTA methods across multiple cross-domain datasets.
arXiv Detail & Related papers (2025-11-10T12:45:52Z) - FAME: Adaptive Functional Attention with Expert Routing for Function-on-Function Regression [15.00767095565706]
Functional Attention with a Mixture-of-Experts (FAME) is an end-to-end, fully data-driven framework for function-on-function regression. FAME forms continuous attention by coupling a neural controlled differential equation with MoE-driven vector fields to capture intra-functional continuity. Experiments on synthetic and real-world functional-regression benchmarks show that FAME achieves state-of-the-art accuracy and strong robustness to arbitrarily sampled discrete observations.
arXiv Detail & Related papers (2025-10-01T07:53:55Z) - Self-supervised Latent Space Optimization with Nebula Variational Coding [87.20343320266215]
This paper proposes a variational inference model which leads to a clustered embedding. We introduce additional variables in the latent space, called nebula anchors, that guide the latent variables to form clusters during training. Since each latent feature can be labeled with the closest anchor, we also propose to apply metric learning in a self-supervised way to make the separation between clusters more explicit.
arXiv Detail & Related papers (2025-06-02T08:13:32Z) - Semi-supervised Semantic Segmentation with Multi-Constraint Consistency Learning [81.02648336552421]
We propose a Multi-Constraint Consistency Learning (MCCL) approach to facilitate the staged enhancement of the encoder and decoder. Self-adaptive feature masking and noise injection are designed in an instance-specific manner to perturb the features for robust learning of the decoder. Experimental results on the Pascal VOC2012 and Cityscapes datasets demonstrate that our proposed MCCL achieves new state-of-the-art performance.
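Feature masking combined with noise injection can be sketched as follows; the masking ratio, noise scale, and per-instance scheme here are illustrative assumptions rather than the paper's self-adaptive design.

```python
import numpy as np

rng = np.random.default_rng(2)
features = rng.standard_normal((4, 8))  # toy encoder features: 4 samples, 8 channels

def perturb(f, mask_ratio=0.25, noise_std=0.1, rng=rng):
    # Instance-specific masking: zero out a random subset of channels per
    # sample, then inject Gaussian noise (a simplified reading of the
    # self-adaptive masking + noise-injection idea; the exact scheme differs).
    out = f.copy()
    for i in range(out.shape[0]):
        k = int(out.shape[1] * mask_ratio)
        idx = rng.choice(out.shape[1], size=k, replace=False)
        out[i, idx] = 0.0
    return out + rng.standard_normal(out.shape) * noise_std

print(perturb(features).shape)
```

A consistency loss would then compare decoder outputs on the clean and perturbed features, encouraging the decoder to be robust to both perturbations.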
arXiv Detail & Related papers (2025-03-23T03:21:33Z) - A Functional Extension of Semi-Structured Networks [2.482050942288848]
Semi-structured networks (SSNs) merge structures familiar from additive models with deep neural networks.
Inspired by large-scale datasets, this paper explores extending SSNs to functional data.
We propose a functional SSN method that retains the advantageous properties of classical functional regression approaches while also improving scalability.
arXiv Detail & Related papers (2024-10-07T18:50:18Z) - Nonlinear functional regression by functional deep neural network with kernel embedding [18.927592350748682]
We introduce a functional deep neural network with an adaptive and discretization-invariant dimension reduction method. Explicit rates of approximating nonlinear smooth functionals across various input function spaces are derived. We conduct numerical experiments on both simulated and real datasets to demonstrate the effectiveness and benefits of our functional net.
arXiv Detail & Related papers (2024-01-05T16:43:39Z) - Offline Reinforcement Learning with Differentiable Function Approximation is Provably Efficient [65.08966446962845]
Offline reinforcement learning, which aims at optimizing decision-making strategies with historical data, has been extensively applied in real-life applications.
We take a step by considering offline reinforcement learning with differentiable function class approximation (DFA).
Most importantly, we show offline differentiable function approximation is provably efficient by analyzing the pessimistic fitted Q-learning algorithm.
arXiv Detail & Related papers (2022-10-03T07:59:42Z) - Deep Attention-guided Graph Clustering with Dual Self-supervision [49.040136530379094]
We propose a novel method, namely deep attention-guided graph clustering with dual self-supervision (DAGC).
We develop a dual self-supervision solution consisting of a soft self-supervision strategy with a triplet Kullback-Leibler divergence loss and a hard self-supervision strategy with a pseudo supervision loss.
Our method consistently outperforms state-of-the-art methods on six benchmark datasets.
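The soft self-supervision strategy with a Kullback-Leibler divergence loss resembles DEC-style clustering; a minimal sketch under that assumption (Student-t soft assignments sharpened into a target distribution) is shown below. The variable names and the specific kernel are assumptions, not DAGC's exact formulation.

```python
import numpy as np

def soft_assign(Z, centroids, alpha=1.0):
    # Student-t kernel soft assignment of latent points to cluster centers.
    d2 = ((Z[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    q = (1.0 + d2 / alpha) ** (-(alpha + 1) / 2)
    return q / q.sum(axis=1, keepdims=True)

def target_distribution(q):
    # Sharpened targets used as the "soft self-supervision" signal.
    p = q ** 2 / q.sum(axis=0)
    return p / p.sum(axis=1, keepdims=True)

def kl_loss(p, q):
    # Sum of per-sample KL divergences KL(p_i || q_i).
    return float((p * np.log(p / q)).sum())

rng = np.random.default_rng(1)
Z = rng.standard_normal((10, 2))     # toy latent embeddings
C = rng.standard_normal((3, 2))      # toy cluster centers
q = soft_assign(Z, C)
print(kl_loss(target_distribution(q), q))
```

Minimizing this KL term pushes each sample's soft assignment toward its sharpened target, gradually firming up the cluster structure; a hard pseudo-label loss would then be the cross-entropy against the argmax of q.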
arXiv Detail & Related papers (2021-11-10T06:53:03Z) - Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z) - Invariant Feature Coding using Tensor Product Representation [75.62232699377877]
We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers group actions is proposed for principal component analysis and k-means clustering.
arXiv Detail & Related papers (2019-06-05T07:15:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.