Sliding Window Informative Canonical Correlation Analysis
- URL: http://arxiv.org/abs/2507.17921v1
- Date: Wed, 23 Jul 2025 20:35:15 GMT
- Title: Sliding Window Informative Canonical Correlation Analysis
- Authors: Arvind Prasadan,
- Abstract summary: Canonical correlation analysis (CCA) is a technique for finding correlated sets of features between two datasets. We propose a novel extension of CCA to the online, streaming data setting: Sliding Window Informative Canonical Correlation Analysis (SWICCA).
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Canonical correlation analysis (CCA) is a technique for finding correlated sets of features between two datasets. In this paper, we propose a novel extension of CCA to the online, streaming data setting: Sliding Window Informative Canonical Correlation Analysis (SWICCA). Our method uses a streaming principal component analysis (PCA) algorithm as a backend and uses these outputs combined with a small sliding window of samples to estimate the CCA components in real time. We motivate and describe our algorithm, provide numerical simulations to characterize its performance, and provide a theoretical performance guarantee. The SWICCA method is applicable and scalable to extremely high dimensions, and we provide a real-data example that demonstrates this capability.
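For intuition, here is a minimal, hypothetical sketch of the pipeline the abstract describes: a streaming PCA backend compresses each incoming stream, and standard CCA is refit on a small sliding window of the reduced samples. The class name `SlidingWindowCCA`, the use of scikit-learn's `IncrementalPCA` and `CCA`, and all parameter values are illustrative assumptions, not the paper's actual SWICCA algorithm.

```python
# Hypothetical sketch of a SWICCA-style pipeline (illustrative, not the authors' code):
# a streaming PCA backend compresses each high-dimensional stream, and CCA is
# re-estimated on a small sliding window of the PCA-reduced samples.
from collections import deque

import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.decomposition import IncrementalPCA


class SlidingWindowCCA:
    def __init__(self, n_components=2, n_pcs=10, window=50, batch=20):
        self.pca_x = IncrementalPCA(n_components=n_pcs)  # streaming PCA backend for X
        self.pca_y = IncrementalPCA(n_components=n_pcs)  # streaming PCA backend for Y
        self.win_x = deque(maxlen=window)  # sliding window of reduced X samples
        self.win_y = deque(maxlen=window)
        self.buf_x, self.buf_y = [], []    # mini-batches for IncrementalPCA updates
        self.batch = batch
        self.cca = CCA(n_components=n_components)
        self.pca_ready = False

    def update(self, x, y):
        """Ingest one (x, y) sample pair; return CCA weights once the window is full."""
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        self.buf_x.append(x)
        self.buf_y.append(y)
        if len(self.buf_x) >= self.batch:
            # Update the streaming PCA models with the latest mini-batch.
            self.pca_x.partial_fit(np.asarray(self.buf_x))
            self.pca_y.partial_fit(np.asarray(self.buf_y))
            self.buf_x.clear()
            self.buf_y.clear()
            self.pca_ready = True
        if self.pca_ready:
            # Project the new sample into the low-dimensional PCA space and
            # keep only the most recent `window` projections.
            self.win_x.append(self.pca_x.transform(x[None, :])[0])
            self.win_y.append(self.pca_y.transform(y[None, :])[0])
        if len(self.win_x) == self.win_x.maxlen:
            # Re-estimate CCA on the short window of reduced samples.
            self.cca.fit(np.asarray(self.win_x), np.asarray(self.win_y))
            # Map the weights from PCA coordinates back to the original features.
            return (self.pca_x.components_.T @ self.cca.x_weights_,
                    self.pca_y.components_.T @ self.cca.y_weights_)
        return None
```

In this sketch, feeding sample pairs through `update` keeps the PCA backends current while CCA is only ever solved on the short window of reduced vectors, which is what keeps the per-step cost small even when the ambient dimension is very large.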
Related papers
- InfoDPCCA: Information-Theoretic Dynamic Probabilistic Canonical Correlation Analysis [20.656410520966986]
InfoDPCCA is a framework designed to model two interdependent sequences of observations. We introduce a two-step training scheme to bridge the gap between information-theoretic representation learning and generative modeling. We demonstrate that InfoDPCCA excels as a tool for representation learning.
arXiv Detail & Related papers (2025-06-10T15:13:48Z) - DA-Flow: Dual Attention Normalizing Flow for Skeleton-based Video Anomaly Detection [52.74152717667157]
We propose a lightweight module called Dual Attention Module (DAM) for capturing cross-dimension interaction relationships in spatio-temporal skeletal data.
It employs a frame attention mechanism to identify the most significant frames and a skeleton attention mechanism to capture broader relationships across fixed partitions with minimal parameters and FLOPs.
arXiv Detail & Related papers (2024-06-05T06:18:03Z) - Unifying Feature and Cost Aggregation with Transformers for Semantic and Visual Correspondence [51.54175067684008]
This paper introduces a Transformer-based integrative feature and cost aggregation network designed for dense matching tasks.
We first show that feature aggregation and cost aggregation exhibit distinct characteristics and reveal the potential for substantial benefits stemming from the judicious use of both aggregation processes.
Our framework is evaluated on standard benchmarks for semantic matching, and also applied to geometric matching, where we show that our approach achieves significant improvements compared to existing methods.
arXiv Detail & Related papers (2024-03-17T07:02:55Z) - KPIs-Based Clustering and Visualization of HPC jobs: a Feature Reduction Approach [0.0]
HPC systems need to be constantly monitored to ensure their stability.
The monitoring systems collect a tremendous amount of data about different parameters or Key Performance Indicators (KPIs), such as resource usage, IO waiting time, etc.
A proper analysis of this data, usually stored as time series, can provide insight in choosing the right management strategies as well as the early detection of issues.
arXiv Detail & Related papers (2023-12-11T17:13:54Z) - Efficient Semantic Matching with Hypercolumn Correlation [58.92933923647451]
HCCNet is an efficient yet effective semantic matching method.
It exploits the full potential of multi-scale correlation maps.
It eschews the reliance on expensive match-wise relationship mining on the 4D correlation map.
arXiv Detail & Related papers (2023-11-07T20:40:07Z) - A Bayesian Methodology for Estimation for Sparse Canonical Correlation [0.0]
Canonical Correlation Analysis (CCA) is a statistical procedure for identifying relationships between data sets.
ScSCCA is a rapidly emerging methodological area that aims for robust modeling of the interrelations between the different data modalities.
We propose a novel ScSCCA approach where we employ a Bayesian infinite factor model and aim to achieve robust estimation.
arXiv Detail & Related papers (2023-10-30T15:14:25Z) - Revisiting the Evaluation of Image Synthesis with GANs [55.72247435112475]
This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models.
In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set.
arXiv Detail & Related papers (2023-04-04T17:54:32Z) - The effectiveness of factorization and similarity blending [0.0]
Collaborative Filtering (CF) is a technique that leverages past user preference data to identify behavioural patterns and exploit them to predict custom recommendations.
We show that blending factorization-based and similarity-based approaches can lead to a significant error decrease (-9.4%) on stand-alone models.
We propose a novel extension of a similarity model, SCSR, which consistently reduces the complexity of the original algorithm.
arXiv Detail & Related papers (2022-09-16T13:11:27Z) - Exploiting Temporal Structures of Cyclostationary Signals for Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
arXiv Detail & Related papers (2022-08-22T14:04:56Z) - Adversarial Feature Augmentation and Normalization for Visual
Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z) - Out-of-distribution Generalization via Partial Feature Decorrelation [72.96261704851683]
We present a novel Partial Feature Decorrelation Learning (PFDL) algorithm, which jointly optimizes a feature decomposition network and the target image classification model.
The experiments on real-world datasets demonstrate that our method can improve the backbone model's accuracy on OOD image classification datasets.
arXiv Detail & Related papers (2020-07-30T05:48:48Z) - Multiview Representation Learning for a Union of Subspaces [38.68763142172997]
We show that a proposed model and a set of simple mixtures yield improvements over standard CCA.
Our correlation-based objective meaningfully generalizes the CCA objective to a mixture of CCA models.
arXiv Detail & Related papers (2019-12-30T00:44:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.