What is the relation between Slow Feature Analysis and the Successor Representation?
- URL: http://arxiv.org/abs/2409.16991v2
- Date: Wed, 12 Mar 2025 10:41:49 GMT
- Title: What is the relation between Slow Feature Analysis and the Successor Representation?
- Authors: Eddie Seabrook, Laurenz Wiskott
- Abstract summary: Slow feature analysis (SFA) is an unsupervised method for extracting representations from time series data. The successor representation (SR) is a method for representing states in a Markov decision process (MDP) based on transition statistics. This work studies the connection between the two methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Slow feature analysis (SFA) is an unsupervised method for extracting representations from time series data. The successor representation (SR) is a method for representing states in a Markov decision process (MDP) based on transition statistics. While SFA and SR stem from distinct areas of machine learning, they share important properties, both in terms of their mathematics and the types of information they are sensitive to. This work studies their connection along these two axes. In particular, both SFA and SR are explored analytically, and in the setting of a one-hot encoded MDP, a formal equivalence is demonstrated in terms of the grid-like representations that occur as solutions/eigenvectors. Moreover, it is shown that the columns of the matrices involved in SFA contain place-like representations, which are formally distinct from place-cell models that have already been defined using SFA.
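The eigenvector-level equivalence stated in the abstract can be illustrated numerically. The following is a minimal sketch (a toy ring MDP of our own choosing, not the paper's code) showing that the SR matrix shares its eigenvectors with the transition matrix, which is the level at which the SFA/SR correspondence for a one-hot encoded MDP is stated:

```python
import numpy as np

# Toy 4-state ring MDP under a uniform random policy: from each state,
# step left or right with probability 1/2.
n = 4
T = np.zeros((n, n))
for s in range(n):
    T[s, (s - 1) % n] = 0.5
    T[s, (s + 1) % n] = 0.5

gamma = 0.9

# Successor representation: M = sum_t gamma^t T^t = (I - gamma*T)^{-1}.
M = np.linalg.inv(np.eye(n) - gamma * T)

# M is a power series in T, so every eigenvector v of T (eigenvalue lam)
# is also an eigenvector of M, with eigenvalue 1 / (1 - gamma*lam).
lam, V = np.linalg.eigh(T)  # T is symmetric for this ring, so eigh applies
```

For a reversible random walk like this one, the slow eigenvectors of T are also the slowly varying features SFA extracts from the walk, which is the sense in which the solutions of the two methods coincide.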
Related papers
- Evaluating and Designing Sparse Autoencoders by Approximating Quasi-Orthogonality [3.9230690073443166]
We show that the magnitude of sparse feature vectors can be approximated using their corresponding dense vector with a closed-form error bound.
We introduce Approximate Activation Feature (AFA), which approximates the magnitude of the ground-truth sparse feature vector.
We demonstrate that top-AFA SAEs achieve reconstruction loss comparable to that of state-of-the-art top-k SAEs.
arXiv Detail & Related papers (2025-03-31T16:22:11Z)
- MASCOTS: Model-Agnostic Symbolic COunterfactual explanations for Time Series [4.664512594743523]
We introduce MASCOTS, a method that generates meaningful and diverse counterfactual observations in a model-agnostic manner.
By operating in a symbolic feature space, MASCOTS enhances interpretability while preserving fidelity to the original data and model.
arXiv Detail & Related papers (2025-03-28T12:48:12Z)
- Model Alignment Search [0.0]
We introduce Model Alignment Search (MAS), a method for causally exploring distributed representational similarity as it relates to behavior.
We first show that the method can be used to transfer values of specific causal variables between networks with different training seeds and different architectures.
We then explore open questions in number cognition by comparing different types of numeric representations in models trained on structurally different tasks.
arXiv Detail & Related papers (2025-01-10T18:39:29Z)
- Symbolic Disentangled Representations for Images [83.88591755871734]
We propose ArSyD (Architecture for Disentanglement), which represents each generative factor as a vector of the same dimension as the resulting representation.
We study ArSyD on the dSprites and CLEVR datasets and provide a comprehensive analysis of the learned symbolic disentangled representations.
arXiv Detail & Related papers (2024-12-25T09:20:13Z)
- Identifiability Analysis of Linear ODE Systems with Hidden Confounders [45.14890063421295]
This paper presents a systematic analysis of identifiability in linear ODE systems incorporating hidden confounders.
In the first case, latent confounders exhibit no causal relationships, yet their evolution adheres to specific forms.
Subsequently, we extend this analysis to encompass scenarios where hidden confounders exhibit causal dependencies.
arXiv Detail & Related papers (2024-10-29T10:15:56Z)
- FlowSDF: Flow Matching for Medical Image Segmentation Using Distance Transforms [60.195642571004804]
We introduce FlowSDF, an image-guided conditional flow matching framework, to represent an implicit distribution of segmentation masks.
Our framework enables accurate sampling of segmentation masks and the computation of relevant statistical measures.
arXiv Detail & Related papers (2024-05-28T11:47:12Z)
- iSCAN: Identifying Causal Mechanism Shifts among Nonlinear Additive Noise Models [48.33685559041322]
This paper focuses on identifying the causal mechanism shifts in two or more related datasets over the same set of variables.
Code implementing the proposed method is open-source and publicly available at https://github.com/kevinsbello/iSCAN.
arXiv Detail & Related papers (2023-06-30T01:48:11Z)
- Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression [53.15502562048627]
Recent work has built the connection between self-supervised learning and the approximation of the top eigenspace of a graph Laplacian operator.
This work delves into a statistical analysis of augmentation-based pretraining.
arXiv Detail & Related papers (2023-06-01T15:18:55Z)
- Opening the random forest black box by the analysis of the mutual impact of features [0.0]
We propose two novel approaches, MFI and MIR, that focus on the mutual impact of features in random forests.
Both are promising tools for shedding light on the complex relationships between features and outcome.
arXiv Detail & Related papers (2023-04-05T15:03:46Z)
- Abstract Interpretation-Based Feature Importance for SVMs [8.879921160392737]
We propose a symbolic representation for support vector machines (SVMs) by means of abstract interpretation.
We derive a novel feature importance measure, called abstract feature importance (AFI), that does not depend in any way on a given dataset or on the accuracy of the SVM.
Our experimental results show that, independently of the accuracy of the SVM, our AFI measure correlates much more strongly with the stability of the SVM to feature perturbations than feature importance measures widely available in machine learning software.
arXiv Detail & Related papers (2022-10-22T13:57:44Z) - Mining Relations among Cross-Frame Affinities for Video Semantic
Segmentation [87.4854250338374]
We explore relations among affinities in two aspects: single-scale intrinsic correlations and multi-scale relations.
Our experiments demonstrate that the proposed method performs favorably against state-of-the-art VSS methods.
arXiv Detail & Related papers (2022-07-21T12:12:36Z) - GSR: A Generalized Symbolic Regression Approach [13.606672419862047]
We present a Generalized Symbolic Regression (GSR) approach in this paper.
We show that our GSR method outperforms several state-of-the-art methods on the well-known Symbolic Regression benchmark problem sets.
We highlight the strengths of GSR by introducing SymSet, a new SR benchmark set which is more challenging relative to the existing benchmarks.
arXiv Detail & Related papers (2022-05-31T07:20:17Z) - Self-Supervised Learning Disentangled Group Representation as Feature [82.07737719232972]
We show that existing Self-Supervised Learning (SSL) only disentangles simple augmentation features such as rotation and colorization.
We propose an iterative SSL algorithm: Iterative Partition-based Invariant Risk Minimization (IP-IRM).
We prove that IP-IRM converges to a fully disentangled representation and show its effectiveness on various benchmarks.
arXiv Detail & Related papers (2021-10-28T16:12:33Z) - Computing on Functions Using Randomized Vector Representations [4.066849397181077]
We call this new function encoding and computing framework Vector Function Architecture (VFA).
Our analyses and results suggest that VFAs constitute a powerful new framework for representing and manipulating functions in distributed neural systems.
arXiv Detail & Related papers (2021-09-08T04:39:48Z) - Feature Decomposition and Reconstruction Learning for Effective Facial
Expression Recognition [80.17419621762866]
We propose a novel Feature Decomposition and Reconstruction Learning (FDRL) method for effective facial expression recognition.
FDRL consists of two crucial networks: a Feature Decomposition Network (FDN) and a Feature Reconstruction Network (FRN).
arXiv Detail & Related papers (2021-04-12T02:22:45Z) - Feature Selection for Imbalanced Data with Deep Sparse Autoencoders
Ensemble [0.5352699766206808]
Class imbalance is a common issue in many domain applications of learning algorithms.
We propose a filtering FS algorithm ranking feature importance on the basis of the Reconstruction Error of a Deep Sparse AutoEncoders Ensemble.
We empirically demonstrate the efficacy of our algorithm in several experiments on high-dimensional datasets of varying sample size.
arXiv Detail & Related papers (2021-03-22T09:17:08Z) - Fundamental Limits and Tradeoffs in Invariant Representation Learning [99.2368462915979]
Many machine learning applications involve learning representations that achieve two competing goals.
A minimax game-theoretic formulation captures a fundamental tradeoff between accuracy and invariance.
We provide an information-theoretic analysis of this general and important problem under both classification and regression settings.
arXiv Detail & Related papers (2020-12-19T15:24:04Z) - Transformer-based Multi-Aspect Modeling for Multi-Aspect Multi-Sentiment
Analysis [56.893393134328996]
We propose a novel Transformer-based Multi-aspect Modeling scheme (TMM), which can capture potential relations between multiple aspects and simultaneously detect the sentiment of all aspects in a sentence.
Our method achieves noticeable improvements compared with strong baselines such as BERT and RoBERTa.
arXiv Detail & Related papers (2020-11-01T11:06:31Z) - Deep Representational Similarity Learning for analyzing neural
signatures in task-based fMRI dataset [81.02949933048332]
This paper develops Deep Representational Similarity Learning (DRSL), a deep extension of Representational Similarity Analysis (RSA).
DRSL is appropriate for analyzing similarities between various cognitive tasks in fMRI datasets with a large number of subjects.
arXiv Detail & Related papers (2020-09-28T18:30:14Z) - FLAMBE: Structural Complexity and Representation Learning of Low Rank
MDPs [53.710405006523274]
This work focuses on the representation learning question: how can we learn such features?
Under the assumption that the underlying (unknown) dynamics correspond to a low rank transition matrix, we show how the representation learning question is related to a particular non-linear matrix decomposition problem.
We develop FLAMBE, which engages in exploration and representation learning for provably efficient RL in low rank transition models.
arXiv Detail & Related papers (2020-06-18T19:11:18Z)
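The low-rank assumption in the FLAMBE entry can be made concrete with a minimal sketch (our own toy construction, not FLAMBE itself; the factor names phi and mu are illustrative): a transition matrix factored as T = phi @ mu.T has rank at most the feature dimension, even after row normalization makes it stochastic.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, rank = 20, 3

# Hypothetical rank-3 factorization T = phi @ mu.T: phi[s] is a feature
# vector for state s, mu[s'] a factor loading for next state s'.
phi = rng.random((n_states, rank))
mu = rng.random((n_states, rank))
T = phi @ mu.T
T /= T.sum(axis=1, keepdims=True)  # row-normalize into a stochastic matrix

# Row normalization is a left diagonal scaling D^{-1} (phi @ mu.T), so the
# rank stays at most 3 even though T is a full 20 x 20 stochastic matrix.
```

Representation learning in this setting amounts to recovering the factors phi and mu rather than estimating all n_states^2 transition probabilities.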
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.