Invariant Feature Coding using Tensor Product Representation
- URL: http://arxiv.org/abs/1906.01857v3
- Date: Wed, 8 Mar 2023 07:57:17 GMT
- Title: Invariant Feature Coding using Tensor Product Representation
- Authors: Yusuke Mukuta and Tatsuya Harada
- Abstract summary: We prove that the group-invariant feature vector contains sufficient discriminative information when learning a linear classifier.
A novel feature model that explicitly considers the group action is proposed for principal component analysis and k-means clustering.
- Score: 75.62232699377877
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, a novel feature coding method that exploits invariance to
transformations represented by a finite group of orthogonal matrices is
proposed. We prove that the group-invariant feature vector contains sufficient
discriminative information when learning a linear classifier using convex loss
minimization. Based on this result, a novel feature model that explicitly
considers the group action is proposed for principal component analysis and
k-means clustering, which are commonly used in most feature coding methods, as
well as for global feature functions. Although the global feature functions are in general complex
nonlinear functions, the group action on this space can be easily calculated by
constructing these functions as tensor-product representations of basic
representations, resulting in an explicit form of invariant feature functions.
The effectiveness of our method is demonstrated on several image datasets.
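The sketch below is a minimal illustration of the group-averaging idea behind this construction, not the authors' code: a degree-2 tensor-product (outer-product) feature is built from a descriptor, a finite group of orthogonal matrices acts on that feature space through the Kronecker product g ⊗ g, and averaging over the group yields a vector that is invariant to the transformations. The group, descriptor, and feature degree below are illustrative assumptions; the paper's PCA- and k-means-based coding models are not reproduced here.
```python
import numpy as np

# Toy finite group of orthogonal matrices: the four planar rotations by multiples of 90 degrees.
# Any finite group of orthogonal matrices would do; this choice is purely illustrative.
def rotation(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

GROUP = [rotation(k * np.pi / 2) for k in range(4)]

def tensor_feature(x):
    """Degree-2 tensor-product feature: the outer product of x with itself, flattened."""
    return np.outer(x, x).ravel()

def act(g, phi):
    """The group acts on the degree-2 tensor-product space through g kron g."""
    return np.kron(g, g) @ phi

def invariant_feature(x):
    """Group average (Reynolds operator) of the tensor-product feature: an invariant vector."""
    phi = tensor_feature(x)
    return np.mean([act(g, phi) for g in GROUP], axis=0)

# Sanity check: transforming the input by any group element leaves the coded feature unchanged.
x = np.random.randn(2)
for g in GROUP:
    assert np.allclose(invariant_feature(g @ x), invariant_feature(x))
print(invariant_feature(x))
```
Averaging over a finite group is a projection onto the subspace of the tensor-product representation on which the group acts trivially, which is one way to read the abstract's "explicit form of invariant feature functions".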
Related papers
- Equivariance with Learned Canonicalization Functions [77.32483958400282]
We show that learning a small neural network to perform canonicalization is better than using predefined canonicalization functions.
Our experiments show that learning the canonicalization function is competitive with existing techniques for learning equivariant functions across many tasks.
arXiv Detail & Related papers (2022-11-11T21:58:15Z)
- Object Representations as Fixed Points: Training Iterative Refinement Algorithms with Implicit Differentiation [88.14365009076907]
Iterative refinement is a useful paradigm for representation learning.
We develop an implicit differentiation approach that improves the stability and tractability of training.
arXiv Detail & Related papers (2022-07-02T10:00:35Z)
- Functional Nonlinear Learning [0.0]
We propose a functional nonlinear learning (FunNoL) method to represent multivariate functional data in a lower-dimensional feature space.
We show that FunNoL provides satisfactory curve classification and reconstruction regardless of data sparsity.
arXiv Detail & Related papers (2022-06-22T23:47:45Z)
- NOMAD: Nonlinear Manifold Decoders for Operator Learning [17.812064311297117]
Supervised learning in function spaces is an emerging area of machine learning research.
We show NOMAD, a novel operator learning framework with a nonlinear decoder map capable of learning finite dimensional representations of nonlinear submanifolds in function spaces.
arXiv Detail & Related papers (2022-06-07T19:52:44Z)
- Provable General Function Class Representation Learning in Multitask Bandits and MDPs [58.624124220900306]
Multitask representation learning is a popular approach in reinforcement learning to boost sample efficiency.
In this work, we extend the analysis to general function class representations.
We theoretically validate the benefit of multitask representation learning with general function classes for bandits and linear MDPs.
arXiv Detail & Related papers (2022-05-31T11:36:42Z)
- Feature Weighted Non-negative Matrix Factorization [92.45013716097753]
We propose Feature Weighted Non-negative Matrix Factorization (FNMF) in this paper.
FNMF learns the weights of features adaptively according to their importance.
It can be solved efficiently with the proposed optimization algorithm.
arXiv Detail & Related papers (2021-03-24T21:17:17Z)
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
- Fuzzy Integral = Contextual Linear Order Statistic [0.0]
The fuzzy integral is a powerful parametric nonlinear function with utility in a wide range of applications.
We show that it can be represented by a set of contextual linear order statistics.
arXiv Detail & Related papers (2020-07-06T16:37:36Z)
- BasisVAE: Translation-invariant feature-level clustering with Variational Autoencoders [9.51828574518325]
Variational Autoencoders (VAEs) provide a flexible and scalable framework for non-linear dimensionality reduction.
We show how a collapsed variational inference scheme leads to scalable and efficient inference for BasisVAE.
arXiv Detail & Related papers (2020-03-06T23:10:52Z)