Linear Classification of Neural Manifolds with Correlated Variability
- URL: http://arxiv.org/abs/2211.14961v2
- Date: Fri, 14 Jul 2023 02:06:47 GMT
- Title: Linear Classification of Neural Manifolds with Correlated Variability
- Authors: Albert J. Wakhloo, Tamara J. Sussman, SueYeon Chung
- Abstract summary: We show how correlations between object representations affect the capacity, a measure of linear separability.
We then apply our results to accurately estimate the capacity of deep network data.
- Score: 3.3946853660795893
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding how the statistical and geometric properties of neural activity
relate to performance is a key problem in theoretical neuroscience and deep
learning. Here, we calculate how correlations between object representations
affect the capacity, a measure of linear separability. We show that for
spherical object manifolds, introducing correlations between centroids
effectively pushes the spheres closer together, while introducing correlations
between the axes effectively shrinks their radii, revealing a duality between
correlations and geometry with respect to the problem of classification. We
then apply our results to accurately estimate the capacity of deep network
data.
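Below is a minimal numerical sketch (not the authors' code) of the effect described in the abstract: it estimates how often P spherical object manifolds in N dimensions can be linearly separated under a random dichotomy, with and without centroid correlations. The construction is an assumption made for illustration: correlation rho is induced by a shared Gaussian component across centroids, points are sampled on each sphere's surface, and separability is approximated by whether a high-C linear SVM fits the labels perfectly.
```python
# Hedged illustration of manifold separability vs. centroid correlation.
import numpy as np
from sklearn.svm import LinearSVC


def sample_manifolds(P, N, D, R, rho, M, rng):
    """Sample M points from each of P spherical manifolds (dimension D, radius R) in R^N."""
    common = rng.standard_normal(N)                # shared centroid component
    labels = rng.choice([-1.0, 1.0], size=P)       # random dichotomy of the manifolds
    X, y = [], []
    for i in range(P):
        # Centroids with pairwise correlation ~rho, scaled to unit length scale.
        centroid = (np.sqrt(1.0 - rho) * rng.standard_normal(N)
                    + np.sqrt(rho) * common) / np.sqrt(N)
        axes, _ = np.linalg.qr(rng.standard_normal((N, D)))   # orthonormal manifold axes
        s = rng.standard_normal((M, D))
        s /= np.linalg.norm(s, axis=1, keepdims=True)         # points on the D-sphere
        X.append(centroid + R * s @ axes.T)
        y.append(np.full(M, labels[i]))
    return np.vstack(X), np.concatenate(y)


def frac_separable(P, N, D=5, R=0.3, rho=0.0, M=20, trials=25, seed=0):
    """Fraction of random dichotomies that are linearly separable (SVM approximation)."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        X, y = sample_manifolds(P, N, D, R, rho, M, rng)
        clf = LinearSVC(C=1e6, max_iter=50_000).fit(X, y)   # ~hard-margin linear classifier
        hits += int(clf.score(X, y) == 1.0)                 # separable iff perfectly fit
    return hits / trials


if __name__ == "__main__":
    N = 100
    for rho in (0.0, 0.5):
        # Separability probability versus load alpha = P / N. Per the abstract,
        # correlated centroids act like spheres pushed closer together, which
        # should lower separability at a given load.
        curve = {P / N: frac_separable(P, N, rho=rho) for P in (50, 100, 150, 200)}
        print(f"centroid correlation rho = {rho}: {curve}")
```
Sweeping rho while holding the radius fixed makes the abstract's duality concrete in simulation: the same drop in separability can be produced either by correlating the centroids or by moving them closer together.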
Related papers
- Intrinsic Dimension Correlation: uncovering nonlinear connections in multimodal representations [0.4223422932643755]
This paper exploits the entanglement between intrinsic dimensionality and correlation to propose a metric that quantifies the correlation between multimodal representations.
We first validate our method on synthetic data in controlled environments, showcasing its advantages and drawbacks compared to existing techniques.
We extend our analysis to large-scale applications in neural network representations.
arXiv Detail & Related papers (2024-06-22T10:36:04Z) - Learning Complete Topology-Aware Correlations Between Relations for Inductive Link Prediction [121.65152276851619]
We show that semantic correlations between relations are inherently edge-level and entity-independent.
We propose a novel subgraph-based method, namely TACO, to model Topology-Aware COrrelations between relations.
To further exploit the potential of RCN, we propose the Complete Common Neighbor induced subgraph.
arXiv Detail & Related papers (2023-09-20T08:11:58Z) - Towards a mathematical understanding of learning from few examples with nonlinear feature maps [68.8204255655161]
We consider the problem of data classification where the training set consists of just a few data points.
We reveal key relationships between the geometry of an AI model's feature space, the structure of the underlying data distributions, and the model's generalisation capabilities.
arXiv Detail & Related papers (2022-11-07T14:52:58Z) - Learning Neural Causal Models with Active Interventions [83.44636110899742]
We introduce an active intervention-targeting mechanism which enables a quick identification of the underlying causal structure of the data-generating process.
Our method significantly reduces the required number of interactions compared with random intervention targeting.
We demonstrate superior performance on multiple benchmarks from simulated to real-world data.
arXiv Detail & Related papers (2021-09-06T13:10:37Z) - The Interplay Between Implicit Bias and Benign Overfitting in Two-Layer Linear Networks [51.1848572349154]
Neural network models that perfectly fit noisy data can generalize well to unseen test data.
We consider interpolating two-layer linear neural networks trained with gradient flow on the squared loss and derive bounds on the excess risk.
arXiv Detail & Related papers (2021-08-25T22:01:01Z) - Causal Inference in Geosciences with Kernel Sensitivity Maps [9.800027003240674]
We propose a framework to derive cause-effect relations from pairs of variables via regression and dependence estimation.
Results on a large collection of 28 geoscience causal inference problems demonstrate the method's effectiveness.
arXiv Detail & Related papers (2020-12-07T21:13:21Z) - Estimating Causal Effects with the Neural Autoregressive Density Estimator [6.59529078336196]
We use neural autoregressive density estimators to estimate causal effects within Pearl's do-calculus framework.
We show that the approach can retrieve causal effects from non-linear systems without explicitly modeling the interactions between the variables.
arXiv Detail & Related papers (2020-08-17T13:12:38Z) - Unsupervised Heterogeneous Coupling Learning for Categorical Representation [50.1603042640492]
This work introduces a UNsupervised heTerogeneous couplIng lEarning (UNTIE) approach for representing coupled categorical data by untying the interactions between couplings.
UNTIE is efficiently optimized w.r.t. a kernel k-means objective function for unsupervised representation learning of heterogeneous and hierarchical value-to-object couplings.
The UNTIE-learned representations yield significant performance improvements over state-of-the-art categorical representations and deep representation models.
arXiv Detail & Related papers (2020-07-21T11:23:27Z) - Weakly-correlated synapses promote dimension reduction in deep neural networks [1.7532045941271799]
How synaptic correlations affect neural correlations to produce disentangled hidden representations remains elusive.
We propose a model of dimension reduction, taking into account pairwise correlations among synapses.
Our theory determines the synaptic-correlation scaling form requiring only mathematical self-consistency.
arXiv Detail & Related papers (2020-06-20T13:11:37Z) - On Disentangled Representations Learned From Correlated Data [59.41587388303554]
We bridge the gap to real-world scenarios by analyzing the behavior of the most prominent disentanglement approaches on correlated data.
We show that systematically induced correlations in the dataset are being learned and reflected in the latent representations.
We also demonstrate how to resolve these latent correlations, either using weak supervision during training or by post-hoc correcting a pre-trained model with a small number of labels.
arXiv Detail & Related papers (2020-06-14T12:47:34Z)