The Exploration of Neural Collapse under Imbalanced Data
- URL: http://arxiv.org/abs/2411.17278v1
- Date: Tue, 26 Nov 2024 09:59:34 GMT
- Title: The Exploration of Neural Collapse under Imbalanced Data
- Authors: Haixia Liu
- Abstract summary: We consider the $L$-extended unconstrained feature model with a bias term and provide a theoretical analysis of the global minimizers.
Our findings include: (1) Features within the same class converge to their class mean, similar to both the balanced case and the imbalanced case without bias.
- Score: 0.0
- Abstract: Neural collapse, a recently identified phenomenon, describes a property of the solutions found during model training. In this paper, we explore neural collapse in the context of imbalanced data. We consider the $L$-extended unconstrained feature model with a bias term and provide a theoretical analysis of its global minimizers. Our findings include: (1) Features within the same class converge to their class mean, as in both the balanced case and the imbalanced case without bias. (2) The geometric structure resides mainly in the left orthonormal transformation of the product of the $L$ linear classifiers and the right orthonormal transformation of the class-mean matrix. (3) Some rows of the left orthonormal transformation of the product of the $L$ linear classifiers collapse to zero while the others are orthogonal, depending on the singular values of $\hat Y=(I_K-\frac{1}{N}\mathbf{n}\mathbf{1}^\top_K)D$, where $K$ is the number of classes, $\mathbf{n}$ is the vector of per-class sample sizes, $N$ is the total sample size, and $D$ is the diagonal matrix whose diagonal entries are given by $\sqrt{\mathbf{n}}$. Similar results hold for the columns of the right orthonormal transformation of the product of the class-mean matrix and $D$. (4) The $i$-th row of the left orthonormal transformation of the product of the $L$ linear classifiers aligns with the $i$-th column of the right orthonormal transformation of the product of the class-mean matrix and $D$. (5) We provide estimates of the singular values of $\hat Y$. Our numerical experiments support these theoretical findings.
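The construction of $\hat Y$ and the role of its singular values can be checked numerically. Below is a minimal sketch (not the authors' code), assuming $N = \sum_k n_k$ is the total sample size and using an arbitrary imbalanced class-size vector:

```python
import numpy as np

# Minimal sketch: build Y_hat = (I_K - (1/N) n 1_K^T) D from the abstract,
# where n is the per-class sample-size vector, N is assumed to be the total
# sample size sum(n), and D = diag(sqrt(n)). The class sizes below are an
# arbitrary imbalanced example, not values from the paper.
n = np.array([500, 100, 50, 10], dtype=float)  # hypothetical imbalanced class sizes
K = len(n)
N = n.sum()
D = np.diag(np.sqrt(n))
Y_hat = (np.eye(K) - np.outer(n, np.ones(K)) / N) @ D

# Singular values of Y_hat; by findings (3)-(5), these govern which rows of
# the left orthonormal factor collapse to zero and which remain orthogonal.
sigma = np.linalg.svd(Y_hat, compute_uv=False)
print("singular values of Y_hat:", sigma)
```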
Related papers
- Scaling Laws in Linear Regression: Compute, Parameters, and Data [86.48154162485712]
We study the theory of scaling laws in an infinite dimensional linear regression setup.
We show that the reducible part of the test error is $\Theta(M^{-(a-1)} + N^{-(a-1)/a})$, where $M$ is the model size and $N$ is the number of samples.
Our theory is consistent with the empirical neural scaling laws and verified by numerical simulation.
arXiv Detail & Related papers (2024-06-12T17:53:29Z) - One-sided Matrix Completion from Two Observations Per Row [95.87811229292056]
We propose a natural algorithm that involves imputing the missing values of the matrix $X^\top X$.
We evaluate our algorithm on one-sided recovery of synthetic data and low-coverage genome sequencing.
arXiv Detail & Related papers (2023-06-06T22:35:16Z) - Learning linear dynamical systems under convex constraints [4.4351901934764975]
We consider the problem of identification of linear dynamical systems from $T$ samples of a single trajectory.
$A^*$ can be reliably estimated for values of $T$ smaller than what is needed in the unconstrained setting.
arXiv Detail & Related papers (2023-03-27T11:49:40Z) - Dimension free ridge regression [10.434481202633458]
We revisit ridge regression on i.i.d. data, characterizing its bias and variance in terms of those of an 'equivalent' sequence model.
As a new application, we obtain a completely explicit and sharp characterization of ridge regression for Hilbert covariates with regularly varying spectrum.
arXiv Detail & Related papers (2022-10-16T16:01:05Z) - Perturbation Analysis of Randomized SVD and its Applications to High-dimensional Statistics [8.90202564665576]
We study the statistical properties of RSVD under a general "signal-plus-noise" framework.
We derive nearly-optimal performance guarantees for RSVD when applied to three statistical inference problems.
arXiv Detail & Related papers (2022-03-19T07:26:45Z) - When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z) - Classification of high-dimensional data with spiked covariance matrix structure [0.2741266294612775]
We study the classification problem for high-dimensional data with $n$ observations on $p$ features.
We propose an adaptive classifier that first performs dimension reduction on the feature vectors prior to classification in the dimensionally reduced space.
We show that the resulting classifier is Bayes optimal whenever $n \rightarrow \infty$ and $s\sqrt{n^{-1}\ln p} \rightarrow 0$.
arXiv Detail & Related papers (2021-10-05T11:26:53Z) - Spectral properties of sample covariance matrices arising from random matrices with independent non identically distributed columns [50.053491972003656]
It was previously shown that the functionals $\mathrm{tr}(AR(z))$, for $R(z) = (\frac{1}{n}XX^\top - zI_p)^{-1}$ and $A \in \mathcal{M}_p$ deterministic, have a standard deviation of order $O(\|A\|_* / \sqrt{n})$.
Here, we show that $\|\mathbb{E}[R(z)] - \tilde{R}(z)\|_F$ ...
arXiv Detail & Related papers (2021-09-06T14:21:43Z) - Truncated Linear Regression in High Dimensions [26.41623833920794]
In truncated linear regression, one observes samples $(A_i, y_i)_i$ whose dependent variable equals $y_i = A_i^{\rm T} x^* + \eta_i$, where $x^*$ is some fixed unknown vector of interest and $\eta_i$ is noise.
The goal is to recover $x^*$ under some favorable conditions on the $A_i$'s and the noise distribution.
We prove that there exists a computationally and statistically efficient method for recovering $k$-sparse $n$-dimensional vectors $x^*$ from $m$ truncated samples.
arXiv Detail & Related papers (2020-07-29T00:31:34Z) - Linear Time Sinkhorn Divergences using Positive Features [51.50788603386766]
Solving optimal transport with an entropic regularization requires computing an $n \times n$ kernel matrix that is repeatedly applied to a vector.
We propose to use instead ground costs of the form $c(x,y) = -\log\langle\varphi(x), \varphi(y)\rangle$, where $\varphi$ is a map from the ground space onto the positive orthant $\mathbb{R}^r_+$, with $r \ll n$ (a minimal numerical sketch of this low-rank kernel trick follows the list below).
arXiv Detail & Related papers (2020-06-12T10:21:40Z) - A Random Matrix Analysis of Random Fourier Features: Beyond the Gaussian Kernel, a Precise Phase Transition, and the Corresponding Double Descent [85.77233010209368]
This article characterizes the exact asymptotics of random Fourier feature (RFF) regression, in the realistic setting where the number of data samples $n$, their dimension $p$, and the number of random features $N$ are all large and comparable.
This analysis also provides accurate estimates of training and test regression errors for large $n,p,N$.
arXiv Detail & Related papers (2020-06-09T02:05:40Z)
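As referenced in the Sinkhorn entry above, a positive-feature ground cost makes the Gibbs kernel low-rank, so it can be applied to a vector in $O(nr)$ time without ever forming the $n \times n$ matrix. Below is a minimal sketch with entropic regularization $\varepsilon = 1$; the feature map `phi` (a row-wise softmax of random projections) is a hypothetical stand-in for illustration, not the paper's construction:

```python
import numpy as np

# With ground cost c(x, y) = -log <phi(x), phi(y)> and regularization eps = 1,
# the n x n Gibbs kernel K = exp(-c) factors as Phi_x @ Phi_y.T with rank
# r << n, so K @ v costs O(n r) instead of O(n^2).
rng = np.random.default_rng(0)
n, d, r = 1000, 5, 20
x, y = rng.normal(size=(n, d)), rng.normal(size=(n, d))
W = rng.normal(size=(d, r))  # random projection (assumption, for illustration)

def phi(z):
    # Map into the positive orthant R^r_+ via a row-wise softmax (assumption).
    s = z @ W
    e = np.exp(s - s.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

v = rng.random(n)
Kv_fast = phi(x) @ (phi(y).T @ v)     # O(n r): never forms the kernel matrix
Kv_full = (phi(x) @ phi(y).T) @ v     # O(n^2): explicit kernel, for checking
print(np.allclose(Kv_fast, Kv_full))  # True
```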
This list is automatically generated from the titles and abstracts of the papers in this site.