Linear Tensor Projection Revealing Nonlinearity
- URL: http://arxiv.org/abs/2007.03912v1
- Date: Wed, 8 Jul 2020 06:10:39 GMT
- Title: Linear Tensor Projection Revealing Nonlinearity
- Authors: Koji Maruhashi, Heewon Park, Rui Yamaguchi, Satoru Miyano
- Abstract summary: Dimensionality reduction is an effective method for learning high-dimensional data.
We propose a method that searches for a subspace that maximizes the prediction accuracy while retaining as much of the original data information as possible.
- Score: 0.294944680995069
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dimensionality reduction is an effective method for learning high-dimensional
data, which can provide better understanding of decision boundaries in
human-readable low-dimensional subspace. Linear methods, such as principal
component analysis and linear discriminant analysis, make it possible to
capture the correlation between many variables; however, there is no guarantee
that the correlations that are important in predicting data can be captured.
Moreover, if the decision boundary has strong nonlinearity, providing such a
guarantee becomes increasingly difficult. This problem is exacerbated when the data are
matrices or tensors that represent relationships between variables. We propose
a learning method that searches for a subspace that maximizes the prediction
accuracy while retaining as much of the original data information as possible,
even if the prediction model in the subspace has strong nonlinearity. This
makes it easier to interpret the mechanism of the group of variables behind the
prediction problem that the user wants to understand. We show the effectiveness of
our method by applying it to various types of data including matrices and
tensors.
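A minimal sketch of the stated objective, not the paper's actual algorithm: random search over orthonormal projections W that trades off reconstruction of X (information retained) against the error of a nonlinear predictor fit in the projected subspace. The k-NN stand-in for the nonlinear model, the toy data, and the random-search loop are all illustrative assumptions.

```python
# Illustrative sketch only (not the paper's algorithm): search for an
# orthonormal projection W that balances reconstruction of X against the
# error of a nonlinear predictor (here k-NN) fit in the projected subspace.
import numpy as np

rng = np.random.default_rng(0)

def orthonormalize(W):
    # Keep the projection orthonormal via a QR decomposition.
    Q, _ = np.linalg.qr(W)
    return Q

def knn_error(Z, y, k=5):
    # Leave-one-out error of a k-NN classifier: a stand-in for an
    # arbitrary nonlinear prediction model in the subspace.
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nearest = np.argsort(d2, axis=1)[:, :k]
    votes = (y[nearest].mean(axis=1) > 0.5).astype(int)
    return float((votes != y).mean())

def objective(W, X, y, lam=1.0):
    Z = X @ W
    recon = ((X - Z @ W.T) ** 2).mean()       # information retained
    return recon + lam * knn_error(Z, y)      # plus prediction error

# Toy data: a nonlinear (circular) boundary living in 2 of 10 dimensions.
X = rng.normal(size=(200, 10))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 2.0).astype(int)

W = orthonormalize(rng.normal(size=(10, 2)))
best = objective(W, X, y)
for _ in range(300):                          # crude random-search loop
    W_new = orthonormalize(W + 0.1 * rng.normal(size=W.shape))
    f = objective(W_new, X, y)
    if f < best:
        W, best = W_new, f
print("final objective:", best)
```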
Related papers
- A Perceptron-based Fine Approximation Technique for Linear Separation [0.0]
This paper presents a novel online learning method that aims at finding a separator hyperplane between data points labelled as either positive or negative.
Weights and biases of artificial neurons can directly be related to hyperplanes in high-dimensional spaces.
The presented method is proven to converge; empirical results show that it can be more efficient than the Perceptron algorithm.
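For reference, a minimal sketch of the classic online perceptron update that such methods build on; the paper's fine-approximation refinement is not reproduced here.

```python
# Classic online perceptron (textbook baseline, not the paper's method):
# mistake-driven updates move the hyperplane w.x + b = 0 toward a separator.
import numpy as np

def perceptron(X, y, epochs=100):
    """Labels y must be in {-1, +1}; returns (w, b) of a separating hyperplane."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        mistakes = 0
        for xi, yi in zip(X, y):
            if yi * (w @ xi + b) <= 0:   # misclassified: nudge the hyperplane
                w += yi * xi
                b += yi
                mistakes += 1
        if mistakes == 0:                # converged on linearly separable data
            break
    return w, b
```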
arXiv Detail & Related papers (2023-09-12T08:35:24Z)
- Interpretation of High-Dimensional Linear Regression: Effects of Nullspace and Regularization Demonstrated on Battery Data [0.019064981263344844]
This article considers discrete measured data of underlying smooth latent processes, as is often obtained from chemical or biological systems.
The nullspace and its interplay with regularization shape the regression coefficients.
We show that regularization and z-scoring are design choices that, if chosen corresponding to prior physical knowledge, lead to interpretable regression results.
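An illustrative sketch, not taken from the paper: with more features than samples, many coefficient vectors fit the data equally well (they differ by nullspace components), and the ridge strength determines which one is returned; the toy data and lambda values are assumptions.

```python
# Underdetermined regression: n < p, so X has a nontrivial nullspace and the
# regularization strength selects among equally well-fitting coefficients.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 100))              # 20 samples, 100 features
beta_true = np.zeros(100)
beta_true[:5] = 1.0
y = X @ beta_true + 0.01 * rng.normal(size=20)

Xz = (X - X.mean(axis=0)) / X.std(axis=0)   # z-scoring as a design choice

def ridge(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

for lam in (1e-6, 1.0, 100.0):
    b = ridge(Xz, y, lam)
    # The fit stays good while the coefficient vector changes with lam.
    print(lam, round(np.linalg.norm(b), 3), round(np.linalg.norm(Xz @ b - y), 3))
```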
arXiv Detail & Related papers (2023-09-01T16:20:04Z)
- Nonlinear Feature Aggregation: Two Algorithms driven by Theory [45.3190496371625]
Real-world machine learning applications are characterized by a huge number of features, leading to computational and memory issues.
We propose a dimensionality reduction algorithm (NonLinCFA) which aggregates non-linear transformations of features with a generic aggregation function.
We also test the algorithms on synthetic and real-world datasets, performing regression and classification tasks, showing competitive performances.
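A hypothetical sketch of the general idea only, not the NonLinCFA algorithm itself: each group of features is mapped through a nonlinear transformation and collapsed by a generic aggregation function; the grouping, transform, and aggregator below are placeholder choices.

```python
# Aggregate nonlinear transformations of feature groups into single features.
import numpy as np

def aggregate_features(X, groups, transform=np.tanh, agg=np.mean):
    """One output column per group: agg(transform(X[:, group]), axis=1)."""
    return np.column_stack([agg(transform(X[:, g]), axis=1) for g in groups])

X = np.random.default_rng(0).normal(size=(100, 6))
groups = [[0, 1, 2], [3, 4], [5]]   # placeholder grouping for illustration
Z = aggregate_features(X, groups)   # reduced representation, shape (100, 3)
```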
arXiv Detail & Related papers (2023-06-19T19:57:33Z)
- Learning Linear Causal Representations from Interventions under General Nonlinear Mixing [52.66151568785088]
We prove strong identifiability results given unknown single-node interventions without access to the intervention targets.
This is the first instance of causal identifiability from non-paired interventions for deep neural network embeddings.
arXiv Detail & Related papers (2023-06-04T02:32:12Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
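A sketch of the generic Laplace approximation to the log marginal likelihood, written here for plain logistic regression; the paper's differentiable Kronecker-factored version for neural networks is not reproduced, and the prior precision is an assumed hyperparameter.

```python
# Laplace evidence: log p(D) ~ log p(D|w*) + log p(w*) + d/2 log 2pi - 1/2 log|H|,
# where H is the Hessian of the negative log joint at the MAP estimate w*.
import numpy as np

def log_evidence_laplace(X, y, w_map, prior_prec=1.0):
    p = 1.0 / (1.0 + np.exp(-(X @ w_map)))            # Bernoulli probabilities
    log_lik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
    d = w_map.size
    log_prior = (-0.5 * prior_prec * w_map @ w_map
                 - 0.5 * d * np.log(2 * np.pi / prior_prec))
    # Hessian: X^T diag(p(1-p)) X from the likelihood plus the prior precision.
    H = X.T @ ((p * (1 - p))[:, None] * X) + prior_prec * np.eye(d)
    _, logdet = np.linalg.slogdet(H)
    return log_lik + log_prior + 0.5 * d * np.log(2 * np.pi) - 0.5 * logdet
```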
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Information Theory Measures via Multidimensional Gaussianization [7.788961560607993]
Information theory is an outstanding framework to measure uncertainty, dependence and relevance in data and systems.
It has several desirable properties for real-world applications.
However, obtaining information from multidimensional data is a challenging problem due to the curse of dimensionality.
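A rough sketch of one instance of this idea, iterative Gaussianization: alternate rank-based marginal Gaussianization with random orthogonal rotations until the data look jointly Gaussian, after which closed-form Gaussian formulas for entropy and mutual information apply; the iteration count and rotation scheme are illustrative.

```python
# Iterative Gaussianization: marginal Gaussianization + random rotations.
import numpy as np
from scipy import stats

def marginal_gaussianize(X):
    # Map each column to N(0, 1) through its empirical CDF (ranks).
    n = X.shape[0]
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)
    return stats.norm.ppf((ranks + 0.5) / n)

def gaussianize(X, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    for _ in range(n_iters):
        X = marginal_gaussianize(X)
        Q, _ = np.linalg.qr(rng.normal(size=(X.shape[1], X.shape[1])))
        X = X @ Q                    # random orthogonal rotation
    return X
```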
arXiv Detail & Related papers (2020-10-08T07:22:16Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- Learning while Respecting Privacy and Robustness to Distributional Uncertainties and Adversarial Data [66.78671826743884]
The distributionally robust optimization framework is considered for training a parametric model.
The objective is to endow the trained model with robustness against adversarially manipulated input data.
Proposed algorithms offer robustness with little overhead.
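A minimal surrogate sketch, not the paper's framework: adversarial training of logistic regression against worst-case L-infinity perturbations, whose inner maximization has a closed form for a linear model; eps, the learning rate, and the step count are illustrative choices.

```python
# Min-max training: fit against the worst-case perturbation of each input.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def robust_train(X, y, eps=0.1, lr=0.1, steps=500):
    """Labels y in {0, 1}; eps bounds the L-infinity attack on each input."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # Inner maximization: for a linear model the loss-maximizing
        # perturbation is -eps*sign(w) for y=1 and +eps*sign(w) for y=0.
        X_adv = X - eps * np.sign(w) * (2 * y - 1)[:, None]
        # Outer minimization: logistic-loss gradient step on perturbed data.
        g = X_adv.T @ (sigmoid(X_adv @ w) - y) / len(y)
        w -= lr * g
    return w
```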
arXiv Detail & Related papers (2020-07-07T18:25:25Z)
- Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nyström method [76.73096213472897]
We develop techniques which exploit spectral properties of the data matrix to obtain improved approximation guarantees.
Our approach leads to significantly better bounds for datasets with known rates of singular value decay.
We show that both our improved bounds and the multiple-descent curve can be observed on real datasets simply by varying the RBF parameter.
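A short sketch of the standard Nyström approximation with an RBF kernel, the setting in which the RBF parameter is varied; the uniform column sampling and the sizes below are assumptions.

```python
# Nystrom low-rank approximation: K ~ C W^+ C^T from m sampled columns.
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def nystrom(X, m, gamma=1.0, seed=0):
    idx = np.random.default_rng(seed).choice(len(X), m, replace=False)
    C = rbf(X, X[idx], gamma)        # n x m block of the kernel matrix
    W = rbf(X[idx], X[idx], gamma)   # m x m block on the sampled points
    return C @ np.linalg.pinv(W) @ C.T

X = np.random.default_rng(1).normal(size=(300, 5))
K = rbf(X, X)
K_hat = nystrom(X, m=50)
print("relative error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```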
arXiv Detail & Related papers (2020-02-21T00:43:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.