Tensor Polynomial Additive Model
- URL: http://arxiv.org/abs/2406.02980v1
- Date: Wed, 5 Jun 2024 06:23:11 GMT
- Title: Tensor Polynomial Additive Model
- Authors: Yang Chen, Ce Zhu, Jiani Liu, Yipeng Liu
- Abstract summary: TPAM preserves the inherent interpretability of additive models, facilitating transparent decision-making and the extraction of meaningful feature values.
It improves accuracy by up to 30% and compression rate by up to 5 times, while maintaining good interpretability.
- Score: 40.30621617188693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Additive models are well suited to interpretable machine learning for their clarity and simplicity. However, in classical models for high-order data, the vectorization operation disrupts the data structure, which may lead to degraded accuracy and increased computational complexity. To deal with these problems, we propose the tensor polynomial additive model (TPAM). It retains the multidimensional structure information of high-order inputs through tensor representation. Model parameter compression is achieved using a hierarchical and low-order symmetric tensor approximation. In this way, complex high-order feature interactions can be captured with fewer parameters. Moreover, TPAM preserves the inherent interpretability of additive models, facilitating transparent decision-making and the extraction of meaningful feature values. Additionally, leveraging TPAM's transparency and ability to handle higher-order features, it is used as a post-processing module for other interpretation methods by introducing two variants of class activation maps. Experimental results on a series of datasets demonstrate that TPAM can improve accuracy by up to 30% and compression rate by up to 5 times, while maintaining good interpretability.
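As a rough illustration of the compression idea in the abstract (all names and details below are ours, not the paper's): a degree-2 polynomial additive model can represent its pairwise-interaction matrix with a rank-R symmetric factorization, storing O(R·d) parameters instead of the O(d²) a dense interaction matrix would need.

```python
import numpy as np

class LowRankPolyAdditive:
    """Degree-2 additive model with a rank-R symmetric factorization of the
    pairwise-interaction matrix. A generic sketch in the spirit of TPAM's
    parameter compression; this is NOT the paper's exact architecture."""

    def __init__(self, d, rank, seed=0):
        rng = np.random.default_rng(seed)
        self.b = 0.0                            # bias
        self.w = rng.normal(0, 0.1, d)          # first-order weights
        # Low-rank factors: interaction matrix W ~= V^T V (restricted to PSD;
        # per-factor signs would generalize it to any symmetric W).
        self.V = rng.normal(0, 0.1, (rank, d))

    def predict(self, x):
        # Second-order term x^T (V^T V) x computed in O(rank * d)
        # without ever materializing the d x d interaction matrix.
        linear = self.w @ x
        quad = float(np.sum((self.V @ x) ** 2))
        return self.b + linear + quad

    def n_params(self):
        return 1 + self.w.size + self.V.size
```

For d features, a dense symmetric order-2 interaction term needs d(d+1)/2 parameters, while the rank-R factorization needs only R·d, which is the kind of compression the abstract's "up to 5 times" figure refers to.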
Related papers
- Exploring Beyond Logits: Hierarchical Dynamic Labeling Based on Embeddings for Semi-Supervised Classification [49.09505771145326]
We propose a Hierarchical Dynamic Labeling (HDL) algorithm that does not depend on model predictions and utilizes image embeddings to generate sample labels.
Our approach has the potential to change the paradigm of pseudo-label generation in semi-supervised learning.
arXiv Detail & Related papers (2024-04-26T06:00:27Z) - Quantized Fourier and Polynomial Features for more Expressive Tensor Network Models [9.18287948559108]
We exploit the tensor structure present in the features by constraining the model weights to be an underparametrized tensor network.
We show that, for the same number of model parameters, the resulting quantized models have a higher bound on the VC-dimension as opposed to their non-quantized counterparts.
arXiv Detail & Related papers (2023-09-11T13:18:19Z) - Scaling Pre-trained Language Models to Deeper via Parameter-efficient Architecture [68.13678918660872]
We design a more capable parameter-sharing architecture based on the matrix product operator (MPO).
MPO decomposition can reorganize and factorize the information of a parameter matrix into two parts.
Our architecture shares the central tensor across all layers for reducing the model size.
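The factorization step this entry describes can be sketched with a plain SVD split of a reshaped weight matrix (a generic two-core MPO/tensor-train construction; the function names, index grouping, and shapes below are illustrative, not the paper's scheme):

```python
import numpy as np

def mpo_split(W, I, J, rank=None):
    """Split a weight matrix W of shape (I1*I2, J1*J2) into two MPO cores.
    Illustrative sketch: group indices as (I1,J1)|(I2,J2) and SVD."""
    I1, I2 = I
    J1, J2 = J
    T = W.reshape(I1, I2, J1, J2).transpose(0, 2, 1, 3)  # -> (I1, J1, I2, J2)
    M = T.reshape(I1 * J1, I2 * J2)
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = rank or len(s)                       # truncating r compresses the model
    A = (U[:, :r] * s[:r]).reshape(I1, J1, r)  # left core
    B = Vt[:r].reshape(r, I2, J2)              # right core
    return A, B

def mpo_merge(A, B):
    """Contract the two cores back into the original matrix shape."""
    I1, J1, r = A.shape
    _, I2, J2 = B.shape
    M = A.reshape(I1 * J1, r) @ B.reshape(r, I2 * J2)
    T = M.reshape(I1, J1, I2, J2).transpose(0, 2, 1, 3)
    return T.reshape(I1 * I2, J1 * J2)
```

Sharing one of the cores across layers, as the entry suggests, then reduces the total parameter count roughly in proportion to the number of layers sharing it.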
arXiv Detail & Related papers (2023-03-27T02:34:09Z) - Additive Gaussian Processes Revisited [13.158344774468413]
We propose a new class of flexible non-parametric GP models with additive structure.
With only a small number of additive low-dimensional terms, we demonstrate that the OAK model achieves similar or better predictive performance compared to black-box models.
arXiv Detail & Related papers (2022-06-20T15:52:59Z) - Adversarial Audio Synthesis with Complex-valued Polynomial Networks [60.231877895663956]
Time-frequency (TF) representations in audio have been increasingly modeled with real-valued networks.
We introduce complex-valued networks, called APOLLO, that integrate such complex-valued representations in a natural way.
APOLLO results in a 17.5% improvement over adversarial methods and 8.2% over the state-of-the-art diffusion models on SC09 in audio generation.
arXiv Detail & Related papers (2022-06-14T12:58:59Z) - Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction of two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z) - The Deep Generative Decoder: MAP estimation of representations improves modeling of single-cell RNA data [0.0]
We present a simple generative model that computes model parameters and representations directly via maximum a posteriori (MAP) estimation.
The advantages of this approach are its simplicity and its capability to provide representations of much smaller dimensionality than a comparable VAE.
arXiv Detail & Related papers (2021-10-13T12:17:46Z) - Supervised Learning for Non-Sequential Data: A Canonical Polyadic Decomposition Approach [85.12934750565971]
Efficient modelling of feature interactions underpins supervised learning for non-sequential tasks, but explicitly parameterizing all interactions quickly becomes intractable.
To alleviate this issue, it has been proposed to implicitly represent the model parameters as a tensor.
For enhanced expressiveness, we generalize the framework to allow feature mapping to arbitrarily high-dimensional feature vectors.
arXiv Detail & Related papers (2020-01-27T22:38:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.