Performance Indicator in Multilinear Compressive Learning
- URL: http://arxiv.org/abs/2009.10456v1
- Date: Tue, 22 Sep 2020 11:27:50 GMT
- Title: Performance Indicator in Multilinear Compressive Learning
- Authors: Dat Thanh Tran, Moncef Gabbouj, Alexandros Iosifidis
- Abstract summary: The Multilinear Compressive Learning (MCL) framework was proposed to efficiently optimize the sensing and learning steps when working with multidimensional signals.
In this paper, we analyze the relationship between the input signal resolution, the number of compressed measurements and the learning performance of MCL.
- Score: 106.12874293597754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the Multilinear Compressive Learning (MCL) framework was proposed to efficiently optimize the sensing and learning steps when working with multidimensional signals, i.e., tensors. In Compressive Learning in general, and in MCL in particular, the number of compressed measurements captured by a compressive sensing device characterizes the storage requirement or the bandwidth requirement for transmission. This number, however, does not completely characterize the learning performance of an MCL system. In this paper, we analyze the relationship between the input signal resolution, the number of compressed measurements, and the learning performance of MCL. Our empirical analysis shows that the reconstruction error obtained at the initialization step of MCL strongly correlates with the learning performance, and thus can act as a good indicator for efficiently characterizing the learning performance obtained from different sensor configurations without optimizing the entire system.
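As a hedged illustration of this indicator, the sketch below (Python/NumPy; the HOSVD-style initialization, all function names, and the toy configurations are assumptions for illustration, not the authors' implementation) compresses tensors mode-wise, reconstructs them with pseudo-inverses of the sensing matrices, and reports the mean reconstruction error that would serve as the performance proxy for a given sensor configuration.

```python
# Minimal sketch of the initialization reconstruction-error indicator.
# Assumption: mode-wise sensing matrices come from truncated SVDs of the
# training data's mode unfoldings (HOSVD-style); the actual MCL
# initialization may differ.
import numpy as np

def mode_unfold(X, mode):
    """Unfold tensor X along `mode` into a (I_mode, prod of other dims) matrix."""
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_product(X, M, mode):
    """Mode-`mode` product of tensor X with matrix M."""
    Xm = np.moveaxis(X, mode, 0)
    out = np.tensordot(M, Xm, axes=(1, 0))
    return np.moveaxis(out, 0, mode)

def init_sensing_matrices(data, ranks):
    """One sensing matrix per mode from the leading left singular
    vectors of the stacked mode unfoldings of the training tensors."""
    mats = []
    for mode, r in enumerate(ranks):
        U = np.hstack([mode_unfold(x, mode) for x in data])
        left, _, _ = np.linalg.svd(U, full_matrices=False)
        mats.append(left[:, :r].T)  # shape (r, I_mode)
    return mats

def init_reconstruction_error(data, mats):
    """Compress each sample mode-wise, reconstruct via pseudo-inverses,
    and return the mean squared error -- the performance indicator."""
    total = 0.0
    for x in data:
        y = x
        for mode, M in enumerate(mats):
            y = mode_product(y, M, mode)                    # sensing step
        xh = y
        for mode, M in enumerate(mats):
            xh = mode_product(xh, np.linalg.pinv(M), mode)  # reconstruction
        total += np.mean((x - xh) ** 2)
    return total / len(data)

# Toy comparison: two sensor configurations with the same measurement
# budget (8*8*3 = 16*12*1 = 192), ranked by the indicator alone.
rng = np.random.default_rng(0)
train = [rng.standard_normal((32, 32, 3)) for _ in range(16)]
for ranks in [(8, 8, 3), (16, 12, 1)]:
    mats = init_sensing_matrices(train, ranks)
    print(ranks, init_reconstruction_error(train, mats))
```

Under this sketch, candidate configurations can be ranked by this error alone, without training the full MCL pipeline for each one.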
Related papers
- CoMMIT: Coordinated Instruction Tuning for Multimodal Large Language Models [68.64605538559312]
In this paper, we analyze MLLM instruction tuning from both theoretical and empirical perspectives.
Inspired by our findings, we propose a measurement to quantitatively evaluate the learning balance.
In addition, we introduce an auxiliary loss regularization method to promote updating of the generation distribution of MLLMs.
arXiv Detail & Related papers (2024-07-29T23:18:55Z)
- Understanding Multimodal Contrastive Learning and Incorporating Unpaired Data [19.72282903349282]
We study a general class of nonlinear loss functions for multimodal contrastive learning (MMCL).
We quantitatively show that the feature learning ability of MMCL can be better than that of unimodal contrastive learning applied to each modality.
When we have access to additional unpaired data, we propose a new MMCL loss that incorporates additional unpaired datasets.
arXiv Detail & Related papers (2023-02-13T10:11:05Z)
- Gradient-Based Learning of Discrete Structured Measurement Operators for Signal Recovery [16.740247586153085]
We show how to leverage gradient-based learning to solve discrete optimization problems.
Our approach is formalized by GLODISMO (Gradient-based Learning of DIscrete Structured Measurement Operators); a hedged sketch of this class of technique appears after this list.
We empirically demonstrate the performance and flexibility of GLODISMO in several signal recovery applications.
arXiv Detail & Related papers (2022-02-07T18:27:08Z)
- Adaptive neighborhood Metric learning [184.95321334661898]
We propose a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML).
ANML can be used to learn both the linear and deep embeddings.
The log-exp mean function proposed in our method gives a new perspective for reviewing deep metric learning methods; a generic sketch of a log-exp mean appears after this list.
arXiv Detail & Related papers (2022-01-20T17:26:37Z)
- Remote Multilinear Compressive Learning with Adaptive Compression [107.87219371697063]
Multilinear Compressive Learning (MCL) is an efficient signal acquisition and learning paradigm for multidimensional signals.
We propose a novel optimization scheme that enables adaptive compression for MCL models.
arXiv Detail & Related papers (2021-09-02T19:24:03Z)
- Learning Invariant Representations using Inverse Contrastive Loss [34.93395633215398]
We introduce a class of losses, termed Inverse Contrastive Loss (ICL), for learning representations that are invariant to some extraneous variable of interest.
We show that if the extraneous variable is binary, then optimizing ICL is equivalent to optimizing a regularized MMD divergence; a generic MMD sketch appears after this list.
arXiv Detail & Related papers (2021-02-16T18:29:28Z)
- ECML: An Ensemble Cascade Metric Learning Mechanism towards Face Verification [50.137924223702264]
In particular, hierarchical metric learning is executed in a cascaded manner to alleviate underfitting.
Considering the feature distribution characteristics of faces, a robust Mahalanobis metric learning method (RMML) with closed-form solution is additionally proposed.
EC-RMML is superior to state-of-the-art metric learning methods for face verification.
arXiv Detail & Related papers (2020-07-11T08:47:07Z)
- Multilinear Compressive Learning with Prior Knowledge [106.12874293597754]
The Multilinear Compressive Learning (MCL) framework combines Multilinear Compressive Sensing and Machine Learning into an end-to-end system.
The key idea behind MCL is the assumption that there exists a tensor subspace which can capture the essential features of the signal for the downstream learning task.
In this paper, we propose a novel solution to address the aforementioned requirement, i.e., how to find tensor subspaces in which the signals of interest are highly separable.
arXiv Detail & Related papers (2020-02-17T19:06:05Z)
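The GLODISMO entry above names gradient-based learning of discrete measurement operators but gives no mechanism. The following is only a generic sketch of one standard trick for that problem class, a straight-through estimator over a binarized measurement matrix; the module, the toy objective, and all names are assumptions, not GLODISMO's actual algorithm.

```python
# Generic straight-through sketch for learning a binary measurement
# matrix by gradient descent -- an illustration of the problem class,
# not the paper's method.
import torch

class BinaryMeasurement(torch.nn.Module):
    def __init__(self, m, n):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.randn(m, n))

    def forward(self, x):
        soft = torch.sigmoid(self.logits)
        hard = (soft > 0.5).float()
        # Straight-through: hard values forward, soft gradients backward.
        A = hard + soft - soft.detach()
        return x @ A.t()

# Toy usage: learn binary measurements that a linear decoder can invert.
torch.manual_seed(0)
n, m = 64, 16
phi = BinaryMeasurement(m, n)
decoder = torch.nn.Linear(m, n)
opt = torch.optim.Adam(list(phi.parameters()) + list(decoder.parameters()), lr=1e-2)
for step in range(200):
    x = torch.randn(128, n)
    loss = torch.nn.functional.mse_loss(decoder(phi(x)), x)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```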
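The ANML entry above names a log-exp mean function without defining it. Purely as a hedged illustration of the general construct (not necessarily ANML's exact formulation), the snippet below implements the standard log-sum-exp mean, which interpolates between the arithmetic mean and the maximum of a set of pairwise distances as the temperature grows.

```python
# Hedged sketch of a generic log-exp mean over pairwise distances.
# lam -> 0 recovers the arithmetic mean; lam -> +inf approaches the max.
# A standard construct, not necessarily ANML's exact definition.
import math
import torch

def log_exp_mean(d, lam=1.0):
    """(1/lam) * log(mean(exp(lam * d))) over the last dimension."""
    n = d.shape[-1]
    return (torch.logsumexp(lam * d, dim=-1) - math.log(n)) / lam

# Usage: soft aggregation of distances from an anchor to its neighbors.
d = torch.tensor([0.2, 0.5, 1.5])
for lam in (0.1, 1.0, 10.0):
    print(lam, log_exp_mean(d, lam).item())  # drifts from ~mean toward max
```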
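Similarly, the Inverse Contrastive Loss entry states an equivalence to a regularized MMD divergence when the extraneous variable is binary. As a hedged reference point (not the paper's derivation), the snippet below computes a standard biased MMD^2 estimate with an RBF kernel between the representations of the two groups; the kernel choice and bandwidth are assumptions.

```python
# Hedged sketch: RBF-kernel MMD^2 between representations of the two
# groups defined by a binary extraneous variable. This illustrates the
# quantity the equivalence refers to, not the ICL objective itself.
import torch

def mmd2_rbf(z0, z1, sigma=1.0):
    """Biased MMD^2 estimate with a Gaussian kernel of bandwidth sigma."""
    def k(a, b):
        d2 = torch.cdist(a, b) ** 2
        return torch.exp(-d2 / (2 * sigma ** 2))
    return k(z0, z0).mean() + k(z1, z1).mean() - 2 * k(z0, z1).mean()

# Usage: penalize dependence of representations z on a binary variable s.
z = torch.randn(256, 32, requires_grad=True)
s = torch.randint(0, 2, (256,))
penalty = mmd2_rbf(z[s == 0], z[s == 1])
penalty.backward()  # this gradient could regularize an encoder
print(penalty.item())
```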
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.