Multilinear Compressive Learning with Prior Knowledge
- URL: http://arxiv.org/abs/2002.07203v1
- Date: Mon, 17 Feb 2020 19:06:05 GMT
- Title: Multilinear Compressive Learning with Prior Knowledge
- Authors: Dat Thanh Tran, Moncef Gabbouj, Alexandros Iosifidis
- Abstract summary: Multilinear Compressive Learning (MCL) framework combines Multilinear Compressive Sensing and Machine Learning into an end-to-end system.
Key idea behind MCL is the assumption of the existence of a tensor subspace which can capture the essential features from the signal for the downstream learning task.
In this paper, we propose a novel solution to address both of the aforementioned requirements, i.e., How to find those tensor subspaces in which the signals of interest are highly separable? and How to optimize the sensing and feature synthesis components to transform the original signals onto that data manifold?
- Score: 106.12874293597754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recently proposed Multilinear Compressive Learning (MCL) framework
combines Multilinear Compressive Sensing and Machine Learning into an
end-to-end system that takes into account the multidimensional structure of the
signals when designing the sensing and feature synthesis components. The key
idea behind MCL is the assumption of the existence of a tensor subspace which
can capture the essential features from the signal for the downstream learning
task. Thus, the ability to find such a discriminative tensor subspace and
optimize the system to project the signals onto that data manifold plays an
important role in Multilinear Compressive Learning. In this paper, we propose a
novel solution to address both of the aforementioned requirements, i.e., How to
find those tensor subspaces in which the signals of interest are highly
separable? and How to optimize the sensing and feature synthesis components to
transform the original signals to the data manifold found in the first
question? In our proposal, the discovery of a high-quality data manifold is
conducted by training a nonlinear compressive learning system on the inference
task. Its knowledge of the data manifold of interest is then progressively
transferred to the MCL components via multi-stage supervised training with the
supervisory information encoding what the compressed measurements, the
synthesized features, and the predictions should look like. The proposed
knowledge transfer algorithm also comes with a semi-supervised adaptation that
enables compressive learning models to utilize unlabeled data effectively.
Extensive experiments demonstrate that the proposed knowledge transfer method
can effectively train MCL models to compressively sense and synthesize better
features for the learning tasks, with improved performance especially as the
complexity of the learning task increases.
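The multi-stage supervision described above can be sketched in a few lines. This is a minimal, hypothetical illustration only: the separable sensing matrices, the identity feature-synthesis step, the linear classifier, and the random teacher targets are all stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy batch of 8 two-dimensional "signals" of size 4x4 (hypothetical shapes).
X = rng.standard_normal((8, 4, 4))

# Multilinear (separable) sensing: one projection per mode, 4 -> 2 each,
# mirroring MCL's mode-wise sensing operators.
P1 = 0.5 * rng.standard_normal((2, 4))  # row-mode sensing matrix
P2 = 0.5 * rng.standard_normal((2, 4))  # column-mode sensing matrix

def sense(x):
    # Compressed measurement Z[n] = P1 @ x[n] @ P2.T for each sample n.
    return np.einsum('ar,nrc,bc->nab', P1, x, P2)

Z = sense(X)              # compressed measurements, shape (8, 2, 2)
F = Z.reshape(8, -1)      # synthesized features (identity synthesis here)
W = 0.1 * rng.standard_normal((4, 3))
Y = F @ W                 # predictions (logits over 3 hypothetical classes)

# Teacher targets: stand-ins for the outputs of the trained nonlinear
# compressive learning system whose knowledge is being transferred.
Z_t = rng.standard_normal(Z.shape)
F_t = rng.standard_normal(F.shape)
Y_t = rng.standard_normal(Y.shape)

# Multi-stage supervision: one matching term per stage, encoding what the
# measurements, the features, and the predictions should look like.
loss = (np.mean((Z - Z_t) ** 2)
        + np.mean((F - F_t) ** 2)
        + np.mean((Y - Y_t) ** 2))
```

In the paper the three targets come from a trained teacher network and the stages are trained progressively; here a single combined loss stands in for that schedule.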
Related papers
- Understanding Auditory Evoked Brain Signal via Physics-informed Embedding Network with Multi-Task Transformer [3.261870217889503]
We propose an innovative multi-task learning model, Physics-informed Embedding Network with Multi-Task Transformer (PEMT-Net)
PEMT-Net enhances decoding performance through physics-informed embedding and deep learning techniques.
Experiments on a specific dataset demonstrate PEMT-Net's strong performance in multi-task auditory signal decoding.
arXiv Detail & Related papers (2024-06-04T06:53:32Z) - Learning with Multigraph Convolutional Filters [153.20329791008095]
We introduce multigraph convolutional neural networks (MGNNs) as stacked and layered structures where information is processed according to an MSP model.
We also develop a procedure for tractable computation of filter coefficients in the MGNNs and a low cost method to reduce the dimensionality of the information transferred between layers.
arXiv Detail & Related papers (2022-10-28T17:00:50Z) - Sample-Efficient Reinforcement Learning in the Presence of Exogenous
Information [77.19830787312743]
In real-world reinforcement learning applications, the learner's observation space is typically high-dimensional, containing both relevant and irrelevant information about the task at hand.
We introduce a new problem setting for reinforcement learning, the Exogenous Decision Process (ExoMDP), in which the state space admits an (unknown) factorization into a small controllable component and a large irrelevant component.
We provide a new algorithm, ExoRL, which learns a near-optimal policy with sample complexity polynomial in the size of the endogenous component.
arXiv Detail & Related papers (2022-06-09T05:19:32Z) - ColloSSL: Collaborative Self-Supervised Learning for Human Activity
Recognition [9.652822438412903]
A major bottleneck in training robust Human Activity Recognition (HAR) models is the need for large-scale labeled sensor datasets.
Because labeling large amounts of sensor data is an expensive task, unsupervised and semi-supervised learning techniques have emerged.
We present a novel technique called Collaborative Self-Supervised Learning (ColloSSL) which leverages unlabeled data collected from multiple devices.
arXiv Detail & Related papers (2022-02-01T21:05:05Z) - Remote Multilinear Compressive Learning with Adaptive Compression [107.87219371697063]
Multilinear Compressive Learning (MCL) is an efficient signal acquisition and learning paradigm for multidimensional signals.
We propose a novel optimization scheme that enables such a feature for MCL models.
arXiv Detail & Related papers (2021-09-02T19:24:03Z) - Signal Transformer: Complex-valued Attention and Meta-Learning for
Signal Recognition [33.178794056273304]
We propose a Complex-valued Attentional MEta Learner (CAMEL) for general few-shot signal recognition problems, with theoretical convergence guarantees.
Experiments demonstrate the superiority of the proposed method in signal recognition, especially when only small amounts of data are available.
arXiv Detail & Related papers (2021-06-05T03:57:41Z) - Performance Indicator in Multilinear Compressive Learning [106.12874293597754]
The Multilinear Compressive Learning (MCL) framework was proposed to efficiently optimize the sensing and learning steps when working with multidimensional signals.
In this paper, we analyze the relationship between the input signal resolution, the number of compressed measurements and the learning performance of MCL.
arXiv Detail & Related papers (2020-09-22T11:27:50Z) - MARS: Mixed Virtual and Real Wearable Sensors for Human Activity
Recognition with Multi-Domain Deep Learning Model [21.971345137218886]
We propose to build a large database based on virtual IMUs and then address technical issues by introducing a multiple-domain deep learning framework consisting of three technical parts.
In the first part, we propose to learn single-frame human activity from noisy IMU data with hybrid convolutional neural networks (CNNs) in a semi-supervised form.
For the second part, the extracted data features are fused according to the principle of uncertainty-aware consistency.
Transfer learning is performed in the last part based on the newly released Archive of Motion Capture as Surface Shapes (AMASS) dataset.
arXiv Detail & Related papers (2020-09-20T10:35:14Z) - A Trainable Optimal Transport Embedding for Feature Aggregation and its
Relationship to Attention [96.77554122595578]
We introduce a parametrized representation of fixed size, which embeds and then aggregates elements from a given input set according to the optimal transport plan between the set and a trainable reference.
Our approach scales to large datasets and allows end-to-end training of the reference, while also providing a simple unsupervised learning mechanism with small computational cost.
arXiv Detail & Related papers (2020-06-22T08:35:58Z)
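The aggregation step described in the last entry can be sketched with entropic optimal transport. This is a toy numpy sketch under stated assumptions: uniform weights on both sides, Sinkhorn regularization, and a fixed reference set `Z` that the actual method would train end-to-end.

```python
import numpy as np

rng = np.random.default_rng(2)

def sinkhorn_plan(C, eps=0.5, iters=100):
    # Entropic OT plan between uniform marginals for a cost matrix C (n x p).
    n, p = C.shape
    a = np.full(n, 1.0 / n)          # uniform weights on the input set
    b = np.full(p, 1.0 / p)          # uniform weights on the reference
    K = np.exp(-C / eps)
    u, v = np.ones(n), np.ones(p)
    for _ in range(iters):           # Sinkhorn scaling iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]

def ot_embed(X, Z):
    # Embed a variable-size set X (n x d) into a fixed-size representation
    # (p x d): aggregate X's elements along the transport plan to Z (p x d).
    C = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)  # squared distances
    T = sinkhorn_plan(C)             # n x p plan; columns sum to 1/p
    return (T.T @ X) * Z.shape[0]    # each output row: weighted average of X

X = rng.standard_normal((7, 3))      # input set: 7 elements in R^3
Z = rng.standard_normal((4, 3))      # reference: 4 anchors (trainable in
                                     # the actual method; fixed here)
E = ot_embed(X, Z)                   # fixed-size 4 x 3 embedding
```

Because the plan's columns are normalized, each row of the output is a convex combination of input elements, which is what makes the embedding size independent of the input set size.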
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.