Remote Multilinear Compressive Learning with Adaptive Compression
- URL: http://arxiv.org/abs/2109.01184v1
- Date: Thu, 2 Sep 2021 19:24:03 GMT
- Title: Remote Multilinear Compressive Learning with Adaptive Compression
- Authors: Dat Thanh Tran, Moncef Gabbouj, Alexandros Iosifidis
- Abstract summary: Multilinear Compressive Learning (MCL) is an efficient signal acquisition and learning paradigm for multidimensional signals.
We propose a novel optimization scheme that enables such a feature for MCL models.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multilinear Compressive Learning (MCL) is an efficient signal acquisition and
learning paradigm for multidimensional signals. The level of signal compression
affects the detection or classification performance of an MCL model, with higher
compression rates often associated with lower inference accuracy. However,
higher compression rates are more amenable to a wider range of applications,
especially those that require low operating bandwidth and minimal energy
consumption such as Internet-of-Things (IoT) applications. Many communication
protocols provide support for adaptive data transmission to maximize the
throughput and minimize energy consumption. By developing compressive sensing
and learning models that can operate with an adaptive compression rate, we can
maximize the informational content throughput of the whole application. In this
paper, we propose a novel optimization scheme that enables such a feature for
MCL models. Our proposal enables practical implementation of adaptive
compressive signal acquisition and inference systems. Experimental results
demonstrate that the proposed approach can not only significantly reduce the amount of
computation required during the training phase of remote learning systems but
also improve the informational content throughput via adaptive-rate sensing.
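MCL's separable sensing can be sketched as mode-wise projections of the signal tensor; the measurement rate is then set by the row counts of the projection matrices, which is what an adaptive-rate scheme would switch between. A minimal NumPy sketch (the shapes and random sensing matrices are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

# Multilinear compressive sensing of a 3-way signal X (H x W x C):
# Y = X x1 P1 x2 P2 x3 P3, where the Pi are mode-wise sensing matrices.
# Adapting the compression rate amounts to swapping in Pi with fewer rows.
def multilinear_compress(X, P1, P2, P3):
    Y = np.einsum('ijk,ai->ajk', X, P1)  # mode-1 product
    Y = np.einsum('ajk,bj->abk', Y, P2)  # mode-2 product
    Y = np.einsum('abk,ck->abc', Y, P3)  # mode-3 product
    return Y

rng = np.random.default_rng(0)
X = rng.standard_normal((32, 32, 3))
# One operating point: halve the first two modes, keep channels intact.
P1 = rng.standard_normal((16, 32))
P2 = rng.standard_normal((16, 32))
P3 = np.eye(3)
Y = multilinear_compress(X, P1, P2, P3)
print(Y.shape)          # (16, 16, 3)
print(Y.size / X.size)  # 0.25
```

Because the sensing is separable per mode, a device can hold several small projection matrices and pick the pair matching the channel's current bandwidth budget.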
Related papers
- Accelerating Communication in Deep Learning Recommendation Model Training with Dual-Level Adaptive Lossy Compression [10.233937665979694]
DLRM is a state-of-the-art recommendation system model that has gained widespread adoption across various industry applications.
A significant bottleneck in this process is the time-consuming all-to-all communication required to collect embedding data from all devices.
We introduce a method that employs error-bounded lossy compression to reduce the communication data size and accelerate DLRM training.
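The error-bounded idea can be illustrated with plain uniform quantization, which guarantees a user-set absolute error on every decoded value. This is a minimal sketch of the general principle, not the paper's dual-level algorithm:

```python
import numpy as np

# Error-bounded lossy compression via uniform quantization:
# every decoded value differs from the original by at most `bound`.
def encode(x, bound):
    # Map each value to an integer bin of width 2 * bound.
    return np.round(x / (2 * bound)).astype(np.int32)

def decode(q, bound):
    return q * (2 * bound)

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
bound = 0.01
q = encode(x, bound)        # small integers: cheap to entropy-code and send
x_hat = decode(q, bound)
print(float(np.max(np.abs(x - x_hat))))  # stays within the error bound
```

The integer codes are highly compressible, which is where the communication savings come from in practice.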
arXiv Detail & Related papers (2024-07-05T05:55:18Z)
- Communication-Efficient Distributed Learning with Local Immediate Error Compensation [95.6828475028581]
We propose the Local Immediate Error Compensated SGD (LIEC-SGD) optimization algorithm.
LIEC-SGD is superior to previous works in either the convergence rate or the communication cost.
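The general mechanism behind error-compensated compressed SGD can be sketched with top-k gradient sparsification plus a residual that feeds the dropped coordinates back into the next step. A hedged illustration of the family of methods, not LIEC-SGD's exact update:

```python
import numpy as np

# Error-feedback SGD: transmit only the k largest gradient entries,
# remember what was dropped, and add it back before the next compression.
def topk_compress(g, k):
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

w = np.zeros(10)
residual = np.zeros(10)
lr = 0.1
for _ in range(100):
    grad = 2 * (w - 1.0)         # gradient of ||w - 1||^2
    corrected = grad + residual  # re-inject previously dropped error
    sent = topk_compress(corrected, k=3)
    residual = corrected - sent  # remember what was not transmitted
    w -= lr * sent
print(float(np.max(np.abs(w - 1.0))))  # converges despite 70% sparsification
```

Without the residual term, permanently dropped coordinates would never be updated; error feedback is what preserves convergence under aggressive compression.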
arXiv Detail & Related papers (2024-02-19T05:59:09Z)
- Communication-Efficient Federated Learning through Adaptive Weight Clustering and Server-Side Distillation [10.541541376305245]
Federated Learning (FL) is a promising technique for the collaborative training of deep neural networks across multiple devices.
FL is hindered by excessive communication costs due to repeated server-client communication during training.
We propose FedCompress, a novel approach that combines dynamic weight clustering and server-side knowledge distillation.
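Weight clustering in this setting means replacing each weight with the nearest of K shared centroids, so a client transmits only the centroid table plus small integer indices. A minimal 1-D k-means sketch of the idea (illustrative only, not FedCompress's dynamic scheme):

```python
import numpy as np

# Cluster weights into K centroids; ship K floats + per-weight indices
# instead of the full float array.
def cluster_weights(w, K, iters=20):
    centroids = np.linspace(w.min(), w.max(), K)
    for _ in range(iters):  # plain 1-D k-means
        assign = np.argmin(np.abs(w[:, None] - centroids[None, :]), axis=1)
        for k in range(K):
            if np.any(assign == k):       # skip empty clusters
                centroids[k] = w[assign == k].mean()
    return centroids[assign], assign, centroids

rng = np.random.default_rng(3)
w = rng.standard_normal(1000).astype(np.float32)
w_q, assign, centroids = cluster_weights(w, K=16)
# 1000 float32 weights -> 16 centroids + 1000 4-bit indices
print(w_q.shape, len(centroids))
```

With K = 16, each index needs only 4 bits, an ~8x reduction over raw 32-bit weights before any entropy coding.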
arXiv Detail & Related papers (2024-01-25T14:49:15Z)
- LLIC: Large Receptive Field Transform Coding with Adaptive Weights for Learned Image Compression [27.02281402358164]
We propose Large Receptive Field Transform Coding with Adaptive Weights for Learned Image Compression.
We introduce a few large kernel-based depth-wise convolutions to reduce redundancy while maintaining modest complexity.
Our LLIC models achieve state-of-the-art performance and better trade-offs between performance and complexity.
arXiv Detail & Related papers (2023-04-19T11:19:10Z)
- Performance Indicator in Multilinear Compressive Learning [106.12874293597754]
The Multilinear Compressive Learning (MCL) framework was proposed to efficiently optimize the sensing and learning steps when working with multidimensional signals.
In this paper, we analyze the relationship between the input signal resolution, the number of compressed measurements and the learning performance of MCL.
arXiv Detail & Related papers (2020-09-22T11:27:50Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
- Multilinear Compressive Learning with Prior Knowledge [106.12874293597754]
Multilinear Compressive Learning (MCL) framework combines Multilinear Compressive Sensing and Machine Learning into an end-to-end system.
The key idea behind MCL is the assumption that there exists a tensor subspace which can capture the essential features of the signal for the downstream learning task.
In this paper, we propose a novel solution to address both of the aforementioned requirements, i.e., how to find those tensor subspaces in which the signals of interest are highly separable.
arXiv Detail & Related papers (2020-02-17T19:06:05Z)
- End-to-End Facial Deep Learning Feature Compression with Teacher-Student Enhancement [57.18801093608717]
We propose a novel end-to-end feature compression scheme by leveraging the representation and learning capability of deep neural networks.
In particular, the extracted features are compactly coded in an end-to-end manner by optimizing the rate-distortion cost.
We verify the effectiveness of the proposed model with the facial feature, and experimental results reveal better compression performance in terms of rate-accuracy.
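Optimizing a rate-distortion cost means trading bits against reconstruction error via a Lagrangian R + lambda * D. A toy sketch (uniform quantization with rate estimated by empirical entropy; the step sizes and lambda are illustrative assumptions, not the paper's learned codec):

```python
import numpy as np

# Sweep a quantization step and pick the one minimizing R + lambda * D.
def rd_cost(f, step, lam):
    q = np.round(f / step)
    d = np.mean((f - q * step) ** 2)            # distortion: MSE
    _, counts = np.unique(q, return_counts=True)
    p = counts / counts.sum()
    r = -np.sum(p * np.log2(p))                 # rate: bits per symbol
    return r + lam * d

rng = np.random.default_rng(4)
feat = rng.standard_normal(5000)                # stand-in for a feature vector
steps = [0.05, 0.1, 0.2, 0.5, 1.0]
best = min(steps, key=lambda s: rd_cost(feat, s, lam=100.0))
print(best)
```

A learned codec does the same trade-off end to end, with a neural transform in place of the fixed quantizer and lambda selecting the operating point on the rate-accuracy curve.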
arXiv Detail & Related papers (2020-02-10T10:08:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.