COAST: COntrollable Arbitrary-Sampling NeTwork for Compressive Sensing
- URL: http://arxiv.org/abs/2107.07225v1
- Date: Thu, 15 Jul 2021 10:05:00 GMT
- Title: COAST: COntrollable Arbitrary-Sampling NeTwork for Compressive Sensing
- Authors: Di You, Jian Zhang, Jingfen Xie, Bin Chen, Siwei Ma
- Abstract summary: We propose a novel COntrollable Arbitrary-Sampling neTwork, dubbed COAST, to solve CS problems with arbitrary sampling matrices (including unseen ones) using a single model.
COAST handles arbitrary sampling matrices with a single model and achieves state-of-the-art performance at high speed.
- Score: 27.870537087888334
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent deep network-based compressive sensing (CS) methods have achieved
great success. However, most of them regard different sampling matrices as
different independent tasks and need to train a specific model for each target
sampling matrix. Such practices are computationally inefficient and generalize
poorly. In this paper, we propose a novel COntrollable Arbitrary-Sampling
neTwork, dubbed COAST, to solve CS problems with arbitrary sampling matrices
(including unseen ones) using a single model. Under the optimization-inspired
deep unfolding framework, our
COAST exhibits good interpretability. In COAST, a random projection
augmentation (RPA) strategy is proposed to promote the training diversity in
the sampling space to enable arbitrary sampling, and a controllable proximal
mapping module (CPMM) and a plug-and-play deblocking (PnP-D) strategy are
further developed to dynamically modulate the network features and effectively
eliminate the blocking artifacts, respectively. Extensive experiments on widely
used benchmark datasets demonstrate that COAST not only handles arbitrary
sampling matrices with a single model but also achieves state-of-the-art
performance at high speed. The source code is available at
https://github.com/jianzhangcs/COAST.
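To make the sampling setup concrete, here is a minimal sketch of block-based CS measurement with randomly drawn Gaussian sampling matrices, i.e. the kind of sampling-space diversity that an RPA-style training strategy exploits. The function names, the block size of 33, and the matrix scaling are illustrative assumptions, not COAST's actual implementation.

```python
import numpy as np

def random_sampling_matrix(ratio, block_size=33, seed=0):
    """Draw a random Gaussian sampling matrix for block-based CS.

    ratio: CS sampling ratio (rows / columns), e.g. 0.1.
    block_size: side length of each image block (33 is common in block-based CS).
    """
    n = block_size * block_size          # flattened block dimension
    m = max(1, int(round(ratio * n)))    # measurements per block
    rng = np.random.default_rng(seed)
    phi = rng.standard_normal((m, n)) / np.sqrt(n)  # scaled Gaussian rows
    return phi

def sample_blocks(image, phi, block_size=33):
    """Split an image into non-overlapping blocks and measure each with phi."""
    h, w = image.shape
    assert h % block_size == 0 and w % block_size == 0
    measurements = []
    for i in range(0, h, block_size):
        for j in range(0, w, block_size):
            block = image[i:i + block_size, j:j + block_size].reshape(-1)
            measurements.append(phi @ block)   # y = Phi x for this block
    return np.stack(measurements)

# RPA-style diversity could be emulated by drawing a fresh matrix
# (new seed and ratio) per training batch instead of fixing one matrix.
img = np.random.rand(66, 66)
phi = random_sampling_matrix(0.1)
y = sample_blocks(img, phi)
print(y.shape)  # (4, 109): 4 blocks, 109 measurements each
```

Drawing the matrix anew each batch, rather than committing to one target matrix, is what lets a single model see many points of the sampling space during training.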
Related papers
- A Block Metropolis-Hastings Sampler for Controllable Energy-based Text Generation [78.81021361497311]
We develop a novel Metropolis-Hastings (MH) sampler that proposes re-writes of the entire sequence in each step via iterative prompting of a large language model.
Our new sampler (a) allows for more efficient and accurate sampling from a target distribution and (b) allows generation length to be determined through the sampling procedure rather than fixed in advance.
arXiv Detail & Related papers (2023-12-07T18:30:15Z)
- Amortizing intractable inference in large language models [56.92471123778389]
We use amortized Bayesian inference to sample from intractable posterior distributions.
We empirically demonstrate that this distribution-matching paradigm of LLM fine-tuning can serve as an effective alternative to maximum-likelihood training.
As an important application, we interpret chain-of-thought reasoning as a latent variable modeling problem.
arXiv Detail & Related papers (2023-10-06T16:36:08Z)
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices required by the existing algorithm.
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- NL-CS Net: Deep Learning with Non-Local Prior for Image Compressive Sensing [7.600617428107161]
Deep learning has been applied to compressive sensing (CS) of images successfully in recent years.
This paper proposes NL-CS Net, a novel CS method using a non-local prior that combines the interpretability of traditional optimization methods with the speed of network-based methods.
arXiv Detail & Related papers (2023-05-06T02:34:28Z)
- Learning Sampling Distributions for Model Predictive Control [36.82905770866734]
Sampling-based approaches have become a cornerstone of contemporary Model Predictive Control (MPC).
We propose to carry out all operations in the latent space, allowing us to take full advantage of the learned distribution.
Specifically, we frame the learning problem as bi-level optimization and show how to train the controller with backpropagation-through-time.
arXiv Detail & Related papers (2022-12-05T20:35:36Z)
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
- Distributed Dynamic Safe Screening Algorithms for Sparse Regularization [73.85961005970222]
We propose a new distributed dynamic safe screening (DDSS) method for sparsity regularized models and apply it on shared-memory and distributed-memory architectures, respectively.
We prove that the proposed method achieves the linear convergence rate with lower overall complexity and can eliminate almost all the inactive features in a finite number of iterations almost surely.
arXiv Detail & Related papers (2022-04-23T02:45:55Z)
- Off-the-grid data-driven optimization of sampling schemes in MRI [0.0]
We propose a novel learning based algorithm to generate efficient and physically plausible sampling patterns in MRI.
The method consists of a high-dimensional optimization of a cost function defined implicitly by an algorithm.
arXiv Detail & Related papers (2020-10-05T07:06:39Z)
- MOPS-Net: A Matrix Optimization-driven Network for Task-Oriented 3D Point Cloud Downsampling [86.42733428762513]
MOPS-Net is a novel interpretable deep learning-based method for task-oriented point cloud downsampling driven by matrix optimization.
We show that MOPS-Net can achieve favorable performance against state-of-the-art deep learning-based methods over various tasks.
arXiv Detail & Related papers (2020-05-01T14:01:53Z)
- Ensemble Slice Sampling: Parallel, black-box and gradient-free inference for correlated & multimodal distributions [0.0]
Slice Sampling has emerged as a powerful Markov Chain Monte Carlo algorithm that adapts to the characteristics of the target distribution with minimal hand-tuning, although its performance remains sensitive to the choice of initial length scale.
This paper introduces Ensemble Slice Sampling (ESS), a new class of algorithms that bypasses this difficulty by adaptively tuning the initial length scale.
These affine-invariant algorithms are trivial to construct, require no hand-tuning, and can easily be implemented in parallel computing environments.
arXiv Detail & Related papers (2020-02-14T19:00:12Z)
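As context for the Ensemble Slice Sampling entry above, here is a minimal sketch of the basic univariate slice sampling step (stepping-out plus shrinkage, in the style of Neal's algorithm), whose initial length scale `w` is exactly the hand-tuned quantity that ESS-style methods adapt automatically. The code is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def slice_sample(logp, x0, n_samples, w=1.0, seed=0):
    """Univariate slice sampler with stepping-out and shrinkage.

    logp: log-density up to an additive constant.
    w: initial length scale -- the tuning parameter that ESS-style
       methods set adaptively instead of by hand.
    """
    rng = np.random.default_rng(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        log_y = logp(x) + np.log(rng.random())   # slice height under the curve
        # stepping out: grow an interval of width w until it brackets the slice
        left = x - w * rng.random()
        right = left + w
        while logp(left) > log_y:
            left -= w
        while logp(right) > log_y:
            right += w
        # shrinkage: sample uniformly, shrinking the interval on rejection
        while True:
            x_new = rng.uniform(left, right)
            if logp(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                left = x_new
            else:
                right = x_new
        samples.append(x)
    return np.array(samples)

# Sampling a standard normal: the draws should have mean near 0, std near 1.
draws = slice_sample(lambda t: -0.5 * t * t, x0=0.0, n_samples=2000)
print(draws.mean(), draws.std())
```

A poorly chosen `w` still yields correct samples but wastes work on stepping-out or shrinkage iterations, which is why adaptive length-scale tuning matters in practice.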
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents (including all information) and is not responsible for any consequences arising from its use.