Learning to Upsample by Learning to Sample
- URL: http://arxiv.org/abs/2308.15085v1
- Date: Tue, 29 Aug 2023 07:50:11 GMT
- Title: Learning to Upsample by Learning to Sample
- Authors: Wenze Liu, Hao Lu, Hongtao Fu, Zhiguo Cao
- Abstract summary: We present DySample, an ultra-lightweight and effective dynamic upsampler.
Compared with former kernel-based dynamic upsamplers, DySample requires no customized package and has far fewer parameters.
DySample outperforms other upsamplers across five dense prediction tasks.
- Score: 19.849631293898693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present DySample, an ultra-lightweight and effective dynamic upsampler.
While impressive performance gains have been witnessed from recent kernel-based
dynamic upsamplers such as CARAFE, FADE, and SAPA, they introduce considerable
computational overhead, mostly due to the time-consuming dynamic convolution and
the additional sub-network used to generate dynamic kernels. Further, the need
for high-resolution feature guidance in FADE and SAPA limits their application
scenarios. To address these concerns, we bypass dynamic convolution and
formulate upsampling from the perspective of point sampling, which is more
resource-efficient and can be easily implemented with the standard built-in
function in PyTorch. We first showcase a naive design, and then demonstrate how
to strengthen its upsampling behavior step by step towards our new upsampler,
DySample. Compared with former kernel-based dynamic upsamplers, DySample
requires no customized CUDA package and has far fewer parameters, along with
lower FLOPs, GPU memory usage, and latency. Beyond these lightweight
characteristics, DySample
outperforms other upsamplers across five dense prediction tasks, including
semantic segmentation, object detection, instance segmentation, panoptic
segmentation, and monocular depth estimation. Code is available at
https://github.com/tiny-smart/dysample.
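The point-sampling formulation lends itself to a compact PyTorch sketch: a linear layer predicts per-point offsets, the offsets perturb a regular upsampling grid, and the built-in grid_sample resamples the feature map. The module below is an illustrative reconstruction of the naive design, not the authors' reference code (see the repository above for that); the 0.25 offset scale and the layer shapes are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NaivePointSampleUpsampler(nn.Module):
    """Illustrative dynamic upsampler via point sampling (not the official DySample code)."""
    def __init__(self, channels, scale=2):
        super().__init__()
        self.scale = scale
        # Predict a 2-D offset for each of the scale^2 sub-positions per low-res pixel.
        self.offset = nn.Conv2d(channels, 2 * scale * scale, kernel_size=1)

    def forward(self, x):
        b, _, h, w = x.shape
        s = self.scale
        # Offsets in low-res pixel units, pixel-shuffled onto the high-res grid.
        # The 0.25 scope factor is an assumption of this sketch.
        offset = F.pixel_shuffle(self.offset(x) * 0.25, s)        # (B, 2, sH, sW)
        # Regular high-res sampling grid in normalized [-1, 1] coordinates.
        ys = torch.linspace(-1, 1, s * h, device=x.device)
        xs = torch.linspace(-1, 1, s * w, device=x.device)
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack((gx, gy), dim=0).unsqueeze(0).expand(b, -1, -1, -1)
        # Convert offsets from pixel units to normalized units and perturb the grid.
        norm = torch.tensor([2.0 / w, 2.0 / h], device=x.device).view(1, 2, 1, 1)
        grid = (grid + offset * norm).permute(0, 2, 3, 1)         # (B, sH, sW, 2)
        # Standard built-in resampling; no custom CUDA package needed.
        return F.grid_sample(x, grid, mode="bilinear", align_corners=True)
```

With scale=2, a (B, C, H, W) feature map becomes (B, C, 2H, 2W), and the whole path runs through standard built-in functions.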
Related papers
- Lighten CARAFE: Dynamic Lightweight Upsampling with Guided Reassemble Kernels [18.729177307412645]
We propose a lightweight upsampling operation, termed Dynamic Lightweight Upsampling (DLU)
Experiments on several mainstream vision tasks show that our DLU achieves performance comparable to, and even better than, the original CARAFE.
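For contrast with DySample's point sampling, kernel-based upsamplers in the CARAFE family (which DLU lightens) reassemble each high-res point as a content-aware weighted sum over a low-res neighborhood. A minimal sketch of that idea follows; the kernel size and generator layout are assumptions, and this is not the official CARAFE or DLU implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KernelReassembleUpsampler(nn.Module):
    """Minimal CARAFE-style upsampler: predict per-location kernels, reassemble neighborhoods."""
    def __init__(self, channels, scale=2, k=5):
        super().__init__()
        self.scale, self.k = scale, k
        # Sub-network that generates one k*k reassembly kernel per high-res location.
        self.kernel_gen = nn.Conv2d(channels, scale * scale * k * k, kernel_size=1)

    def forward(self, x):
        b, c, h, w = x.shape
        s, k = self.scale, self.k
        # Per-location kernels on the high-res grid, normalized with softmax.
        kernels = F.pixel_shuffle(self.kernel_gen(x), s)          # (B, k*k, sH, sW)
        kernels = F.softmax(kernels, dim=1)
        # k x k low-res neighborhoods, repeated so each maps to s*s output points.
        patches = F.unfold(x, k, padding=k // 2).view(b, c, k * k, h, w)
        patches = patches.repeat_interleave(s, dim=3).repeat_interleave(s, dim=4)
        # Weighted sum over the neighborhood = a dynamic convolution per output point.
        return (patches * kernels.unsqueeze(1)).sum(dim=2)        # (B, C, sH, sW)
```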
arXiv Detail & Related papers (2024-10-29T15:35:14Z)
- Dynamic Policy-Driven Adaptive Multi-Instance Learning for Whole Slide Image Classification [26.896926631411652]
Multi-Instance Learning (MIL) has shown impressive performance for histopathology whole slide image (WSI) analysis using bags or pseudo-bags.
Existing MIL-based methods suffer from one or more of the following problems: 1) high storage and intensive pre-processing requirements for numerous instances (sampling); 2) potential over-fitting with limited knowledge to predict bag labels (feature representation); 3) pseudo-bag counts and prior biases that affect model robustness and generalizability (decision-making).
arXiv Detail & Related papers (2024-03-09T04:43:24Z)
- Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis [51.14136878142034]
Point cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models.
Existing methods for model adaptation usually update all model parameters, which is inefficient due to its high computational cost.
In this paper, we aim to study parameter-efficient transfer learning for point cloud analysis with an ideal trade-off between task performance and parameter efficiency.
arXiv Detail & Related papers (2024-03-03T08:25:04Z)
- Sample as You Infer: Predictive Coding With Langevin Dynamics [11.515490109360012]
We present a novel algorithm for parameter learning in generic deep generative models.
Our approach modifies the standard predictive coding (PC) algorithm to bring performance on par with, and even exceeding, that of standard variational auto-encoder training.
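The Langevin sampler underpinning this approach takes the generic form z <- z + (eps/2) * grad log p(z) + sqrt(eps) * noise. Below is a minimal sketch of one such update, not tied to the paper's predictive-coding objective; the step size and target density are placeholders.

```python
import torch

def langevin_step(z, log_prob, step_size=1e-2):
    """One unadjusted Langevin update: z <- z + (eps/2) * grad log p(z) + sqrt(eps) * noise."""
    z = z.detach().requires_grad_(True)
    grad = torch.autograd.grad(log_prob(z).sum(), z)[0]
    noise = torch.randn_like(z)
    return (z + 0.5 * step_size * grad + step_size ** 0.5 * noise).detach()

# Example: sampling from a standard Gaussian, where log p(z) = -||z||^2 / 2 + const.
z = torch.zeros(16, 8)
for _ in range(1000):
    z = langevin_step(z, lambda v: -0.5 * (v ** 2).sum(dim=-1))
```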
arXiv Detail & Related papers (2023-11-22T19:36:47Z)
- Leveraging Speculative Sampling and KV-Cache Optimizations Together for Generative AI using OpenVINO [0.6144680854063939]
Inference optimizations are critical for improving user experience and reducing infrastructure costs and power consumption.
In this article, we illustrate a form of dynamic execution known as speculative sampling to reduce the overall latency of text generation.
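Speculative sampling follows a draft-then-verify loop: a cheap draft model proposes several tokens, and the large target model checks them all in a single forward pass. The sketch below shows a simplified greedy-verification variant with placeholder models; it is not the OpenVINO API discussed in the paper.

```python
import torch

@torch.no_grad()
def speculative_decode_step(target, draft, tokens, k=4):
    """One speculative step with greedy verification. `target` and `draft` are
    placeholders mapping a (1, T) token tensor to (1, T, V) logits."""
    # 1) Draft model cheaply proposes k tokens autoregressively.
    proposal = tokens
    for _ in range(k):
        nxt = draft(proposal)[:, -1].argmax(-1, keepdim=True)
        proposal = torch.cat([proposal, nxt], dim=-1)
    # 2) Target model scores all proposed positions in a single forward pass.
    target_pred = target(proposal)[:, -k - 1:-1].argmax(-1)   # (1, k)
    drafted = proposal[:, -k:]
    # 3) Keep the longest prefix where draft and target agree; at the first
    #    mismatch, substitute the target's own token so progress is guaranteed.
    agree = (target_pred == drafted)[0]
    n = int(agree.cumprod(0).sum())
    return torch.cat([tokens, target_pred[:, : n + 1]], dim=-1)
```

Full speculative sampling instead accepts each draft token with probability min(1, p_target/p_draft) and resamples on rejection, which preserves the target model's output distribution exactly; the greedy variant above trades that guarantee for simplicity.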
arXiv Detail & Related papers (2023-11-08T14:08:00Z)
- On Point Affiliation in Feature Upsampling [32.28512034705838]
We introduce the notion of point affiliation into feature upsampling.
We show that an upsampled point can resort to its low-res decoder neighbors and its high-res encoder point to reason about its affiliation.
This formulation constitutes a novel, lightweight, and universal upsampling solution.
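One way to picture point affiliation is as a similarity-then-softmax step: each high-res encoder point scores its k x k low-res decoder neighbors, and the normalized scores assemble the upsampled feature. The sketch below is a rough reading of that idea, not necessarily the paper's exact formulation; the kernel size, the dot-product similarity, and the assumption that encoder and decoder features share a channel width are all mine.

```python
import torch
import torch.nn.functional as F

def affiliation_upsample(decoder_lr, encoder_hr, k=3, scale=2):
    """Upsample low-res decoder features (B, C, H, W) to (B, C, sH, sW) by letting
    each high-res encoder point (B, C, sH, sW) softly choose among its k x k
    low-res decoder neighbors."""
    b, c, h, w = decoder_lr.shape
    # k*k low-res neighbors per location, repeated onto the high-res grid.
    nbrs = F.unfold(decoder_lr, k, padding=k // 2).view(b, c, k * k, h, w)
    nbrs = nbrs.repeat_interleave(scale, 3).repeat_interleave(scale, 4)
    # Dot-product similarity between each encoder point and its candidate neighbors.
    sim = (encoder_hr.unsqueeze(2) * nbrs).sum(1) / c ** 0.5   # (B, k*k, sH, sW)
    weights = F.softmax(sim, dim=1)                            # affiliation weights
    return (nbrs * weights.unsqueeze(1)).sum(2)                # (B, C, sH, sW)
```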
arXiv Detail & Related papers (2023-07-17T01:59:14Z)
- Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models [64.49254199311137]
We propose a novel Instance-aware Dynamic Prompt Tuning (IDPT) strategy for pre-trained point cloud models.
The essence of IDPT is to develop a dynamic prompt generation module to perceive semantic prior features of each point cloud instance.
In experiments, IDPT outperforms full fine-tuning in most tasks with a mere 7% of the trainable parameters.
arXiv Detail & Related papers (2023-04-14T16:03:09Z)
- Adaptive Siamese Tracking with a Compact Latent Network [219.38172719948048]
We present an intuitive view that simplifies Siamese-based trackers by converting the tracking task into a classification problem.
Under this view, we perform an in-depth analysis of these trackers through visual simulations and real tracking examples.
We apply this approach to adjust three classical Siamese-based trackers, namely SiamRPN++, SiamFC, and SiamBAN.
arXiv Detail & Related papers (2023-02-02T08:06:02Z)
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning by generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
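The interpolation recipe behind such over-samplers is classic SMOTE, whose configuration space AutoSMOTE searches over. Below is a generic NumPy sketch of that recipe, not AutoSMOTE itself; it assumes more than k distinct minority samples.

```python
import numpy as np

def smote_like_oversample(x_min, n_new, k=5, rng=np.random.default_rng(0)):
    """Generate n_new synthetic minority samples by interpolating each seed
    sample toward one of its k nearest minority-class neighbors (SMOTE-style)."""
    seeds = x_min[rng.integers(0, len(x_min), n_new)]
    # Pairwise distances from each seed to every minority sample.
    d = np.linalg.norm(seeds[:, None, :] - x_min[None, :, :], axis=-1)
    # Pick a random neighbor among the k nearest (column 0 is the seed itself).
    nn_idx = np.argsort(d, axis=1)[:, 1:k + 1]
    picks = x_min[nn_idx[np.arange(n_new), rng.integers(0, k, n_new)]]
    # Interpolate: seed + lambda * (neighbor - seed), with lambda ~ U(0, 1).
    lam = rng.random((n_new, 1))
    return seeds + lam * (picks - seeds)
```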
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
- BIMS-PU: Bi-Directional and Multi-Scale Point Cloud Upsampling [60.257912103351394]
We develop a new point cloud upsampling pipeline called BIMS-PU.
We decompose the up/downsampling procedure into several sub-steps by breaking the target sampling factor into smaller factors.
We show that our method achieves superior results to state-of-the-art approaches.
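The factor decomposition itself is simple to state: a x4 upsampler becomes two chained x2 stages. A schematic sketch follows, where the stage module is a stand-in rather than the BIMS-PU block.

```python
import torch.nn as nn

def decompose_factor(scale):
    """Split a target sampling factor into smaller sub-factors, e.g. 4 -> [2, 2]."""
    factors = []
    while scale % 2 == 0 and scale > 1:
        factors.append(2)
        scale //= 2
    if scale > 1:
        factors.append(scale)
    return factors

def build_multistep_upsampler(channels, scale, stage):
    """Chain one `stage(channels, factor)` module per sub-factor; `stage` is any
    fixed-factor upsampling block (a placeholder for the paper's unit)."""
    return nn.Sequential(*[stage(channels, f) for f in decompose_factor(scale)])
```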
arXiv Detail & Related papers (2022-06-25T13:13:37Z)
- Oops I Took A Gradient: Scalable Sampling for Discrete Distributions [53.3142984019796]
We show that our gradient-based approach outperforms generic samplers in a number of difficult settings.
We also demonstrate the use of our improved sampler for training deep energy-based models on high dimensional discrete data.
arXiv Detail & Related papers (2021-02-08T20:08:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.